
GenAI in Cybersecurity: Insights Beyond the Verizon DBIR

The lack of abundant data on AI-enabled attacks in official reports shouldn’t prevent us from preparing for and mitigating potential future threats.

DARKReading


COMMENTARY

The Verizon “Data Breach Investigations Report” (DBIR) is a highly credible annual report that provides valuable insights into data breaches and cyber threats, based on analysis of real-world incidents. Cybersecurity professionals rely on it to inform their security strategies as the threat landscape evolves. However, the 2024 DBIR has raised some interesting questions, particularly regarding the role of generative AI in cyberattacks.

The DBIR Stance on Generative AI

The authors of the latest DBIR state that researchers “kept an eye out for any indications of the use of the emerging field of generative artificial intelligence (GenAI) in attacks and the potential effects of those technologies, but nothing materialized in the incident data we collected globally.”

While I have no doubt this statement is accurate based on Verizon’s specific data collection methods, it stands in stark contrast to what we are seeing in the field. The main caveat to Verizon’s blanket statement on GenAI appears in the 2024 DBIR appendix, which mentions a Secret Service investigation demonstrating that GenAI was a “critically enabling technology” for attackers who didn’t speak English.

However, at SlashNext, we’ve observed that the real impact of GenAI on cyberattacks extends well beyond this one use case. Below are six different use cases that we have seen “in the wild.”

Six Use Cases of Generative AI in Cybercrime

1. AI-Enhanced Phishing Emails

Threat researchers have observed cybercriminals sharing guides on how to use GenAI and translation tools to improve the efficacy of phishing emails. In these forums, hackers suggest using ChatGPT to generate professional-sounding emails and offer tips to help non-native speakers craft more convincing messages. Phishing is already one of the most prolific attack types, and, according to Verizon’s own DBIR, it takes only 21 seconds, on average, for a user to click a malicious link once a phishing email is opened, and only another 28 seconds for the user to give away their data. Attackers’ use of GenAI to craft phishing emails only makes these attacks more convincing and effective.

2. AI-Assisted Malware Generation

Attackers are exploring the use of AI to develop malware, such as keyloggers that can operate undetected in the background. They are asking WormGPT, a large language model (LLM) marketed on cybercrime forums, to help them write keyloggers in Python. This demonstrates how cybercriminals are leveraging AI tools to streamline and enhance their malicious activities. By using AI to assist with coding, attackers can potentially create more sophisticated, harder-to-detect malware.

3. AI-Generated Scam Websites

Cybercriminals are using neural networks to create a series of scam webpages, or “turnkey doorways,” designed to redirect unsuspecting victims to fraudulent websites. These AI-generated pages often mimic legitimate sites but contain hidden malicious elements. By leveraging neural networks, attackers can rapidly produce large numbers of convincing fake pages, each slightly different to evade detection. This automated approach allows cybercriminals to cast a wider net, potentially ensnaring more victims in their phishing schemes.

4. Deepfakes for Account Verification Bypass

SlashNext threat researchers have observed vendors on the Dark Web offering services that create deepfakes to bypass account verification processes for banks and cryptocurrency exchanges. These are used to circumvent “know your customer” (KYC) guidelines. This alarming trend shows how AI-generated deepfakes are evolving beyond social engineering and misinformation campaigns into tools for financial fraud. Criminals are using advanced AI to create realistic video and audio impersonations, fooling security systems that rely on biometric verification.

5. AI-Powered Voice Spoofing

Cybercriminals are sharing information on how to use AI to spoof and clone voices for use in various cybercrimes. This emerging threat leverages advanced machine-learning algorithms to recreate human voices with startling accuracy. Attackers can potentially use these AI-generated voice clones to impersonate executives, family members, or authority figures in social engineering attacks. For instance, they might make fraudulent phone calls to authorize fund transfers, bypass voice-based security systems, or manipulate victims into revealing sensitive information.

6. AI-Enhanced One-Time Password Bots

AI is being integrated into one-time password (OTP) bots to create templates for voice phishing. These sophisticated tools include features like custom voices, spoofed caller IDs, and interactive voice response systems. The custom voice feature allows criminals to mimic trusted entities or even specific individuals, while spoofed caller IDs lend further credibility to the scam. The interactive voice response systems add an extra layer of realism, making the fake calls nearly indistinguishable from legitimate ones. This AI-powered approach not only increases the success rate of phishing attempts but also makes it more challenging for security systems and individuals to detect and prevent such attacks.

While I agree with the DBIR that there is a lot of hype surrounding AI in cybersecurity, it’s crucial not to dismiss the potential impact of generative AI on the threat landscape. The anecdotal evidence presented above demonstrates that cybercriminals are actively exploring and implementing AI-powered attack methods.

Looking Ahead

Organizations must take a proactive stance on AI in cybersecurity. Even if the volume of AI-enabled attacks is currently low in official datasets, our anecdotal evidence suggests that the threat is real and growing. Moving forward, it’s essential to do the following:

  • Stay informed about the latest developments in AI and cybersecurity

  • Invest in AI-powered security solutions that can demonstrate clear benefits

  • Continuously evaluate and improve security processes to address evolving threats

  • Be vigilant about emerging attack vectors that leverage AI technologies

While we respect the findings of the DBIR, we believe that the lack of abundant data on AI-enabled attacks in official reports shouldn’t prevent us from preparing for and mitigating potential future threats — particularly since GenAI technologies have become widely available only within the past two years. The anecdotal evidence we’ve presented underscores the need for continued vigilance and proactive measures.

About the Author

Field CTO, SlashNext

J. Stephen Kowski is an experienced global technical leader with a demonstrated history in the cybersecurity, design, engineering, information technology, and manufacturing industries. In his current role as field chief technology officer (CTO) of SlashNext, he provides sales enablement for internal and external partners, and he represents the “Voice of the Customer” to senior leadership and product and engineering teams. Stephen has a background in information technology, web development, engineering, robotics, sales, and software development. He is a graduate of the Stetson University College of Law, the University of South Carolina College of Engineering and Computing, and the University of South Florida College of Engineering. When he isn’t helping companies fight threat actors, he helps run two K-12 education charities in his community that use competitive high school robotics to get kids excited about STEM career pathways; he was one of the charities’ original founders 20 years ago.
