AI Chatbot Fools Scammers & Scores Money-Laundering Intel

Experiment demonstrates how AI can turn the tables on cybercriminals, capturing details of the bank accounts scammers use to move stolen funds around the world.

DARKReading

Responding to scammers’ emails and text messages has typically been fodder for threat researchers, YouTube stunts, and even comedians.

Yet one experiment that used conversational AI to answer spam messages and engage fraudsters has shown that large language models (LLMs) can interact with cybercriminals and glean threat intelligence by diving down the rabbit hole of financial fraud, an effort that usually requires a human threat analyst.

Over the past two years, researchers at UK-based fraud-defense firm Netcraft used a chatbot based on OpenAI’s ChatGPT to respond to scams and convince cybercriminals to part with sensitive information: specifically, account numbers used to transfer stolen money at more than 600 financial institutions across 73 countries.

Overall, the technique allows threat analysts to extract more details about the infrastructure used by cybercriminals to con people out of their money, says Robert Duncan, vice president of product strategy for Netcraft.

“We’re effectively using AI to emulate a victim, so we play along with the scam to get to the ultimate goal, which typically [for the scammer] is to receive money in some form,” he says. “It’s proven remarkably robust at adapting to different types of criminal activity … changing behavior between something like a romance scam, which might last months, [and] advanced fee fraud — where you get to the end of it very quickly.”
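Netcraft has not published its implementation, but the general pattern is easy to sketch. Below is a minimal, hypothetical illustration in Python using OpenAI’s chat-completions API: a system prompt keeps the model in character as a victim, and each incoming scammer message is appended to the history to generate the next reply. The persona text, model name, and function names are assumptions for illustration, not details from Netcraft’s system.

```python
# Minimal sketch of a victim-persona responder. Assumes the OpenAI
# Python SDK (pip install openai) and OPENAI_API_KEY in the environment.
# Nothing here is taken from Netcraft's actual implementation.
from openai import OpenAI

client = OpenAI()

# The system prompt keeps the model in character as a plausible victim.
VICTIM_PERSONA = (
    "You are playing a retired teacher in the UK who has received an "
    "unsolicited message. Reply believably, ask clarifying questions, "
    "and keep the sender engaged for as long as possible."
)

def next_victim_reply(conversation: list[dict]) -> str:
    """Generate the persona's next reply in an ongoing scam exchange."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[{"role": "system", "content": VICTIM_PERSONA}, *conversation],
    )
    return response.choices[0].message.content

# The scammer's opening message becomes the first "user" turn.
history = [{"role": "user", "content": "Dear friend, I write to you about an inheritance..."}]
print(next_victim_reply(history))
```

Each scammer reply would be appended to the history as another user turn, and each generated response as an assistant turn, letting the model sustain the back-and-forth over weeks or months.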

As international fraud rings profit from scams, especially romance and investment fraud run out of cyber-scam centers in Southeast Asia, defenders are searching for ways to expose cybercriminals’ financial and infrastructure components and shut them down. Countries such as the United Arab Emirates have embarked on partnerships to develop AI in ways that can improve cybersecurity. Using AI chatbots could shift the technological advantage from attackers back to defenders, a form of proactive cyber defense.

Personas With Local Languages

Netcraft’s research shows that AI chatbots could help curb cybercrime by forcing cybercriminals to work harder. Currently, cybercriminals and fraudsters use mass email and text-messaging campaigns to cast a wide net, hoping to catch a few credulous victims from whom to steal money.

The two-year research project uncovered thousands of accounts linked to fraudsters. While Duncan would not reveal the names of the banks, the scammers’ accounts were mainly in the United States and the United Kingdom, likely because the personas donned by the AI chatbots were from those regions as well. Financial fraud works better when using bank accounts in the same country as the victim, he says.

The company is already seeing that distribution change, however, as it adds more languages to its chatbot’s capabilities.

“When we spin up some new personas in Italy, we’re now seeing more Italian accounts coming in, so it’s really a function of where we’re running these personas and what language we’re having them speak in,” he says.
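In practical terms, localizing the operation can be as simple as parameterizing each persona’s region and language before building its system prompt. The structure below is a hypothetical sketch of that idea; the names, fields, and prompt wording are invented for illustration, since Netcraft has not described how its personas are defined.

```python
# Hypothetical persona definitions; field names and prompt wording are
# assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class Persona:
    name: str
    region: str
    language: str

    def system_prompt(self) -> str:
        # Pinning the persona's language and region nudges scammers toward
        # local bank accounts, the pattern Duncan describes above.
        return (
            f"You are {self.name}, an ordinary person living in {self.region}. "
            f"Reply only in {self.language} and keep the sender engaged."
        )

personas = [
    Persona("Margaret Hughes", "the United Kingdom", "English"),
    Persona("Lucia Moretti", "Italy", "Italian"),
]

for persona in personas:
    print(persona.system_prompt())
```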

The promise of using AI chatbots to engage with scammers and cybercriminals is that machines can conduct such conversations at scale. Netcraft has bet on the technology as a way to acquire threat intelligence that would not otherwise be available, announcing its Conversational Scam Intelligence service at the RSA Conference in May.

AI on AI

Typically, scammers try to convince victims to pay with cryptocurrency or gift cards, but they eventually hand over bank account information, according to Netcraft. The goal in using an AI chatbot is to keep the conversation going long enough to reach those milestones. In the average conversation, the cybercriminal sends 32 messages and the chatbot issues 15 replies.

When the AI chatbot system succeeds, it can harvest important threat data from cybercriminals. In one case, a scammer promising an inheritance of $5 million to the “victim” sent information on 17 different accounts at 12 different banks in an attempt to complete the transfer of an initial fee. Other fraudsters have impersonated specific banks, such as Deutsche Bank and the Central Bank of Nigeria, to convince the “victim” to transfer money. The chatbot duly collected all the information.
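Pulling those identifiers out of free-form chat is straightforward to automate. The snippet below is a rough, assumed illustration of one way to flag candidate bank details in a scammer’s message; a production system would validate IBAN check digits and cover far more national formats than these deliberately loose patterns.

```python
# Hedged sketch: flag candidate bank identifiers in scammer messages.
# The regexes are illustrative, not production-grade.
import re

# IBAN shape: two letters, two check digits, then 11-30 alphanumerics.
IBAN_RE = re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b")
# Bare account numbers: 8-17 digits, a common US range.
ACCOUNT_RE = re.compile(r"\b\d{8,17}\b")

def extract_accounts(message: str) -> dict[str, list[str]]:
    """Return candidate IBANs and plain account numbers found in a message."""
    return {
        "ibans": IBAN_RE.findall(message),
        "account_numbers": ACCOUNT_RE.findall(message),
    }

msg = "Send the release fee to IBAN GB29NWBK60161331926819 or account 12345678."
print(extract_accounts(msg))
# {'ibans': ['GB29NWBK60161331926819'], 'account_numbers': ['12345678']}
```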

While Netcraft’s current focus with the experiment is gaining in-depth threat intelligence, the platform could be operationalized to engage fraudsters at a much larger scale, flipping the current asymmetry between attackers and defenders. Rather than attackers using automation to increase the workload on defenders, a conversational system could engage cybercriminals en masse, forcing them to figure out which conversations are real and which are not.

Such an approach holds promise, especially since attackers are starting to adopt AI in new ways as well, Duncan says.

“We’ve definitely seen indicators that attackers are sending texts that resemble the type of texts that ChatGPT puts out,” he says. “Again, it’s very hard to be certain, but we would be very surprised if we weren’t already talking back to AI, and essentially we have an AI-on-AI conversation.”
