
Meta Finally Breaks Its Silence on Pig Butchering

The company gave details for the first time on its approach to combating organized criminal networks behind the devastating scams.

Wired

Years into the escalating, multibillion-dollar global crisis of pig butchering scams, social media giant Meta released information for the first time on Thursday about its approach to combating the forced-labor compounds that fuel scam activity on its platforms and across the web.

The company says that for more than two years it has been focused on collaborating with global law enforcement and other tech companies to address the underlying problem of organized crime syndicates driving scam activity in Southeast Asia and the United Arab Emirates.

Meta says that so far this year it has taken down more than 2 million accounts connected to scam compounds in Myanmar, Laos, Cambodia, the Philippines, and the UAE. The company has also been collaborating with external experts, including tech companies, NGOs, and coalitions working to counter online scams. But as pig butchering scams generate significant revenue for criminals and spread around the world, Meta notes that it has focused on working with law enforcement to directly track criminal syndicates.

“This is a highly adversarial space where we expect well-resourced and persistent criminal organizations to constantly evolve their tactics (both online and offline) in response to detection and enforcement to try and reconstitute across the internet,” a Meta spokesperson wrote in a statement. The company declined to share how many accounts it had removed prior to this year.

Longtime pig butchering researchers say that Meta has been slow to publicly and directly acknowledge the problem and the role its many platforms play in connecting scammers with potential victims. They emphasize that Meta’s services are far from the only platforms that scammers use to reach victims. But platforms like Facebook and Instagram are recognizable and trusted around the world, so it is inevitable, the researchers say, that scammers will gravitate to them. And Meta has warned its users broadly about investment and romance scams.

“I’m glad that Meta is finally starting to talk about this work, but in the research community, we feel like we’ve been trying to get their attention for a long time and collaborate with them and they often aren’t engaging with us,” says Ronnie Tokazowski, a longtime pig butchering researcher and cofounder of the nonprofit Intelligence for Good.

Since roughly 2020, when the earliest pig butchering scams started to emerge, more than 200,000 people have been trafficked and held in compounds—mostly in Myanmar, Cambodia, and Laos—where they are forced to play the role of an online scammer. If they refuse, the criminals who own the scam compounds, which are typically connected to Chinese organized crime, often beat or torture them. People have been trafficked from more than 60 countries around the world—often after seeing online ads promising them jobs that are too good to be true.

The forced scammers are compelled to send thousands of online messages to potential victims around the world on a daily basis. They are tasked with building relationships, often with the lure of friendship or romance, and eventually persuading their victims to send money as part of lucrative “investment opportunities.” Individually, victims have lost hundreds of thousands of dollars, while the pig butchering criminal enterprises have collectively conned people out of around $75 billion in recent years.

“These scams can start on dating apps, text message, email, social media, or messaging apps, then ultimately move to scammer-controlled accounts on crypto apps or scam websites masquerading as investment platforms,” Meta writes in its report. “In addition to disrupting scam centers, teams across Meta are constantly rolling out new product features to help protect people on our apps from known scam tactics at scale.”

Pig butchering scams drive toward financial theft, but they start with either one-to-one cold communication between scammers and potential victims or contact that originates from social media groups or other communal forums. For example, Gary Warner, director of intelligence at the cybersecurity firm DarkTower, says that he tracks thousands of Facebook groups dedicated to luring people into cryptocurrency investment scams as well as groups that purport to be community dating resources where scammers are lurking.

Online moderation of scammers is a difficult and longstanding issue for Big Tech. As with many types of inauthentic content, some pig butchering activity can skirt tech company standards, even when companies carry out large numbers of account takedowns, because the content isn’t explicit enough to meet the criteria for removal.

"So much of what is on platform is clearly the prelude to pig butchering, but Meta says it ‘doesn’t violate community standards,’” Warner says.

Meta emphasizes, though, that in addition to its account takedowns and monitoring for scam activity on its platforms, the company is focused on working to directly combat scam compounds using its policies around dangerous organizations and individuals, as well as wider safety policies.

However, the owners of scam compounds have been quick to adopt new technologies into their operations, partly to make their scamming more efficient, but likely also to evade law enforcement and technology clampdowns. In recent months, researchers have spotted pig butchering scammers using artificial intelligence tools, integrating deepfakes into their campaigns, and using malware to expand their capabilities. For example, scammers are now able to easily generate understandable content in many languages using AI translation tools for both the scripts and messages they send potential victims, as well as job advertisements luring prospective workers into scam compounds.

Meta says that one of its recent actions against a scam compound, which was targeting Japanese and Chinese speakers, followed a tip from OpenAI threat researchers who had spotted the criminal operation using ChatGPT to translate messages that could be used in pig butchering.

A “cluster” of accounts that all appeared to come from Cambodia was spotted translating and generating comments using ChatGPT, OpenAI spokesperson Liz Bourgeois tells WIRED. The AI-generated comments were shared on social media and appeared to be linked to scam activity. Bourgeois says OpenAI banned the accounts and reported the social media activity to Meta.
