LLM Hijackers Quickly Incorporate DeepSeek API Keys
The secret use of other people’s generative AI platforms, wherein hijackers gain unauthorized access to an LLM while someone else foots the bill, is getting quicker and stealthier by the month.
Sophisticated “LLMjacking” operations have obtained stolen access to DeepSeek models, just weeks after their public release.
LLMjacking, like proxyjacking and cryptojacking, involves the illicit use of someone else’s computing resources for one’s own purposes. In this case, it’s individuals using popular and otherwise expensive large language models (LLMs) from OpenAI, Anthropic, etc., to generate images, circumvent national bans, and more, while passing the bill along to someone else.
Most recently, researchers from Sysdig observed hyperactive LLMjacking operations integrating access to models developed by DeepSeek. After the company released its DeepSeek-V3 model on Dec. 26, LLMjackers needed only a few days to obtain stolen access. Similarly, DeepSeek-R1 was released on Jan. 20, and attackers had it in hand the very next day.
“This isn’t just a fad anymore,” Sysdig cybersecurity strategist Crystal Morin says of LLMjacking. “This is far beyond where it was when we first discovered it last May.”
How LLMjacking Works
At scale, LLM usage can grow rather expensive. For instance, according to Sysdig’s back-of-the-envelope calculations, 24/7 usage of GPT-4 could cost an account holder north of half a million dollars (though DeepSeek, at present, is orders of magnitude less expensive).
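The report doesn't spell out the exact assumptions behind that figure, but a rough sketch shows how fast the meter runs. The pricing below reflects GPT-4's list pricing at the time; the rate limit and traffic mix are illustrative assumptions, not Sysdig's inputs:

```python
# Back-of-the-envelope LLM cost estimate. All inputs are assumptions
# for illustration; they are not Sysdig's exact figures.

INPUT_PRICE = 30.0   # USD per 1M input tokens (GPT-4 list price at the time)
OUTPUT_PRICE = 60.0  # USD per 1M output tokens

# Assume an attacker saturates a 300,000 tokens-per-minute rate limit,
# split evenly between prompt and completion tokens, around the clock.
TOKENS_PER_MIN = 300_000
daily_tokens = TOKENS_PER_MIN * 60 * 24  # 432M tokens/day

blended_price = (INPUT_PRICE + OUTPUT_PRICE) / 2  # USD per 1M tokens
daily_cost = daily_tokens / 1_000_000 * blended_price

print(f"Daily cost:  ${daily_cost:,.0f}")        # ~ $19,440
print(f"30-day cost: ${daily_cost * 30:,.0f}")   # ~ $583,200
```

Under those assumptions, a single saturated account clears half a million dollars in about a month.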
To enjoy these models without incurring their costs, attackers steal credentials for cloud services accounts, or application programming interface (API) keys associated with specific LLM apps. Then they use scripts to verify that the stolen secrets do, in fact, provide access to a desired model.
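Conceptually, that verification step is the same request a legitimate key holder would make to confirm a key still works. A minimal sketch against an OpenAI-compatible API (the default endpoint here is OpenAI's documented one; the helper name is ours):

```python
import requests

def key_is_live(api_key: str, base_url: str = "https://api.openai.com/v1") -> bool:
    """Return True if the key authenticates against the /models endpoint."""
    resp = requests.get(
        f"{base_url}/models",
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=10,
    )
    # 200 means the key is valid; 401/403 means revoked or unauthorized.
    return resp.status_code == 200
```

DeepSeek exposes an OpenAI-compatible API, which likely helped existing LLMjacking tooling absorb its keys so quickly.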
Next, they incorporate the stolen authentication information into an “OAI” reverse proxy (ORP). The ORP sits between users and the LLM, so requests reach the model under the stolen credentials while the people driving them stay hidden behind a layer of operational security.
The apparent forefather of ORPs, from which the name derives, was published on April 11, 2023. It has since been forked and reconfigured many times to add new stealth features. Newer versions have incorporated password protection and obfuscation mechanisms, like rendering the proxy's website illegible until users disable CSS in their browsers, and have eliminated prompt logging, covering attackers' footsteps as they use the models. The proxies are further shielded by Cloudflare tunnels, which generate random, temporary domains that hide the virtual private server (VPS) and IP address actually hosting the ORP.
New 4chan and Discord communities have flourished around ORPs, as people use the illicit LLM access to generate NSFW imagery and other content, scripts of varying maliciousness, or just everyday material, like essays for school. And in countries like Russia, Iran, and China, ordinary people use ORPs to circumvent national bans on ChatGPT.
The Cost of LLMjacking to Account Holders
Somebody, in the end, has to pay for all of the computing resources used to generate those NSFW images and school papers.
ORP operators don't necessarily want any one bill to run too high, or the anomalous activity will likely raise alarms for the account holder. To account for this, they build their proxies on dozens, or even hundreds, of credential sets tied to different accounts. One ORP Sysdig recorded, for example, had incorporated 55 separate DeepSeek API keys, in addition to keys for other artificial intelligence (AI) apps. With many keys across many apps, an ORP can load-balance, spreading illicit usage as thinly as possible.
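Mechanically, that load balancing can be as simple as round-robin rotation over the credential pool, so each key carries only 1/N of the traffic and no single account's footprint grows large. A minimal sketch, with hypothetical placeholder keys:

```python
from itertools import cycle

# A pool of N credentials; each request draws the next key in turn,
# so each account sees roughly 1/N of the total usage.
key_pool = cycle(["key-001", "key-002", "key-003"])  # hypothetical placeholders

def next_key() -> str:
    return next(key_pool)
```

For defenders, the implication is that per-account spend may rise only modestly, so correlating the same new traffic pattern across many accounts is often a stronger signal than any single bill.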
It doesn’t always work out this way, though.
As Morin recalls, “I spoke a little bit with a Twitter user whose personal AWS account was compromised through LLMjacking. He woke up one morning and his $2 average monthly AWS bill — he [mainly] used it for email — spiked to $730 in two or three hours.”
Nobody knows exactly how the victim's AWS credentials were swiped, but he was already on his way to racking up a $20,000-plus bill. His lucky break was having cost alerts toggled on in AWS (they aren't on by default), allowing him to spot the anomalous activity early.
“He reached out to AWS customer support and asked them what was going on, and they had no idea. He did end up shutting off his account almost immediately, but there was a delay in the reporting of the cost. It ended up being, I think, between $10,000 to $20,000 total for about half a day’s usage,” Morin says.
AWS did end up bailing out the victim. Still, Morin warns, “You can imagine what a similar attack would do on an enterprise level, considering what could happen to just a single person.”
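The cost alerts that saved him take only minutes to configure. A minimal sketch using boto3 to create a CloudWatch billing alarm; the threshold and SNS topic ARN are placeholders, and note that billing metrics are published only in us-east-1 and require "Receive Billing Alerts" to be enabled in the account's billing preferences:

```python
import boto3

# Billing metrics live only in us-east-1, and the account must have
# "Receive Billing Alerts" enabled in its billing preferences.
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="monthly-spend-alert",
    Namespace="AWS/Billing",
    MetricName="EstimatedCharges",
    Dimensions=[{"Name": "Currency", "Value": "USD"}],
    Statistic="Maximum",
    Period=21600,              # billing data updates roughly every 6 hours
    EvaluationPeriods=1,
    Threshold=50.0,            # placeholder: alert past $50 of estimated charges
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:billing-alerts"],  # placeholder ARN
)
```

AWS Budgets offers a similar guardrail with per-service granularity; either one turns a five-figure surprise into an early-morning email.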
About the Author
Nate Nelson is a writer based in New York City. He formerly worked as a reporter at Threatpost, and wrote “Malicious Life,” an award-winning Top 20 tech podcast on Apple and Spotify. Outside of Dark Reading, he also co-hosts “The Industrial Security Podcast.”