Microsoft Disrupts Storm-2139 for LLMjacking and Azure AI Exploitation
Microsoft exposes Storm-2139, a cybercrime network exploiting Azure AI via LLMjacking. Learn how stolen API keys enabled harmful content generation, and what legal action Microsoft is taking.
Microsoft has taken legal action against a cybercriminal network, known as Storm-2139, that abused its Azure AI services using stolen customer credentials. The company publicly named four individuals it says were central to the illicit operation:
- Arian Yadegarnia aka “Fiz” of Iran
- Phát Phùng Tấn aka “Asakuri” of Vietnam
- Ricky Yuen aka “cg-dot” of Hong Kong, China
- Alan Krysiak aka “Drago” of the United Kingdom
According to Microsoft’s official report, shared exclusively with Hackread.com, these individuals used various online aliases to run a scheme known as LLMjacking: hijacking Large Language Models (LLMs) by stealing API (Application Programming Interface) keys, which act as digital credentials for accessing AI services. With a stolen key, cybercriminals can manipulate an LLM into generating harmful content.
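The mechanism behind LLMjacking is worth spelling out: API keys are typically *bearer* credentials, meaning the service authenticates whoever presents the key string, with no further proof of identity. The minimal sketch below illustrates this property; the key format and header names are generic illustrations, not any specific vendor's scheme.

```python
# Minimal sketch of bearer-style API-key auth (illustrative only; the key
# format and header layout are assumptions, not a specific vendor's API).
def build_request_headers(api_key: str) -> dict:
    # The key travels in a request header: possession of the string IS access.
    return {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }

# The legitimate owner and an attacker holding a leaked copy of the same key
# produce byte-identical credentials; the service cannot tell them apart.
owner_headers = build_request_headers("sk-example-owner-key")
attacker_headers = build_request_headers("sk-example-owner-key")
print(owner_headers == attacker_headers)  # prints: True
```

This is why a key scraped from a public code repository or paste site grants the thief the same access, quotas, and billing identity as the paying customer.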
Storm-2139’s core activity involved leveraging stolen customer credentials, obtained from publicly available sources, to gain unauthorized access to AI platforms, modify the capabilities of these services, circumvent built-in safety measures, and then resell access to other malicious actors. They provided detailed instructions on how to generate illicit content, including non-consensual intimate images and sexually explicit material, often targeting celebrities.
Microsoft’s Digital Crimes Unit (DCU) initiated legal proceedings in December 2024, initially targeting ten unidentified individuals. Through subsequent investigations, they identified the key members of Storm-2139. The network operated through a structured model, with creators developing the malicious tools, providers modifying and distributing them, and users generating the abusive content.
“Storm-2139 is organized into three main categories: creators, providers, and users. Creators developed the illicit tools that enabled the abuse of AI-generated services. Providers then modified and supplied these tools to end users often with varying tiers of service and payment. Finally, users then used these tools to generate violating synthetic content,” Microsoft’s blog post revealed.
Visual Representation of Storm-2139 (Source: Microsoft)
The legal actions taken by Microsoft, including the seizure of a key website, significantly disrupted the network. Members of the group reacted with alarm, speculating in online chatter, attempting to identify one another, and even doxing Microsoft’s legal counsel, a reaction that underscores how effectively the operation was dismantled.
Microsoft employed a multi-faceted legal strategy, initiating civil litigation to disrupt the network’s operations and pursuing criminal referrals to law enforcement agencies. This approach aimed to both halt the immediate threat and establish a deterrent against future AI misuse.
The company is also working to curb the misuse of AI for generating harmful content, implementing stringent guardrails and developing new methods to protect users. It is additionally advocating for modernized criminal law to give law enforcement the tools needed to combat AI misuse.
Security experts have highlighted the importance of stronger credential protection and continuous monitoring in preventing such attacks. Rom Carmel, Co-Founder and CEO at Apono, told Hackread that companies that use AI and cloud tools for growth must limit access to sensitive data to reduce security risks.
“As organizations adopt AI tools to drive growth, they also expand their attack surface with applications holding sensitive data. To securely leverage AI and the cloud, access to sensitive systems should be restricted on a need-to-use basis, minimizing opportunities for malicious actors.”
Top/Featured Image via Pixabay/BrownMantis