New LLMjacking Attack Lets Hackers Hijack AI Models for Profit
By Deeba Ahmed

Researchers uncover a novel cyberattack scheme called “LLMjacking” exploiting stolen cloud credentials to hijack powerful AI models. This article explores the implications of attackers leveraging large language models (LLMs) for malicious purposes and offers security recommendations for the cloud and AI communities.
The world of artificial intelligence (AI) is rapidly evolving, offering incredible potential for innovation and progress. However, with great power comes great risk, and a recent discovery by the Sysdig Threat Research Team (TRT) exposes one such risk.
Reportedly, researchers have discovered a novel cyberattack scheme, dubbed LLMjacking, in which threat actors gain access to a cloud environment and attempt to access local Large Language Models (LLMs) hosted by cloud providers.
In the blog post, Sysdig security researcher Alessandro Brucato explained that cybercriminals are targeting systems running outdated software and using stolen cloud credentials, most likely obtained from compromised cloud accounts, to infiltrate the systems running the LLMs and unlock the treasure trove of their capabilities.
According to researchers, before the release of their research, attackers had already accessed LLMs across ten different AI services, including Anthropic, AWS Bedrock, Google Cloud Vertex AI, Mistral, and OpenAI.
In one case, a local Claude (v2/v3) LLM from Anthropic was targeted: the attackers breached a system running a vulnerable version of the Laravel Framework, exploiting CVE-2021-3129 to gain access to Amazon Web Services (AWS) credentials. An open-source Python script was then used to check and access the compromised accounts.
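The article does not reproduce that script, but a minimal sketch of the kind of capability probe such a tool performs might look like the following, assuming stolen AWS keys and the boto3 SDK (the function name, region default, and output messages are illustrative, not the attackers' actual code):

```python
import boto3
from botocore.exceptions import ClientError

def check_bedrock_access(access_key: str, secret_key: str, region: str = "us-east-1") -> bool:
    """Probe whether a stolen key pair can enumerate Bedrock foundation models."""
    session = boto3.Session(
        aws_access_key_id=access_key,
        aws_secret_access_key=secret_key,
        region_name=region,
    )
    bedrock = session.client("bedrock")
    try:
        # Read-only call: reveals whether the credentials reach Bedrock at all
        models = bedrock.list_foundation_models()
        print(f"Credentials valid: {len(models['modelSummaries'])} models visible")
        return True
    except ClientError as err:
        print(f"Credentials rejected or unauthorized: {err.response['Error']['Code']}")
        return False
```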
Researchers also found attackers tampering with logging settings in compromised systems, a deliberate attempt to evade detection while using the stolen LLM access that highlights the growing sophistication of these cybercriminals.
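From the defender's side, one cheap countermeasure is to periodically verify that model invocation logging has not been silently switched off. A minimal sketch, assuming AWS Bedrock and boto3 (the region and output format are placeholders):

```python
import boto3

# Check whether Bedrock model-invocation logging is still enabled, since
# attackers were observed tampering with logging settings to evade detection.
bedrock = boto3.client("bedrock", region_name="us-east-1")

config = bedrock.get_model_invocation_logging_configuration()
logging_config = config.get("loggingConfig")

if not logging_config:
    print("WARNING: model invocation logging is disabled")
else:
    # Report where invocation logs are being delivered
    destinations = {k: v for k, v in logging_config.items()
                    if k in ("cloudWatchConfig", "s3Config")}
    print("Logging destinations:", destinations)
```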
“If undiscovered, this type of attack could result in over $46,000 of LLM consumption costs per day for the victim,” Brucato noted.
But what’s the motive?
Unlike traditional hacking focused on stealing data or disrupting operations, LLMjacking seems driven by profit. However, there’s a twist: researchers believe the attackers aren’t after the data stored within the LLMs themselves. Instead, they’re aiming to sell access to the AI models’ capabilities to other criminals.
That conclusion rests on how the stolen credentials were used: no legitimate LLM queries were run during the verification phase, which only determined the credentials’ capabilities and quotas. The attackers’ key-checking script also integrates with oai-reverse-proxy, a reverse proxy server for LLM APIs, suggesting that they are providing access to the compromised accounts without exposing the underlying credentials.
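The article does not detail how capabilities can be probed without running legitimate queries, but one plausible mechanism, consistent with the researchers' description, is to send a deliberately malformed request and classify the error: a validation error implies the credentials could have invoked the model, while an access-denied error implies they could not. A sketch under those assumptions, using the Anthropic Claude v2 request format on AWS Bedrock (model ID and prompt are illustrative):

```python
import json
import boto3
from botocore.exceptions import ClientError

runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

body = json.dumps({
    "prompt": "\n\nHuman: ping\n\nAssistant:",
    "max_tokens_to_sample": -1,  # invalid on purpose: rejected before any text is generated
})

try:
    runtime.invoke_model(modelId="anthropic.claude-v2", body=body)
except ClientError as err:
    code = err.response["Error"]["Code"]
    if code == "ValidationException":
        print("Credentials CAN invoke the model (request failed validation only)")
    elif code == "AccessDeniedException":
        print("Credentials cannot invoke this model")
    else:
        print("Other failure:", code)
```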
This discovery highlights the need for a multi-pronged approach to securing AI. Sysdig recommends implementing robust vulnerability and secrets management practices, along with Cloud Security Posture Management (CSPM) or Cloud Infrastructure Entitlement Management (CIEM) solutions, to minimize permissions and prevent unauthorized access.
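Given Brucato's estimate of over $46,000 in daily LLM consumption costs for an undetected victim, usage monitoring is an obvious complement to those controls. As one hedged example, assuming Bedrock's AWS/Bedrock CloudWatch metrics and a pre-existing SNS alert topic (the ARN and threshold below are placeholders to tune for your environment), a spike alarm might look like:

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Alarm on an unusual burst of Bedrock model invocations, a possible
# sign of LLMjacking-style abuse of stolen credentials.
cloudwatch.put_metric_alarm(
    AlarmName="bedrock-invocation-spike",
    Namespace="AWS/Bedrock",
    MetricName="Invocations",
    Statistic="Sum",
    Period=300,                      # 5-minute windows
    EvaluationPeriods=1,
    Threshold=1000,                  # tune to your normal traffic
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:security-alerts"],  # hypothetical topic
)
```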
- FraudGPT Chatbot Emerges for AI-Driven Cyber Crime
- AI Generated Fake Obituary Websites Target Grieving Users
- AI-Powered Scams Fuel Global Cybercrime Surge: INTERPOL
- WormGPT – Malicious ChatGPT Alternative Empowering Crooks
- Researchers Test Zero-click Worms that Exploit Generative AI Apps