ChatGPT’s False Information Generation Enables Code Malware
By Habiba Rashid

Researchers have warned that cyber criminals may exploit ChatGPT's AI package hallucinations to spread malicious code, including malware.
In a recent study, cybersecurity researchers discovered a concerning vulnerability in ChatGPT, the popular generative artificial intelligence (AI) platform. The flaw allows attackers to exploit ChatGPT’s tendency to generate false information, particularly in the form of nonexistent code packages.
By utilizing what the researchers term “AI package hallucinations,” threat actors can create and distribute malicious code packages that developers may inadvertently download and integrate into their legitimate applications and code repositories.
The researchers, from Vulcan Cyber’s Voyager18 research team, detailed their findings in a blog post published on June 6th, 2023. They highlighted the risks posed to the software supply chain, as malicious code and Trojans could easily slip into widely used applications and repositories such as npm, PyPI, GitHub, and others.
The root cause of the problem lies in ChatGPT’s reliance on outdated and potentially inaccurate training data. As a large language model (LLM), ChatGPT can generate plausible but fictional information, a phenomenon known as AI hallucination. It occurs when the model extrapolates beyond its training data, producing responses that sound convincing but are not grounded in fact.
The attack technique involves posing coding-related questions to ChatGPT, which then recommends code packages. Attackers exploit the platform’s tendency to suggest unpublished or nonexistent packages, publish their own malicious packages under those names, and wait for ChatGPT to recommend them to unsuspecting developers. As a result, developers may unknowingly install these malicious packages, introducing significant risks into their software supply chain.
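To illustrate the defensive side of this technique, the sketch below (my own illustration, not code from Vulcan Cyber’s research; the suggested package names and the `exists_on_pypi` helper are hypothetical) checks whether package names suggested by an AI assistant actually resolve on PyPI before anything is installed — a hallucinated name simply returns a 404.

```python
import urllib.error
import urllib.request


def exists_on_pypi(package_name: str) -> bool:
    """Return True if a project with this name has ever been published on PyPI.

    Uses PyPI's public JSON endpoint; a 404 means no such project exists.
    """
    url = f"https://pypi.org/pypi/{package_name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False


# Hypothetical names an AI assistant might suggest for a coding question.
suggested = ["requests", "arangodb-admin-toolkit"]
for name in suggested:
    verdict = "exists on PyPI" if exists_on_pypi(name) else "NOT on PyPI - do not install blindly"
    print(f"{name}: {verdict}")
```

A name that does not resolve today is exactly the kind of gap an attacker can later fill with a malicious package, so a failed lookup is a reason for caution rather than reassurance.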
To demonstrate the severity of the issue, the researchers conducted a proof-of-concept simulation using ChatGPT 3.5. They engaged in a conversation with the platform, asking for a package to solve a coding problem. ChatGPT responded with multiple package recommendations, some of which were nonexistent.
The researchers then published their own malicious package under the nonexistent name ChatGPT had recommended. When another user later posed a similar question, ChatGPT suggested the newly created package, which could have led to its installation and potential harm.
A demonstration video shared by Vulcan Cyber is available in the original post.
The research team also provided recommendations on how developers can identify and mitigate these risks. They advised developers to vet the packages they download by scrutinizing factors such as creation date, download count, comments, stars, and any associated notes.
Developers are urged to exercise caution, especially when recommendations come from AI platforms rather than trusted sources within the community.
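As a concrete example of that advice, here is a minimal sketch (my own illustration, not code from the researchers) that pulls a package’s release history from PyPI’s JSON API and flags projects that are very new or have almost no release history — two of the signals, alongside download counts, comments, and stars, that the researchers suggest scrutinizing. The 90-day threshold is an arbitrary assumption for the example.

```python
import json
import urllib.request
from datetime import datetime, timezone


def release_history(package_name: str) -> list[datetime]:
    """Return the upload timestamps of every release file published for a package."""
    url = f"https://pypi.org/pypi/{package_name}/json"
    with urllib.request.urlopen(url, timeout=10) as resp:
        data = json.load(resp)
    uploads = []
    for files in data.get("releases", {}).values():
        for f in files:
            # upload_time_iso_8601 looks like "2023-06-06T12:34:56.000000Z"
            iso = f["upload_time_iso_8601"].replace("Z", "+00:00")
            uploads.append(datetime.fromisoformat(iso))
    return sorted(uploads)


def looks_suspicious(package_name: str, min_age_days: int = 90) -> bool:
    """Flag packages that are brand new or have almost no release history."""
    uploads = release_history(package_name)
    if not uploads:
        return True  # no files ever uploaded
    age_days = (datetime.now(timezone.utc) - uploads[0]).days
    return age_days < min_age_days or len(uploads) < 2


print(looks_suspicious("requests"))  # False: long, well-established history
```

A check like this is only a heuristic; it complements, rather than replaces, reviewing the project page and sticking to packages recommended by trusted community sources.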
This discovery adds to a growing list of security risks associated with ChatGPT. As the platform gained widespread adoption, threat actors seized the opportunity to exploit its capabilities for various malicious activities. From malware development and phishing campaigns to credential theft, the rise of generative AI platforms like ChatGPT has attracted both legitimate users and malicious actors.
RELATED ARTICLES
- ARMO integrates ChatGPT to secure Kubernetes
- OpenAI ChatGPT Bug Bounty – Earn $200 to $20k
- Fake ChatGPT Extension Hijacks Facebook Accounts
- DarkBERT AI: Bringing Cybersecurity to the Dark Web
- Polymorphic Blackmamba malware created with ChatGPT