Hackers Exploiting OpenAI’s ChatGPT to Deploy Malware

By Waqas

Hackers are using ChatGPT to develop powerful hacking tools and to create new chatbots designed to mimic young girls to lure targets, claims Check Point.



Cybercriminals have a new trick up their sleeve: abusing ChatGPT, OpenAI's artificially intelligent (AI) chatbot. ChatGPT has surged in popularity since its launch at the end of November 2022, so naturally, scammers are eyeing it for exploitation.

ChatGPT Abused to Build Hacking Tools

According to a new report from Israeli security firm Check Point, hackers are using ChatGPT to develop powerful hacking tools and create new chatbots designed to mimic young girls to lure targets.

ChatGPT can also generate malicious software, including code that monitors users' keystrokes, and even ransomware. For context, ChatGPT was developed by OpenAI as an interface for its large language model (LLM).

However, cybercriminals have figured out how to turn it into a threat to the cyber world, since its code-generation capability can easily help threat actors launch cyberattacks.
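
To illustrate how low the barrier to automated code generation is, here is a minimal sketch that asks OpenAI's model for a harmless snippet (a folder-zipping helper) through the official Python client. This is an assumption for illustration only: the abuse described in the report went through the ChatGPT web interface, and the client library usage, model name, and prompt shown here are not taken from Check Point's findings.

```python
# A minimal sketch, not from the Check Point report: it uses OpenAI's
# official Python client (openai>=1.0) and assumes an OPENAI_API_KEY
# environment variable. The model name and prompt are placeholders.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model choice
    messages=[
        {
            "role": "user",
            "content": "Write a Python function that zips a folder into an archive.",
        }
    ],
)

# The model replies with ready-to-use source code as plain text.
print(response.choices[0].message.content)
```

A single prompt like this returns ready-to-paste source code, and that convenience is exactly what the report says threat actors are leaning on.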

Separately, Hold Security founder Alex Holden stated that he has observed dating scammers exploiting ChatGPT to create convincing personas. Scammers are building female personas designed to win trust and sustain lengthier conversations with their targets.

Potential Dangers

Check Point Research explained in its blog post that an attacker could create an authentic-looking spear-phishing email that delivers a reverse shell capable of accepting commands in English.

Posts on many underground hacking forums show cybercriminals, even those with no development skills, using OpenAI to develop malicious tools. In one of the posts Check Point reviewed, a hacker shared Android malware code written by ChatGPT that could steal desired files, compress them, and leak them on the web.

One user described abusing ChatGPT to code up features of a dark web marketplace, similar to Silk Road and AlphaBay.

Another tool posted on the forum could be used to install a backdoor on a device and upload further malware onto the compromised computer. Similarly, one user shared Python code capable of encrypting files, written with the help of the OpenAI app.

Image: a posting related to ChatGPT on a popular hacker forum.

Check Point researchers observed that this was the first tool its author had ever created. Such code could be put to malicious use and modified to encrypt a machine without any user interaction, just like ransomware.

Scammers can also use ChatGPT to build bots and sites that trick users into sharing their information, and to launch highly targeted social engineering scams and phishing campaigns.

OpenAI is yet to respond to these findings.

