Russian Hackers Eager to Bypass OpenAI’s Restrictions to Abuse ChatGPT
By Waqas
One of the threat actors inquired about the best way to use a stolen payment card to purchase an upgraded account on OpenAI.
Russian hacker forums have been flooded with queries about how to bypass OpenAI’s restrictions and exploit ChatGPT for spreading malware and other day-to-day criminal operations.
Hackread.com earlier reported how hackers are abusing ChatGPT to deploy malware. Now, according to Check Point Research (CPR), Russian hackers are trying to bypass OpenAI’s restrictions in order to use ChatGPT for malicious purposes. ChatGPT is an OpenAI-developed chatbot launched in November 2022.
An article published on a Russian hacker forum (Image shared with Hackread.com by Check Point)
It is built on the GPT-3 family of large language models and has been fine-tuned with reinforcement and supervised learning techniques. Its core function is mimicking human conversation, but the chatbot is highly versatile: it can write and debug computer programs; compose music, fairy tales, teleplays, and even essays; answer test questions; write poetry and song lyrics; and emulate a Linux system.
Russian Hackers and ChatGPT
Given its versatility, Russian hackers are looking to bypass OpenAI’s API restrictions to access the chatbot. Check Point researchers identified discussions on underground hacking forums in which threat actors shared details on how to circumvent the IP address, phone number, and payment card checks that otherwise prevent the platform from being used from Russia.
Sergey Shykevich, Threat Intelligence Group Manager at Check Point Software Technologies, said that bypassing OpenAI’s country-specific restrictions to access ChatGPT is not particularly challenging.
“Right now, we are seeing Russian hackers already discussing and checking how to get past the geofencing to use ChatGPT for their malicious purposes. We believe these hackers are most likely trying to implement and test ChatGPT into their day-to-day criminal operations.”
CPR has shared screenshots of these discussions. In one of the threads, a threat actor inquired about the best way to use a stolen payment card to purchase an upgraded account on OpenAI.
(Image shared with Hackread.com by Check Point)
In another thread, a cybercriminal asks how to bypass the platform’s geo-controls. The third screenshot shows a Russian-language tutorial, shared on the hacking forums, covering semi-legal online SMS services and how to use them to register for ChatGPT.
(Image shared with Hackread.com by Check Point)
“Cybercriminals are growing more and more interested in ChatGPT because the AI technology behind it can make a hacker more cost-efficient,” Shykevich added.
What if cybercriminals use AI?
Artificial Intelligence (AI) has been a boon to cybersecurity companies and countless businesses, allowing them to automate processes and optimize their performance. But what if this powerful technology was used for nefarious purposes? Recent reports suggest that advances in AI are making it easier for cybercriminals to launch attacks on networks and steal valuable data.
Cybercriminals have long relied on automation tools to hide their activities from detection, but with the help of AI, they can take these tactics one step further. For example, attackers can use deep learning algorithms and natural language processing techniques to create more convincing phishing emails that can be used as part of larger schemes.
In addition, AI-based malware is becoming increasingly sophisticated, enabling it to evade traditional security measures like anti-virus software and firewalls.
I am a UK-based cybersecurity journalist with a passion for covering the latest happenings in the cybersecurity and tech world. I am also into gaming, reading, and investigative journalism.