Security
Headlines
HeadlinesLatestCVEs

Headline

New EmailGPT Flaw Puts User Data at Risk: Remove the Extension NOW

Synopsys warns of a new prompt injection hack involving a security vulnerability in EmailGPT, a popular AI email…

HackRead
#vulnerability#web#google#dos#intel#chrome

Synopsys warns of a new prompt injection hack involving a security vulnerability in EmailGPT, a popular AI email assistant. Learn how a security flaw in EmailGPT could allow hackers to steal your data or cause financial losses. Discover what you can do to protect yourself and why security is crucial in AI-powered tools.

A security vulnerability discovered by Synopsys’ Cybersecurity Research Center (CyRC) has exposed a potential pitfall for users of EmailGPT, a popular AI-powered email writing assistant and Google Chrome extension.

This vulnerability, tracked as CVE-2024-5184 and dubbed prompt injection, could allow malicious threat actors to manipulate the service and potentially compromise sensitive information.

****What is EmailGPT?****

EmailGPT leverages OpenAI’s GPT models to assist users in crafting emails within Gmail. Users can receive AI-generated suggestions for composing emails more efficiently by providing prompts and context. However, the recent discovery highlights a potential security flaw in this functionality.

****The Prompt Injection Threat****

EmailGPT uses an API service that allows a malicious user to inject a direct prompt and take over the service logic. Attackers can exploit this by forcing the AI service to leak standard system prompts or execute unwanted prompts.

When submitting a malicious prompt, the system provides the requested data, making it vulnerable to exploitation by anyone having access to the service. The vulnerability has a CVSS score of 6.5, indicating medium severity.

****Possible Dangers****

A malicious user could create a prompt that injects unintended functionality, potentially leading to data extraction, spam campaigns using compromised accounts, and generating misleading email content, contributing to disinformation campaigns. Apart from causing intellectual property leakage, this vulnerability can lead to denial-of-service, and financial loss when exploited.

As per Synopsys’ blog post, researchers reached out to the developers of EmailGPT before publicizing the details, adhering to their responsible disclosure policy, but did not receive a response yet. Researchers recommend immediately removing EmailGPT from any installations as no workaround is currently available.

Users should stay informed about updates and patches to ensure continued secure use of the service. As AI technology continues to evolve, vigilance and robust security practices will be crucial to mitigate potential risks.

“The CyRC reached out to the developers but has not received a response within the 90-day timeline dictated by our responsible disclosure policy. The CyRC recommends removing the applications from networks immediately.”

Synopsys

Patrick Harr, CEO at Pleasanton, Calif.- based SlashNext Email Security commented on the issue and emphasised the importance of strong governance and security measures for AI models to prevent vulnerabilities and exploits, stating that businesses should require proof of security from AI model suppliers before integrating them into their operations.

This latest research by the Synopsys Cybersecurity Research Center further highlights the importance of strong governance on building, securing and red-teaming the AI models themselves. At its core, AI is code that can be exploited like any other code and the same processes need to be put in place to secure that code to prevent these unwanted prompt exploits, Patrik said.

Security and governance of the AI models are paramount as part of the culture and hygiene of companies building and proving the AI models either through applications or APIs. Customers particularly businesses need to demand proof of how the suppliers of these models are securing themselves including data access BEFORE they incorporate them into their business, he added.

****REALTED TOPICS** **

  1. New LLMjacking Attack Lets Hackers Hijack AI Models
  2. Flaws Exposed Hugging Face to AI Supply Chain Attacks
  3. AI Generated Fake Obituary Websites Target Grieving Users
  4. AI-Powered Scams, Human Trafficking Fuel Global Crime Surge
  5. ShadowRay Campaign Targets Ray AI Framework in Global Attack

HackRead: Latest News

Malicious Node on ComfyUI Steals Data from Crypto, Browser Users