GHSA-7735-w2jp-gvg6: Vanna prompt injection code execution
The Vanna library uses a prompt function to present the user with visualized results. It is possible to alter that prompt through prompt injection and run arbitrary Python code instead of the intended visualization code. Specifically, allowing external input to reach the library's "ask" method with "visualize" set to True (the default behavior) leads to remote code execution.
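A minimal sketch of the unsafe pattern the advisory describes: untrusted external input forwarded directly into Vanna's `ask` method with `visualize=True` (the default), so that model-generated Python plotting code runs in the host process. The Flask wrapper, model name, API key, and database path below are assumptions for illustration, not part of the advisory.

```python
# Hypothetical web endpoint that exposes the vulnerable pattern:
# attacker-controlled text flows straight into vn.ask() with
# visualization enabled, so a crafted "question" can steer the LLM
# into emitting arbitrary Python that the visualization step executes.

from flask import Flask, request
from vanna.remote import VannaDefault

app = Flask(__name__)

# Hypothetical model name, API key, and database for the sketch.
vn = VannaDefault(model="my-model", api_key="MY_VANNA_API_KEY")
vn.connect_to_sqlite("my-database.sqlite")

@app.route("/ask")
def ask():
    question = request.args.get("q", "")
    # UNSAFE: external input reaches the LLM prompt; visualize defaults
    # to True, so the generated plotting code is run on the server.
    result = vn.ask(question=question, visualize=True)
    return str(result)
```

Under this assumption, the mitigation is to keep untrusted input away from `ask` or to disable the visualization step (`visualize=False`) so model-generated code is never executed.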
High severity · GitHub Reviewed · Published May 31, 2024 to the GitHub Advisory Database · Updated Jun 4, 2024
Related news
Cybersecurity researchers have disclosed a high-severity security flaw in the Vanna.AI library that could be exploited to achieve remote code execution via prompt injection techniques. The vulnerability, tracked as CVE-2024-5565 (CVSS score: 8.1), relates to a case of prompt injection in the "ask" function that could be exploited to trick the library into executing arbitrary commands.