Headline
GHSA-fprp-p869-w6q2: LangChain vulnerable to code injection
In LangChain through 0.0.131, the LLMMathChain
chain allows prompt injection attacks that can execute arbitrary code via the Python exec()
method.
LangChain vulnerable to code injection
High severity GitHub Reviewed Published Apr 5, 2023 to the GitHub Advisory Database • Updated Apr 5, 2023
Related news
CVE-2023-29374
In LangChain through 0.0.131, the LLMMathChain chain allows prompt injection attacks that can execute arbitrary code via the Python exec method.