CVE-2023-29374
In LangChain through 0.0.131, the LLMMathChain chain allows prompt injection attacks that can execute arbitrary code via the Python exec method.
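The root cause is a common anti-pattern: text generated by the model is handed to Python's `exec()` without sandboxing, so whoever controls the prompt effectively controls the interpreter. The sketch below illustrates that pattern in isolation; the function name and payload are hypothetical, and this is a simplified stand-in rather than the actual LangChain source.

```python
import contextlib
import io


def run_llm_math(llm_output: str) -> str:
    """Illustrative only: execute the Python snippet an LLM returned for a
    "math question" and capture whatever it prints."""
    buffer = io.StringIO()
    with contextlib.redirect_stdout(buffer):
        # Untrusted, model-generated code reaches exec() unsandboxed --
        # this is the flaw the advisory describes.
        exec(llm_output)
    return buffer.getvalue()


# A prompt-injection payload can steer the model into emitting arbitrary
# code instead of arithmetic, e.g. shelling out to the operating system:
injected_output = "import os; print(os.popen('id').read())"
print(run_llm_math(injected_output))
```

The practical takeaway is to never pass model output to `exec()` or `eval()`; on affected versions (through 0.0.131), avoid `LLMMathChain` with untrusted input, and otherwise upgrade past the affected releases.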
Related news
GHSA-fprp-p869-w6q2: LangChain vulnerable to code injection
In LangChain through 0.0.131, the `LLMMathChain` chain allows prompt injection attacks that can execute arbitrary code via the Python `exec()` method.