Critical RCE Vulnerability Discovered in Ollama AI Infrastructure Tool

Artificial Intelligence / Cloud Security

Cybersecurity researchers have detailed a now-patched security flaw affecting the Ollama open-source artificial intelligence (AI) infrastructure platform that could be exploited to achieve remote code execution.

Tracked as CVE-2024-37032, the vulnerability has been codenamed Probllama by cloud security firm Wiz. Following responsible disclosure on May 5, 2024, the issue was addressed in version 0.1.34 released on May 7, 2024.

Ollama is a service for packaging, deploying, and running large language models (LLMs) locally on Windows, Linux, and macOS devices.

At its core, the issue stems from insufficient input validation that results in a path traversal flaw, which an attacker could exploit to overwrite arbitrary files on the server and ultimately achieve remote code execution.

The shortcoming requires the threat actor to send specially crafted HTTP requests to the Ollama API server for successful exploitation.

It specifically takes advantage of the “/api/pull” API endpoint, which is used to download a model from the official registry or from a private repository, by supplying a malicious model manifest file that contains a path traversal payload in the digest field.
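
To make the traversal concrete, the following Go sketch (not Ollama's actual code) shows why a digest value that is never validated can escape the model blob directory once it is joined onto a filesystem path; the blob directory and the payload shape are assumptions made for illustration.

```go
// Illustrative sketch, not Ollama's actual code: an unvalidated digest that
// contains "../" sequences escapes the intended blob directory once it is
// joined onto a filesystem path. The blob directory below is an assumption
// made for demonstration purposes.
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

func main() {
	blobDir := "/root/.ollama/models/blobs" // assumed storage location

	// A well-formed digest ("sha256:" plus 64 hex digits) stays inside blobDir.
	good := "sha256:2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae"
	// A digest carrying a traversal payload resolves to a path outside it.
	evil := "../../../../etc/ld.so.preload"

	for _, digest := range []string{good, evil} {
		p := filepath.Join(blobDir, digest)
		inside := strings.HasPrefix(p, blobDir+string(filepath.Separator))
		fmt.Printf("digest=%q -> %s (inside blob dir: %v)\n", digest, p, inside)
	}
}
```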

This issue could be abused not only to corrupt arbitrary files on the system, but also to obtain remote code execution by overwriting a configuration file (“/etc/ld.so.preload”) associated with the dynamic linker (“ld.so”) so that a rogue shared library is loaded every time any program is executed.
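
For context on why that file is so dangerous: the dynamic linker loads every library listed in /etc/ld.so.preload into each dynamically linked program before it starts. A generic defensive check along the lines of the sketch below (not something taken from the Wiz write-up) can report unexpected entries.

```go
// Generic defensive sketch: the dynamic linker loads every library listed in
// /etc/ld.so.preload into each dynamically linked program before it runs,
// which is why overwriting that file yields code execution. On most systems
// the file is absent or empty.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("/etc/ld.so.preload")
	if os.IsNotExist(err) {
		fmt.Println("/etc/ld.so.preload not present (the usual state)")
		return
	}
	if err != nil {
		fmt.Println("could not read /etc/ld.so.preload:", err)
		return
	}
	entries := strings.Fields(string(data))
	if len(entries) == 0 {
		fmt.Println("/etc/ld.so.preload exists but is empty")
		return
	}
	fmt.Println("preloaded libraries found; review each entry:")
	for _, lib := range entries {
		fmt.Println("  ", lib)
	}
}
```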

While the risk of remote code execution is greatly reduced in default Linux installations because the API server binds to localhost, that's not the case with Docker deployments, where the API server is publicly exposed.

“This issue is extremely severe in Docker installations, as the server runs with `root` privileges and listens on `0.0.0.0` by default – which enables remote exploitation of this vulnerability,” security researcher Sagi Tzadik said.

Compounding matters further is Ollama's inherent lack of authentication, which allows an attacker to abuse a publicly accessible server to steal or tamper with AI models and compromise self-hosted AI inference servers.
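
Checking whether a given host exposes an unauthenticated Ollama API can be as simple as probing a documented read-only endpoint. The sketch below is illustrative; “/api/tags” (which lists local models) and port 11434 are Ollama's defaults, while the target address is a placeholder.

```go
// Hedged sketch: probe whether an Ollama API endpoint answers unauthenticated
// requests. "/api/tags" and port 11434 are Ollama's documented defaults; the
// target address below is a placeholder.
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("http://203.0.113.10:11434/api/tags") // placeholder host
	if err != nil {
		fmt.Println("no response (closed, filtered, or bound to localhost):", err)
		return
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(io.LimitReader(resp.Body, 4096))
	if resp.StatusCode == http.StatusOK {
		fmt.Println("API reachable without authentication; model listing:", string(body))
	} else {
		fmt.Println("endpoint responded with HTTP status", resp.StatusCode)
	}
}
```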

This means such services must be secured using middleware like reverse proxies with authentication. Wiz said it identified over 1,000 exposed Ollama instances hosting numerous AI models without any protection.
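
As one possible shape of that middleware, the sketch below places a small Go reverse proxy with HTTP Basic authentication in front of a localhost-bound Ollama instance; the credentials and listen address are placeholders, not values from the article.

```go
// Minimal sketch of the recommended middleware pattern: Ollama stays bound to
// 127.0.0.1:11434 and this authenticating reverse proxy is the only listener
// exposed to the network. Credentials and the listen address are placeholders.
package main

import (
	"crypto/subtle"
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	backend, err := url.Parse("http://127.0.0.1:11434") // Ollama's default local port
	if err != nil {
		log.Fatal(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(backend)

	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		user, pass, ok := r.BasicAuth()
		userOK := subtle.ConstantTimeCompare([]byte(user), []byte("ollama")) == 1
		passOK := subtle.ConstantTimeCompare([]byte(pass), []byte("change-me")) == 1
		if !ok || !userOK || !passOK {
			w.Header().Set("WWW-Authenticate", `Basic realm="ollama"`)
			http.Error(w, "unauthorized", http.StatusUnauthorized)
			return
		}
		proxy.ServeHTTP(w, r)
	})

	log.Fatal(http.ListenAndServe(":8443", handler)) // placeholder listen address
}
```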

“CVE-2024-37032 is an easy-to-exploit remote code execution that affects modern AI infrastructure,” Tzadik said. “Despite the codebase being relatively new and written in modern programming languages, classic vulnerabilities such as Path Traversal remain an issue.”

The development comes as AI security company Protect AI warned of over 60 security defects affecting various open-source AI/ML tools, including critical issues that could lead to information disclosure, access to restricted resources, privilege escalation, and complete system takeover.

The most severe of these vulnerabilities is CVE-2024-22476 (CVSS score 10.0), an SQL injection flaw in Intel Neural Compressor software that could allow attackers to download arbitrary files from the host system. It was addressed in version 2.5.0.

Related news

Critical Flaws in Ollama AI Framework Could Enable DoS, Model Theft, and Poisoning

Cybersecurity researchers have disclosed six security flaws in the Ollama artificial intelligence (AI) framework that could be exploited by a malicious actor to perform various actions, including denial-of-service, model poisoning, and model theft. "Collectively, the vulnerabilities could allow an attacker to carry out a wide range of malicious actions with a single HTTP request, including ..."

GHSA-8hqg-whrw-pv92: Ollama does not validate the format of the digest (sha256 with 64 hex digits)

Ollama before 0.1.34 does not validate the format of the digest (sha256 with 64 hex digits) when getting the model path, and thus mishandles the TestGetBlobsPath test cases such as fewer than 64 hex digits, more than 64 hex digits, or an initial ../ substring.
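
A minimal sketch of the kind of strict digest check the advisory describes (not necessarily the exact patch Ollama shipped in 0.1.34): accept only “sha256:” followed by exactly 64 lowercase hex digits, which rejects malformed lengths and traversal payloads alike.

```go
// Sketch of a strict digest check matching the advisory's description, not
// necessarily the exact fix shipped in Ollama 0.1.34: a digest is accepted
// only if it is "sha256:" followed by exactly 64 lowercase hex digits.
package main

import (
	"fmt"
	"regexp"
)

var digestPattern = regexp.MustCompile(`^sha256:[a-f0-9]{64}$`)

func validDigest(d string) bool {
	return digestPattern.MatchString(d)
}

func main() {
	tests := []string{
		"sha256:2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae", // 64 hex digits: accepted
		"sha256:2c26b46b",            // too short: rejected
		"../../../etc/ld.so.preload", // traversal payload: rejected
	}
	for _, d := range tests {
		fmt.Printf("%v\t%q\n", validDigest(d), d)
	}
}
```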