7,000 Exposed Ollama APIs Leave DeepSeek AI Models Wide Open to Attack
UpGuard discovers exposed Ollama APIs revealing DeepSeek model adoption globally. See where these AI models are running and the security risks involved.
Cybersecurity researchers at third-party risk management firm UpGuard have identified thousands of exposed Ollama APIs, which provide access to running AI models. These exposed APIs not only pose security risks for model owners but also offer a unique opportunity to gauge the adoption rate and geographic distribution of specific AI models, such as DeepSeek.
Ollama is an AI model framework that simplifies interaction with models by providing a user-friendly interface for selecting and downloading them. However, according to UpGuard's research, shared exclusively with Hackread.com ahead of its release, the API can be exposed to the public internet. Its functions to push, pull, and delete models can put data at risk, and unauthenticated users can bombard models with requests, potentially running up costs for owners of cloud computing resources. Existing vulnerabilities within Ollama could also be exploited.
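To illustrate why this matters, the sketch below shows the shape of the unauthenticated requests an attacker could send to an exposed Ollama instance, which listens on port 11434 by default: a GET to /api/tags enumerates every model on the host, and a DELETE to /api/delete removes one. The hostname is a placeholder, and the requests are only constructed here, not sent.

```python
# Sketch of unauthenticated requests an exposed Ollama API accepts.
# "example-host" is a hypothetical placeholder, not a real target;
# the requests are built for illustration but never transmitted.
import json

OLLAMA_PORT = 11434  # Ollama's default listening port


def list_models_request(host: str) -> str:
    """GET /api/tags enumerates every model installed on the instance."""
    return f"GET http://{host}:{OLLAMA_PORT}/api/tags"


def delete_model_request(host: str, model: str) -> tuple[str, str]:
    """DELETE /api/delete removes a model; no credentials are required."""
    url = f"DELETE http://{host}:{OLLAMA_PORT}/api/delete"
    body = json.dumps({"model": model})
    return url, body


print(list_models_request("example-host"))
url, body = delete_model_request("example-host", "deepseek-r1:14b")
print(url, body)
```

Because the API carries no authentication of its own, anyone who can reach the port can issue these calls, which is consistent with the tampering UpGuard observed.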
According to UpGuard’s research, there is already evidence of the exposure being exploited, with models on targeted IPs having been tampered with.
A screenshot provided by UpGuard to Hackread.com shows an Ollama instance where the models appear to have been deleted.
Researchers expressed concern that many exposed Ollama APIs are likely run by hobbyists on home or small-business internet connections or university networks, and that these systems can be easily compromised and incorporated into botnets for future attacks.
UpGuard also analyzed the distribution of DeepSeek models across different parameter sizes running on exposed Ollama APIs. The 14b and 7b options (representing models with 14 billion and 7 billion parameters) were the most commonly observed among the exposed DeepSeek models, suggesting that many users are opting for these mid-range models.
Approximately 7,000 IP addresses currently expose Ollama APIs, a rise of more than 70% in three months over the roughly 4,000 IP addresses that Laava found in a November 2024 survey. Of these exposed IPs, 700 are running some version of DeepSeek. Specifically, 334 are utilizing models from the deepseek-v2 family, while 434 are running the newer deepseek-r1 model.
Geographically, the highest concentration of IPs running DeepSeek models is located in China (171 IP addresses, representing 24.4%), followed by the U.S. (140 IP addresses, or 20%), and Germany (90 IP addresses, or 12.9%).
Furthermore, an analysis of the DeepSeek models within the US by Internet Service Provider (ISP) reveals a diverse range of providers, from large entities like Google LLC and AT&T Enterprises LLC to smaller providers and universities.
“There are really two things that surprised us in this research: how fast exposed Ollama APIs are growing, and how quickly DeepSeek has spread. Either of those could have big consequences in the future.”
Greg Pollock, Director of Research and Insights, UpGuard
It is worth noting that DeepSeek has been restricted by entities like the US Navy, the state of Texas, NASA, and Italy over concerns of possible data leakage to the Chinese government. While these concerns may not apply to open-source models, most users are unlikely to scrutinize the code, UpGuard researchers noted, while also identifying potential risks within the models themselves.
Therefore, to prevent AI data leakage, it is essential to audit one's attack surface for exposed Ollama APIs and to stay aware of which AI models and products are in use, knowledge that can be crucial in third-party risk management.
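A minimal audit along these lines can start with checking whether any of your own hosts accept connections on Ollama's default port. The sketch below does only that, using a plain TCP connection test; the host list is a placeholder, and it should be run only against infrastructure you are authorized to scan.

```python
# Minimal audit sketch: check your own hosts for a service listening
# on Ollama's default port (11434). The host list is a placeholder;
# scan only infrastructure you are authorized to test.
import socket

OLLAMA_PORT = 11434


def ollama_port_open(host: str, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to port 11434 on `host` succeeds."""
    try:
        with socket.create_connection((host, OLLAMA_PORT), timeout=timeout):
            return True
    except OSError:
        return False


my_hosts = ["127.0.0.1"]  # replace with your own inventory
for h in my_hosts:
    status = "listening" if ollama_port_open(h) else "closed"
    print(f"{h}: port {OLLAMA_PORT} {status}")
```

An open port is not proof of exposure on its own, but any host that answers from a public address warrants immediate review; by default Ollama binds only to the local loopback interface, so public exposure typically results from deliberately rebinding it to all interfaces.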