Critical Vulnerabilities in Ollama AI Framework: Risks of DoS, Model Theft, and Poisoning Attacks
The Ollama artificial intelligence (AI) framework, popular for deploying large language models (LLMs) locally on Windows, Linux, and macOS systems, has recently been found to contain several critical security vulnerabilities. Cybersecurity researchers at Oligo Security disclosed these flaws, which could allow malicious actors to execute various attacks, including denial-of-service (DoS), model poisoning, and model theft—all with a single HTTP request.
The Vulnerabilities Explained
Ollama is an open-source tool designed to help users deploy and operate LLMs on local systems. Its GitHub repository is highly active, with over 7,600 forks to date, reflecting its widespread use. However, Oligo Security's findings suggest that Ollama’s popularity could also pose a significant security risk, especially with thousands of installations potentially exposed on the public internet.
Here’s a breakdown of the six vulnerabilities identified, along with their CVE identifiers, severity scores, and potential impacts:
CVE-2024-39719 (CVSS score: 7.5)
This flaw is tied to the /api/create endpoint, allowing an attacker to determine the presence of specific files on the server.
- Status: Fixed in version 0.1.47
CVE-2024-39720 (CVSS score: 8.2)
An out-of-bounds read vulnerability in the /api/create endpoint could cause the application to crash, leading to a DoS condition.
- Status: Fixed in version 0.1.46
CVE-2024-39721 (CVSS score: 7.5)
This vulnerability enables resource exhaustion by repeatedly invoking the /api/create endpoint with /dev/random as the input file, potentially leading to a DoS state.
- Status: Fixed in version 0.1.34
CVE-2024-39722 (CVSS score: 7.5)
A path traversal vulnerability in the /api/push endpoint allows attackers to view files and directory structures on the server, compromising sensitive information.
- Status: Fixed in version 0.1.46
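As a quick sanity check against the four fixed CVEs above, the version reported by a local install (for example via `ollama --version`) can be compared against 0.1.47, the first release containing all four patches. The `check_version` helper below is a hypothetical sketch, using `sort -V` for version ordering rather than any Ollama-provided tooling:

```shell
#!/bin/sh
# First Ollama release containing fixes for all four published CVEs.
PATCHED="0.1.47"

# check_version VERSION — prints whether VERSION predates the patched release.
check_version() {
  v="$1"
  # sort -V orders version strings numerically; if the patched version
  # sorts first (or equal), the reported version is at least 0.1.47.
  if [ "$(printf '%s\n%s\n' "$PATCHED" "$v" | sort -V | head -n1)" = "$PATCHED" ]; then
    echo "$v: patched"
  else
    echo "$v: vulnerable to one or more fixed CVEs"
  fi
}

check_version "0.1.46"   # predates the fixes
check_version "0.3.14"   # includes the fixes
```

Note that this only covers the four patched CVEs; the two unpatched model-related flaws described next are not addressed by any version upgrade.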
Model Poisoning Vulnerability (Unpatched)
An unpatched vulnerability in the /api/pull endpoint can lead to model poisoning by allowing models to be pulled from untrusted sources. This can inject malicious data into the model, potentially impacting its performance and integrity.

Model Theft Vulnerability (Unpatched)
Another unpatched flaw in the /api/push endpoint could allow unauthorized users to push a model to an untrusted target, leading to possible model theft.
Mitigations and Workarounds for Unpatched Vulnerabilities
While several of these issues have been addressed in recent updates, the two model-related vulnerabilities remain unresolved. Ollama’s maintainers have recommended that users apply specific network controls, such as:
- Endpoint Filtering: Restrict which endpoints are exposed to the internet using a proxy or web application firewall (WAF).
- Port Configuration: Ensure only trusted endpoints are accessible, since Ollama's default deployment exposes all endpoints without documented separation between them.
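For single-host deployments, the simplest control is often to keep the API off the network entirely. The snippet below is a minimal configuration sketch using Ollama's documented OLLAMA_HOST listen-address variable (the default already binds to loopback, but some container and systemd setups override it to 0.0.0.0):

```shell
# Bind the Ollama API server to loopback only, so none of its endpoints
# (/api/create, /api/push, /api/pull, ...) are reachable from other hosts.
export OLLAMA_HOST=127.0.0.1:11434

# Start the server; any remote access must now pass through an explicit
# proxy or WAF layer that filters endpoints, as recommended above.
ollama serve
```

If remote access is genuinely required, a reverse proxy in front of the loopback-bound server can allowlist only the inference endpoints while blocking the create/push/pull routes implicated in these flaws.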
However, Avi Lumelsky from Oligo Security cautions that many users might be unaware of these default settings, exposing them to increased risk.
Global Exposure and Implications
According to Oligo Security, there are 9,831 unique internet-facing instances of Ollama worldwide, with most installations located in China, the U.S., Germany, South Korea, Taiwan, France, the U.K., India, Singapore, and Hong Kong. Oligo reports that one in four publicly accessible servers is vulnerable to the identified flaws, underscoring the potential scale of the threat.
Ongoing Risks and Recent Precedents
These vulnerabilities come only four months after cloud security firm Wiz discovered another serious flaw in Ollama (CVE-2024-37032), which could have allowed for remote code execution (RCE). Lumelsky emphasizes that exposing Ollama’s capabilities without secure authorization is comparable to exposing the Docker socket to the public internet, providing a dangerous level of access to an attacker who could upload files and abuse model pull/push functions.