Where does ollama-python save the pulled model?
Where does ollama-python save the model pulled with `ollama.pull('llava')`? I tried setting the environment variable OLLAMA_MODELS, as I would with the Ollama CLI, but it is not using the path provided in OLLAMA_MODELS.
A common mistake is setting the variable without exporting it (`export OLLAMA_MODELS=<value>`), I'd double-check that.
The model dir on macOS is ~/.ollama/models; it should be similar on Linux systems, and somewhere under %USERPROFILE% on Windows (I don't dev on Windows, so I can't confirm that for you).
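A quick sanity check you can run from Python (since the question uses ollama-python) is to confirm the variable is actually visible in the environment and peek at the default store; keep in mind OLLAMA_MODELS needs to be set where the Ollama server process runs, since the Python client just talks to that server. This is only a sketch using the default macOS/Linux path mentioned above:

```python
import os

# Is OLLAMA_MODELS actually exported into this environment?
print(os.environ.get("OLLAMA_MODELS", "<not set>"))

# Default store on macOS/Linux; it should contain 'blobs' and 'manifests'.
default_store = os.path.expanduser("~/.ollama/models")
if os.path.isdir(default_store):
    print(os.listdir(default_store))
```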
On macOS, the model files are stored in chunks in ~/.ollama/models/blobs, but they have sha256-prefixed file names that are not human-readable. To figure out which one corresponds to your model, first check the manifest, which looks like this for today's llama3:
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
  "config": {
    "mediaType": "application/vnd.docker.container.image.v1+json",
    "digest": "sha256:3f8eb4da87fa7a3c9da615036b0dc418d31fef2a30b115ff33562588b32c691d",
    "size": 485
  },
  "layers": [
    {
      "mediaType": "application/vnd.ollama.image.model",
      "digest": "sha256:6a0746a1ec1aef3e7ec53868f220ff6e389f6f8ef87a01d77c96807de94ca2aa",
      "size": 4661211424
    },
    {
      "mediaType": "application/vnd.ollama.image.license",
      "digest": "sha256:4fa551d4f938f68b8c1e6afa9d28befb70e3f33f75d0753248d530364aeea40f",
      "size": 12403
    },
    {
      "mediaType": "application/vnd.ollama.image.template",
      "digest": "sha256:8ab4849b038cf0abc5b1c9b8ee1443dca6b93a045c2272180d985126eb40bf6f",
      "size": 254
    },
    {
      "mediaType": "application/vnd.ollama.image.params",
      "digest": "sha256:577073ffcc6ce95b9981eacc77d1039568639e5638e83044994560d9ef82ce1b",
      "size": 110
    }
  ]
}
Note the layers item whose mediaType is application/vnd.ollama.image.model; its digest is sha256:6a0746a1ec1aef3e7ec53868f220ff6e389f6f8ef87a01d77c96807de94ca2aa. Convert the : to - and you get the corresponding file name under blobs/.
To piece this all together, if you have jq (or a similar JSON parser) installed, you can confirm this is the big model file that was previously downloaded:
# this assumes the first `layers` item represents `image.model`
du -h ~/.ollama/models/blobs/$(jq -r '.layers[0].digest' ~/.ollama/models/manifests/registry.ollama.ai/library/llama3/latest | sed s/:/-/)
4.3G /Users/yourusername/.ollama/models/blobs/sha256-6a0746a1ec1aef3e7ec53868f220ff6e389f6f8ef87a01d77c96807de94ca2aa
This matches the size reported in the manifest above (4661211424 bytes is about 4.3 GiB).
Replace llama3/latest with your own model's name and tag to do the same check for other models.
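If you'd rather do the lookup from Python (matching the ollama-python context of the question), here's a minimal sketch that reads the manifest and resolves the blob path by mediaType instead of assuming the first layer. The default model name/tag and the fallback models directory are assumptions you may need to adjust:

```python
import json
import os

def model_blob_path(name: str = "llama3", tag: str = "latest") -> str:
    """Resolve the on-disk blob file for a model by reading its manifest."""
    models_dir = os.environ.get(
        "OLLAMA_MODELS", os.path.expanduser("~/.ollama/models")
    )
    manifest_file = os.path.join(
        models_dir, "manifests", "registry.ollama.ai", "library", name, tag
    )
    with open(manifest_file) as f:
        manifest = json.load(f)

    # Pick the layer whose mediaType marks the model weights, rather than
    # assuming it is always the first entry in `layers`.
    layer = next(
        l for l in manifest["layers"]
        if l["mediaType"] == "application/vnd.ollama.image.model"
    )
    # A digest "sha256:<hex>" maps to a blob file named "sha256-<hex>".
    return os.path.join(models_dir, "blobs", layer["digest"].replace(":", "-"))

blob = model_blob_path("llama3", "latest")
print(blob, os.path.getsize(blob), "bytes")
```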
If you successfully set OLLAMA_MODELS to some other path and don't want to re-download the files, I think you can just manually mv the existing directory to your new preferred path.
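For example, something along these lines (the target path is hypothetical, and it's safest to make sure nothing is writing to the store while you move it):

```python
import os
import shutil

# Hypothetical new location; it should match whatever OLLAMA_MODELS points to.
new_models_dir = "/data/ollama/models"

shutil.move(os.path.expanduser("~/.ollama/models"), new_models_dir)
```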
hope this helps