Patrick Devine
@jakobhoeg Did you manage to get this working? @mxyng 's point is correct; maybe you were running it on a different port or a different host?
Can you try again? We had a brief outage yesterday where non-authenticated clients weren't pulling correctly.
Hey guys, sorry for the slow response. If you are moving between Windows and Linux, the problem is that the blob filenames on Linux look like `sha256:` whereas NTFS...
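If you need to move the blobs over by hand, here's a minimal sketch of the rename, run on the Linux side before copying, assuming the destination layout wants a `sha256-` prefix in place of the colon (that prefix is an assumption here, not something from the docs):

```sh
# Run from the Linux blobs directory before copying the models over.
# Assumes the destination expects 'sha256-' instead of 'sha256:' in the filename.
cd ~/.ollama/models/blobs
for f in sha256:*; do
  [ -e "$f" ] || continue   # skip if nothing matched the glob
  mv -- "$f" "${f/:/-}"     # replace the ':' with '-'
done
```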
It's here peeps. `ollama run llama3`. We had some problems with the vocabulary earlier, but it should be working now. The other quantizations are coming, as are the `text` and...
@artem-zinnatullin and @vk2r can you re-pull the image you're using? I'm wondering if you pulled before we fixed the problem earlier today.
@DennisKo can you post the logs from the server for the `POST /api/pull` request? Specifically, there should be something that looks like `downloading in X Y MB part(s)`. I just...
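If you're not sure where the server logs end up, roughly (assuming a standard install; paths can differ on your setup):

```sh
# Linux (systemd install)
journalctl -u ollama --no-pager | grep "api/pull"

# macOS
grep "api/pull" ~/.ollama/logs/server.log
```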
This is actually expected. The API server cleans up all of the partially downloaded images every time it restarts. You should be able to turn this off by setting `OLLAMA_NOPRUNE=1`...
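For reference, a rough sketch of setting that variable, assuming a systemd install on Linux (adjust for how you actually run the server):

```sh
# systemd install: add the variable to the service's environment, then restart
sudo systemctl edit ollama.service
#   [Service]
#   Environment="OLLAMA_NOPRUNE=1"
sudo systemctl restart ollama

# or, if you run the server by hand:
OLLAMA_NOPRUNE=1 ollama serve
```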
The models are stored here: https://github.com/jmorganca/ollama/blob/main/docs/faq.md#where-are-models-stored To migrate them, you can actually just copy the entire models directory to a different place. The key here is to have the correct...
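As a rough example (the destination path below is just a placeholder; with a systemd install you'd put the variable in the service's environment rather than exporting it in a shell):

```sh
# Copy the existing models somewhere else
cp -r ~/.ollama/models /path/to/new/models   # placeholder destination

# Point the server at the new location and restart it
export OLLAMA_MODELS=/path/to/new/models
ollama serve
```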
I haven't been able to reproduce this in the current version. Are you still seeing it? What platform are you using?
Hey @beliboba, you can already do this right now. Go to `https://ollama.ai/signup` and create an account. You can then go to `https://ollama.ai/settings/keys` when you're signed in and upload your...
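The comment is cut off here, but assuming it's the Ollama public key being uploaded, it's generated on first run and you can just print it out:

```sh
# Print the public key ollama generated on first run (macOS/Linux)
cat ~/.ollama/id_ed25519.pub
```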