Patrick Devine
@fabceolin Can you give some details? What was the model name you were trying to push? Do you have logs?
It's something we're looking at. No firm dates right now though.
@chaojihuihui is this using `ollama list` or `ollama ps`?
Please see the comments in #6564. We're working on it, but it won't be released until we release the new backend engine.
I'll go ahead and close the issue.
Hey @smsohan, thanks for the PR. The intended way to do this, though, is with the `OLLAMA_HOST` env variable. If we added a separate `PORT` variable this would get super...
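For anyone finding this later, a minimal sketch of that approach (the address and port here are just illustrations, not defaults):

```shell
# Bind the server to a custom address/port by setting OLLAMA_HOST before starting it
OLLAMA_HOST=127.0.0.1:8080 ollama serve

# The CLI reads the same variable, so point it at the same address
OLLAMA_HOST=127.0.0.1:8080 ollama list
```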
Which model are you using, and when do you hit the error?
I'm going to go ahead and close the issue.
Hey @alexellis! For the endpoint solution you mentioned, you should be able to `curl localhost:11434` and it will only respond w/ a 200 if the server is up and running.
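As a rough example of wiring that into a health check (assuming the default port 11434; adjust if you've changed `OLLAMA_HOST`):

```shell
# -f makes curl exit non-zero unless the server answers with a 2xx status
curl -sf http://localhost:11434/ > /dev/null && echo "ollama is up"
```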