Bad environment variable in README
The instructions for using a self-hosted LLM in the README say that you need to set the `OPENAI_API_BASE` variable. This should be `OPENAI_BASE_URL` to work properly.
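In other words, the relevant step in the README should look something like this (the URL is a placeholder for your self-hosted endpoint):

```bash
# What the README currently says (does not work for me):
export OPENAI_API_BASE=<url-here>

# What it should say (works for me):
export OPENAI_BASE_URL=<url-here>
```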
Hey, which holmes version are you running and with which LLM model?
The latest holmes version uses LiteLLM under the hood, which uses `OPENAI_API_BASE` according to the docs.
I used the latest brew installation. `holmes version` gives me: `HEAD -> master-fd086e5`. I used the Llama3.1 model from Ollama.
Thanks, you're definitely on the latest version using LiteLLM.
What was the exact `--model` flag that you passed to holmes?
When I set the environment variable with `export OPENAI_API_BASE=<url-here>`
and then call `holmes ask --model=llama3.1:8b-instruct-q8_0 "what pods are unhealthy and why?"`,
I get the following error: NotFoundError: Error code: 404 - {'error': {'message': 'The model `llama3.1:8b-instruct-q8_0` does not exist or you do not have access to it.', 'type': 'invalid_request_error', 'param': None, 'code': 'model_not_found'}}
When I go with `--model=openai/llama3.1:8b-instruct-q8_0` I get: BadRequestError: Error code: 400 - {'error': {'message': 'invalid model ID', 'type': 'invalid_request_error', 'param': None, 'code': None}}
Got it, thanks. And to clarify, this works if you go with `OPENAI_BASE_URL`?
Yes, correct. If I use `OPENAI_BASE_URL` it works.
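So the working setup is essentially the invocation from my first post with the environment variable swapped (the URL is still a placeholder for my local Ollama endpoint):

```bash
# Placeholder for the local Ollama endpoint:
export OPENAI_BASE_URL=<url-here>
# Same ask command as in my first post:
holmes ask --model=llama3.1:8b-instruct-q8_0 "what pods are unhealthy and why?"
```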
@AIUser2324, do either of the updated instructions for Ollama here work for you? https://github.com/robusta-dev/holmesgpt/pull/133/files#diff-b335630551682c19a781afebcf4d07bf978fb1f8ac04c6bf87428ed5106870f5
On my side, Holmes is able to connect in both cases, but I'm not getting good results. Perhaps that is because I'm not using the instruct model?
In any event, are you able to get decent results with either:
- The steps you mentioned in your original post
- The new instructions (linked in the PR)