Add local LLM support
https://ollama.com/
+1
Did this get added?
Did it get added?
Not yet
Given that the current version is expected to use OPEN_AI_API_KEY and OPEN_AI_MODEL, you could also expose an OPEN_AI_BASE_URL environment variable.
That way, users could point at any OpenAI-compatible endpoint to run these models, which would open up compatibility with backends such as Ollama, Text Generation WebUI, LM Studio, etc.
So, if a user wants to use Ollama, they would just set the base URL to http://127.0.0.1:11434, pass any string as the API key, and pick the model they want.
Note that the base_url parameter is supported natively by the OpenAI Python library, so this probably won't require any extra plumbing down the road: https://github.com/openai/openai-python/blob/0470d1baa8ef1f64d0116f3f47683d5bf622cbef/src/openai/_base_client.py#L328
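For illustration, here is a minimal sketch of what that wiring could look like. The OPEN_AI_* variable names and the `llama3` default are just the proposal from this thread, not an existing configuration, and it assumes Ollama is serving its OpenAI-compatible API under the usual /v1 path:

```python
import os

from openai import OpenAI

client = OpenAI(
    # Ollama ignores the key, so any non-empty string works
    api_key=os.environ.get("OPEN_AI_API_KEY", "ollama"),
    # Ollama's OpenAI-compatible endpoint is typically served under /v1
    base_url=os.environ.get("OPEN_AI_BASE_URL", "http://127.0.0.1:11434/v1"),
)

response = client.chat.completions.create(
    model=os.environ.get("OPEN_AI_MODEL", "llama3"),  # placeholder model name
    messages=[{"role": "user", "content": "Hello from a local model!"}],
)
print(response.choices[0].message.content)
```

With something like this, switching between the hosted OpenAI API and a local backend would be purely a matter of environment variables, with no code changes.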
> you could also expose an OPEN_AI_BASE_URL environment variable.
@elsatch this is pretty much what @nilbot is working on in #41