
[REQUEST] Allow connecting to local LLM hosted via llama.cpp or LM Studio

Open vermi opened this issue 3 months ago • 2 comments

Is your feature request related to a problem? Please describe.
I cannot get the Ollama Docker container to build at all; it consistently claims to be out of memory. Being able to connect to an existing local llama.cpp, Ollama, or LM Studio server would be useful.

Describe the solution you'd like
A "Local LLM" connector.

vermi · Oct 15 '25 16:10

Oh, it'd be awesome to be able to use LM Studio instead of Ollama!

CedricEugeni · Oct 18 '25 09:10

The Azure provider does allow you to override the API URL; however, the list of models is static. That's probably not a problem for llama.cpp, since I think it ignores the model parameter, but it might be an issue for LM Studio. To point the Ollama provider at a different Ollama instance (not the one from the example docker file), you just need to change an environment variable.
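For anyone who wants to check that their local server speaks the OpenAI-compatible API before wiring it into a connector, here's a minimal sketch. It assumes LM Studio's default endpoint (`http://localhost:1234/v1`); llama.cpp's `llama-server` defaults to port 8080 instead, and the `"local-model"` name is a placeholder you'd replace with whatever model the server has loaded:

```python
# Minimal sketch: talk to a local OpenAI-compatible server (LM Studio or
# llama.cpp's llama-server). Assumes LM Studio's default endpoint,
# http://localhost:1234/v1; for llama-server, use http://localhost:8080/v1.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:1234/v1",  # point at the local server
    api_key="not-needed",                 # local servers ignore the key
)

# LM Studio matches the model field against its loaded models; llama-server
# typically serves whichever model it was started with regardless of it.
response = client.chat.completions.create(
    model="local-model",  # placeholder; replace with the loaded model's name
    messages=[{"role": "user", "content": "Say hello."}],
)
print(response.choices[0].message.content)
```

If that round-trips, then in principle any provider that lets you override both the base URL and the model name should work against these servers.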

Jakdaw · Nov 12 '25 10:11