Local model does not work on Linux
Describe the bug
After deploying version 0.1.11 on the server, it crashes when any key is pressed. Also, on a multi-GPU machine, how do I run it on a specified GPU?
Reproduce
interpreter --local
Expected behavior
It should not exit. My Linux server is a minimal install with no graphical interface. How can I use LM Studio?
Screenshots
No response
Open Interpreter version
0.1.11
Python version
3.10.0
Operating System name and version
Red Hat 7.9
Additional context
No response
Hey there, @Xu99999!
The latest version of Open Interpreter uses LM Studio for local inference, but LM Studio Linux support is currently in beta:
https://lmstudio.ai/
I believe you can request access to the beta, though.
You can request access to the beta, or use an alternate method of running a local model through the underlying LiteLLM integration.
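For example, that alternate route might look roughly like the sketch below, assuming Ollama is installed and that Open Interpreter's --model flag accepts LiteLLM's provider/model naming (the model name is only an example, not a confirmed setting):
# start the Ollama server if it is not already running as a background service
ollama serve &
# download a model to serve locally (llama2 is just an example)
ollama pull llama2
# hand the provider/model string to Open Interpreter, which passes it through to LiteLLM
interpreter --model ollama/llama2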
When you join LM Studio's Discord server, you get access to the Linux beta download.
Thanks, but my Linux server is minimally installed and has no graphical interface. Does LM Studio support command-line interaction?
I'm not sure, but I don't think so. You can use other programs to serve LLMs, such as llama.cpp or Ollama.
Then connect Interpreter to it on the correct port:
interpreter --api_base http://localhost:port/v1
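As a rough end-to-end sketch using llama.cpp's example server (the binary name, model file, port, and the --model/--api_key values below are assumptions to illustrate the idea; check the llama.cpp README for current options):
# serve a GGUF model over an OpenAI-compatible HTTP API with llama.cpp
./server -m ./models/mistral-7b-instruct.Q4_K_M.gguf --port 8080
# point Open Interpreter at that endpoint; the model name and dummy key are placeholders
interpreter --api_base http://localhost:8080/v1 --model openai/local --api_key dummy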
Closing this stale issue. Please create a new issue if the problem is not resolved or explained in the documentation. Thanks!