
Local model does not work on Linux

Open · Xu99999 opened this issue on Nov 7, 2023 · 5 comments

Describe the bug

After deploying version 0.1.11 on the server, it crashes when I press any key. Also, how can I run it on a multi-GPU machine with a specific GPU?

Reproduce

interpreter --local

Expected behavior

It should not exit. My Linux server is a minimal install with no graphical interface. How can I use LM Studio in this environment?

Screenshots

No response

Open Interpreter version

0.1.11

Python version

3.10.0

Operating System name and version

Red Hat 7.9

Additional context

No response

Xu99999 avatar Nov 07 '23 05:11 Xu99999

[Screenshot: 2023-11-07 13:02:50]

Xu99999 avatar Nov 07 '23 05:11 Xu99999

Hey there, @Xu99999!

The latest version of Open Interpreter uses LM Studio for local inference, but LM Studio's Linux support is currently in beta:

https://lmstudio.ai/

I believe you can request access to the beta, though.

Alternatively, you can use another method of running a local model and connect to it through the underlying LiteLLM integration.
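
For example, here is a rough sketch of that alternate route, assuming an Ollama server on its default port 11434 (the model name `codellama` is just an illustration; any LiteLLM-style provider/model string your server exposes should work):

```bash
# Hypothetical sketch: run a model behind Ollama, then connect
# Open Interpreter through its LiteLLM integration.
# Model name and port are assumptions; adjust for your setup.
ollama pull codellama                 # fetch the model (Ollama serves on 11434)
interpreter --model ollama/codellama --api_base http://localhost:11434
```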

ericrallen avatar Nov 08 '23 10:11 ericrallen

When you join LM Studio's Discord server, you get access to the Linux beta download.

Notnaton avatar Nov 09 '23 16:11 Notnaton

> When you join LM Studio's Discord server, you get access to the Linux beta download.

Thanks, but my Linux server is minimally installed and has no graphical interface. Does LM Studio support command-line interaction?

Xu99999 avatar Nov 13 '23 01:11 Xu99999

I'm not sure; I don't think so. You can use other programs to serve LLMs: llama.cpp, Ollama, etc.

Then connect interpreter to the correct port: interpreter --api_base http://localhost:port/v1
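
For example, a rough sketch using llama.cpp's OpenAI-compatible server (the binary name, flags, model path, and default port 8080 are assumptions that vary by llama.cpp version):

```bash
# Serve a local GGUF model with llama.cpp's server example
# (the binary may be ./server or llama-server depending on version).
./server -m ./models/your-model.gguf --port 8080

# In a second terminal, point Open Interpreter at that endpoint.
interpreter --api_base http://localhost:8080/v1
```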

Notnaton avatar Nov 13 '23 09:11 Notnaton

Closing this stale issue. Please create a new issue if the problem is not resolved or explained in the documentation. Thanks!

MikeBirdTech avatar Mar 18 '24 20:03 MikeBirdTech