[Bug]: OpenDevin doesn't run with ollama
### Is there an existing issue for the same bug?
- [X] I have checked the troubleshooting document at https://opendevin.github.io/OpenDevin/modules/usage/troubleshooting
- [X] I have checked the existing issues.
### Describe the bug
Cannot run OpenDevin with ollama.
### Current Version
ghcr.io/opendevin/opendevin:main
### Installation and Configuration
I followed the instructions outlined here: https://opendevin.github.io/OpenDevin/modules/usage/llms/localLLMs.
I verified that ollama is reachable from inside the docker container with the curl command.
I also tried container version 0.5, but got the same error.
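For reference, a minimal sketch of such a reachability check, assuming the OpenDevin container is named `opendevin` (the exact curl command used in the report is not shown):

```bash
# Sketch only: substitute the real container name/ID from `docker ps`.
# /api/tags asks ollama for the list of locally available models.
docker exec -it opendevin \
  curl -s http://host.docker.internal:11434/api/tags
```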
This is how I specify the model:
ollama/llama3:8b
as listed by ollama:
ollama ls
NAME            ID            SIZE    MODIFIED
llama2:latest   78e26419b446  3.8 GB  6 weeks ago
llama3:8b       a6990ed6be41  4.7 GB  3 weeks ago
mistral:latest  61e88e884507  4.1 GB  6 weeks ago
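As an additional sanity check (not part of the original report), the tag can be exercised directly against ollama before pointing OpenDevin at ollama/llama3:8b:

```bash
# If this returns a completion, llama3:8b is being served correctly and any
# remaining problem is on the OpenDevin side. Assumes ollama's default port.
curl -s http://localhost:11434/api/generate \
  -d '{"model": "llama3:8b", "prompt": "Say hello", "stream": false}'
```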
### Model and Agent
_No response_
### Reproduction Steps
1. Export the WORKSPACE_BASE env variable.
2. Run it with:
docker run \
-it \
--pull=always \
--add-host host.docker.internal:host-gateway \
-e SANDBOX_USER_ID=$(id -u) \
-e LLM_API_KEY="ollama" \
-e LLM_BASE_URL="http://host.docker.internal:11434" \
-e WORKSPACE_MOUNT_PATH=$WORKSPACE_BASE \
-v $WORKSPACE_BASE:/opt/workspace_base \
-v /var/run/docker.sock:/var/run/docker.sock \
-p 3000:3000 \
ghcr.io/opendevin/opendevin
### Logs, Errors, Screenshots, and Additional Context
This is the output from the container:
latest: Pulling from opendevin/opendevin
Digest: sha256:881f4034588726f037f1b87c7224c426d9f496f4d8843ee9f54ff8e97c046202
Status: Image is up to date for ghcr.io/opendevin/opendevin:latest
INFO: Started server process [1]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:3000 (Press CTRL+C to quit)
INFO: 172.17.0.1:41560 - "GET /index.html HTTP/1.1" 304 Not Modified
INFO: ('172.17.0.1', 41566) - "WebSocket /ws?token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzaWQiOiJiNGRjZDY0ZS0wNTM3LTQzOTUtYmVhNy1kNGU1NmMwOTI4NTQifQ.vmBldK0V3KZFXFQUA_v9qfttCO12HX5bWyz2ssl0xek" [accepted]
20:13:51 - opendevin:ERROR: auth.py:31 - Invalid token
20:13:51 - opendevin:ERROR: listen.py:38 - Failed to decode token
INFO: connection open
INFO: 172.17.0.1:41570 - "GET /api/refresh-files HTTP/1.1" 200 OK
INFO: 172.17.0.1:41570 - "GET /api/litellm-models HTTP/1.1" 200 OK
20:13:51 - opendevin:ERROR: auth.py:31 - Invalid token
INFO: 172.17.0.1:41574 - "GET /api/messages/total HTTP/1.1" 200 OK
INFO: 172.17.0.1:41570 - "GET /api/agents HTTP/1.1" 200 OK
This is what is visible in the GUI:

Could you run it in an incognito window?
What do you mean? Can you explain?
20:13:51 - opendevin:ERROR: auth.py:31 - Invalid token 20:13:51 - opendevin:ERROR: listen.py:38 - Failed to decode token
This is the error message.
https://support.google.com/chrome/answer/95464?hl=EN&co=GENIE.Platform%3DDesktop
But I am not running any browser. I just started the docker container.
INFO: 172.17.0.1:41560 - "GET /index.html HTTP/1.1" 304 Not Modified
The frontend was opened from this IP.
I am using the llama3 8b model inside the docker container, using the docker command given on their site. Remember to set the model to ollama/<model_name> and the API key to "ollama", either when you first load the web app or by clicking the settings, as shown in the screenshots below.
export WORKSPACE_BASE=$(pwd)/workspace
docker run \
-it \
--pull=always \
--add-host host.docker.internal:host-gateway \
-e SANDBOX_USER_ID=$(id -u) \
-e LLM_API_KEY="ollama" \
-e LLM_BASE_URL="http://host.docker.internal:11434" \
-e WORKSPACE_MOUNT_PATH=$WORKSPACE_BASE \
-v $WORKSPACE_BASE:/opt/workspace_base \
-v /var/run/docker.sock:/var/run/docker.sock \
-p 3000:3000 \
ghcr.io/opendevin/opendevin:0.5
Please refer to the screenshots below.
My problem is different, though. I had tried this with GPT-4o before, and it was able to actually perform the Linux commands and write actual code inside the workspace directory. But with llama3 running on ollama, it only suggests the code and the commands, without actually doing anything. Doesn't the CodeAct agent work with llama3? Is it something to do with function calling? Anyway, I'll report this as a bug and see how it goes.
"Let us" - you only included yourself in this process. It's all about the prompts for Local LLMs.
This doesn't work for me either. My local ollama models do not show in the dropdown. According to the ollama logs, opendevin does not even query the ollama server:
#!/bin/bash
OPENDEVIN_WORKSPACE=$(pwd)/workspace
docker run -it \
--pull=always \
-e SANDBOX_USER_ID=$(id -u) \
-e PERSIST_SANDBOX="true" \
-e SSH_PASSWORD="make something up here" \
-e WORKSPACE_MOUNT_PATH=$OPENDEVIN_WORKSPACE \
-e TRANSFORMERS_CACHE=/opt/hf_cache \
-e HF_HOME=/opt/hf_cache \
-e LLM_API_KEY="ollama" \
-e LLM_MODEL"ollama/codestral:22b-test" \
-e LLM_BASE_URL="http://host.docker.internal:11434" \
-v $OPENDEVIN_WORKSPACE:/opt/workspace_base \
-v hf_cache:/opt/hf_cache \
-v /var/run/docker.sock:/var/run/docker.sock \
-p 3000:3000 \
--add-host host.docker.internal:host-gateway \
--name opendevin-app-$(date +%Y%m%d%H%M%S) \
ghcr.io/opendevin/opendevin:main
@bendavis78
Are you sure `-e LLM_MODEL"ollama/codestral:22b-test" \` is a correct model name?
Then you can follow these instructions to check that the connection is working.
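A minimal sketch of such a connection check from the host, assuming ollama's default port (the referenced instructions may use a different command):

```bash
# A plain GET against the ollama root returns "Ollama is running" when the
# server is up; /api/tags additionally lists the models it can serve.
curl -s http://localhost:11434/
curl -s http://localhost:11434/api/tags
```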
I was able to get it working by setting the model name. Unfortunately the ollama models do not show in the drop-down.
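For what it's worth, the dropdown appears to be populated from the backend's `/api/litellm-models` endpoint (visible in the logs earlier in this thread), so one way to inspect what the backend actually returns, assuming the app is on port 3000:

```bash
# Assumption: OpenDevin is listening on localhost:3000 as in the commands above.
# The endpoint name is taken from the "GET /api/litellm-models" log line.
curl -s http://localhost:3000/api/litellm-models
```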
There is a separate issue #2432 going on. I think we can close this one. @SmartManoj