Ollama (or other OpenAI API compatible local LLMs) support

Open · micheleoletti opened this issue 1 year ago · 64 comments

Knowing that the Ollama server supports the OpenAI API (https://ollama.com/blog/openai-compatibility), the goal is to point Cursor to query the local Ollama server.

My setup is pretty simple:

  • Macbook Air M1
  • Ollama running locally + llama2

I added a llama2 model, set "ollama" as the API key (not used, but apparently required), and overrode the base URL to point to localhost. (screenshot)

But it does not work: (screenshot)

If I try to verify the API key, it seems like it cannot reach localhost: (screenshot)

But if I try the provided test snippet in the terminal, it works correctly: (screenshot)
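
For context, a typical terminal check of Ollama's OpenAI-compatible endpoint looks roughly like the sketch below (the exact snippet Cursor provides may differ; this assumes Ollama's default port 11434 and the openai Python package):

# Sketch of a terminal-side check against Ollama's OpenAI-compatible API.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible endpoint
    api_key="ollama",  # required by the client, but ignored by Ollama
)

response = client.chat.completions.create(
    model="llama2",
    messages=[{"role": "user", "content": "Say hello"}],
)
print(response.choices[0].message.content)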

So it seems like Cursor's internal service is not able to perform the fetch to localhost.

Is there something conceptually wrong with my plan and its implementation? Did anybody manage to make this configuration work?

micheleoletti · Apr 04 '24 11:04

I also ran into this just now. Is there a way to make it work?

tosh · Apr 07 '24 17:04

+1

andreaspoldi · Apr 11 '24 09:04

Same here :/

TazzerMAN · Apr 19 '24 14:04

Ollama Phi3 in the ⌘ K 🚀

Takes some effort but it's fast and works well!

(demo: cursor-proxy)

kcolemangt · Apr 29 '24 04:04

@kcolemangt looks amazing! what did you put in the settings to do that? I can't get it to work 🤔

micheleoletti · Apr 29 '24 09:04

Had to write a custom router and point Cursor’s OpenAI Base URL to that. Thinking of releasing it—want to try?

kcolemangt · Apr 29 '24 12:04
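
A minimal sketch of what such a router could look like is below. This is not kcolemangt's llm-router, just an illustration of the idea: accept OpenAI-style requests on /v1/* and forward them to a local Ollama server (Flask and port 8000 are arbitrary choices here):

# Minimal forwarding proxy sketch (illustrative, not llm-router itself).
from flask import Flask, Response, request
import requests

OLLAMA = "http://localhost:11434"  # local Ollama server
app = Flask(__name__)

@app.route("/v1/<path:path>", methods=["GET", "POST"])
def proxy(path):
    # Forward the request body to Ollama's OpenAI-compatible API and
    # stream the response back to the caller (Cursor).
    upstream = requests.request(
        method=request.method,
        url=f"{OLLAMA}/v1/{path}",
        headers={"Content-Type": "application/json"},
        data=request.get_data(),
        stream=True,
    )
    return Response(
        upstream.iter_content(chunk_size=None),
        status=upstream.status_code,
        content_type=upstream.headers.get("Content-Type"),
    )

if __name__ == "__main__":
    app.run(host="127.0.0.1", port=8000)

Cursor's "Override OpenAI Base URL" would then point at the proxy (e.g. http://localhost:8000/v1), subject to the port and localhost limitations discussed later in this thread.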

I see... yeah, I'd be interested in trying that out! On what hardware are you running Ollama, by the way?

micheleoletti · Apr 29 '24 18:04

Released llm-router. Let me know how it works for you, @micheleoletti.

kcolemangt · Apr 30 '24 03:04

Cursor does not like it when you specify a port in the "Override OpenAI Base URL" field.

If you serve Ollama on the default HTTP port (80), it starts working: OLLAMA_ORIGINS=* OLLAMA_HOST=127.0.0.1:80 sudo -E ollama serve (sudo -E may be needed to bind to port 80)

Then you can put http://localhost/v1 under "Override OpenAI Base URL" in Cursor.


UPD: For some reason, Cursor is trying to hit the /v1/models endpoint, which is not implemented in Ollama:

[GIN] 2024/05/30 - 11:45:40 | 404 |     200.375µs |       127.0.0.1 | GET      "/v1/models"

That causes this error message: (screenshot)


UPD 2: Even after creating a compatible /v1/models endpoint in Ollama, Cursor still refuses to work:

~ curl -k http://127.0.0.1/v1/models
{"data":[{"created":1715857935,"id":"llama3:latest","object":"model","owned_by":"organization-owner"}],"object":"list"}
[GIN] 2024/05/30 - 15:55:17 | 200 |    1.221584ms |       127.0.0.1 | GET      "/v1/models"

In Cursor's Dev Tools I get:

ConnectError: [not_found] Model llama3:latest not found

Tried both llama3 and llama3:latest.

Seems like Cursor has something hardcoded for the localhost/127.0.0.1 address.

yeralin · May 29 '24 23:05
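
For illustration, a /v1/models shim along the lines yeralin describes could be built on the same proxy idea: translate Ollama's /api/tags listing into an OpenAI-style model list. This is a hypothetical sketch, not the exact patch used above, and as noted it was not enough on its own to make Cursor accept the model:

# Hypothetical /v1/models shim: expose locally pulled Ollama models
# (from its /api/tags endpoint) in the OpenAI-style list format.
from flask import Flask, jsonify
import requests
import time

OLLAMA = "http://localhost:11434"
app = Flask(__name__)

@app.get("/v1/models")
def list_models():
    tags = requests.get(f"{OLLAMA}/api/tags", timeout=10).json()
    data = [
        {
            "id": m["name"],  # e.g. "llama3:latest"
            "object": "model",
            "created": int(time.time()),
            "owned_by": "ollama",
        }
        for m in tags.get("models", [])
    ]
    return jsonify({"object": "list", "data": data})

if __name__ == "__main__":
    app.run(host="127.0.0.1", port=8000)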

There is the repo https://github.com/ryoppippi/proxy-worker-for-cursor-with-ollama, which is an option (but not for offline use). The README also points out that there is direct communication with Cursor's server, which is likely why Cursor hasn't enabled an easy-to-use option for a true local LLM.

flight505 · Jun 27 '24 21:06

+1

I'm getting an error when using a custom URL, even though the command works in the terminal.

axsaucedo · Sep 05 '24 05:09

+1

prakashchokalingam · Sep 11 '24 09:09

Released llm-router. Let me know how it works for you, @micheleoletti.

Does this work on Windows? I'm wondering how to install it.

hgoona · Sep 11 '24 10:09

I've successfully configured Curxy for this mission: https://github.com/ryoppippi/curxy

henry2man · Sep 17 '24 13:09

I've successfully configured Curxy for this mission: https://github.com/ryoppippi/curxy

Hi @henry2man, can you please elaborate on how you got it to work on Windows? I'm trying to use it with Groq Cloud and also Ollama (locally).

hgoona · Sep 18 '24 03:09

Hi @henry2man, can you please elaborate on how you got it to work on Windows? I'm trying to use it with Groq Cloud and also Ollama (locally).

Hi! I actually don't use Windows, so I can't share direct experience with that. However, I did notice that to get it working with Curxy and Ollama on macOS, it's crucial to enable the custom OpenAI API key. Once that's done, you can override the OpenAI Base URL and add a valid API key. That said, it's just a matter of following the Curxy README instructions...

henry2man · Sep 18 '24 05:09
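
Whatever proxy is used, a quick pre-flight check before entering the settings in Cursor is to mimic what Cursor's verification appears to do (per the /v1/models requests logged earlier in this thread): call <base URL>/models with the key you plan to use. BASE_URL, API_KEY, and the port below are placeholders for your own setup:

# Hypothetical pre-flight check for an OpenAI-compatible endpoint.
import requests

BASE_URL = "http://localhost:8000/v1"  # your proxy's base URL (placeholder)
API_KEY = "ollama"                     # the key you will enter in Cursor

resp = requests.get(
    f"{BASE_URL}/models",
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=10,
)
print(resp.status_code)
print(resp.json())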