
Cannot Run ollama embeddings

Open avi2000 opened this issue 3 months ago • 5 comments

Bug Description

I have a test setup with Langflow Desktop 1.6, Milvus standalone running in Docker, and Ollama; nothing complicated, and each piece works individually. First I built a simple agent flow with Ollama as the LLM, which worked flawlessly. Then I tried to set up a simple RAG using Ollama embeddings, but the component doesn't list any models, even after a refresh. Ollama works fine as an LLM model, just not in embedding mode. I tried a number of combinations with no luck. I have no issue with OpenAI embeddings, only Ollama.

Reproduction

Steps to reproduce:

  1. Drag Ollama Embeddings onto the flow
  2. Refresh the model list
  3. Nothing appears in the list

Expected behavior

I expected the same behavior as when I drag Ollama in as an LLM model and refresh: the list should show the models I installed with Ollama.

Who can help?

No response

Operating System

Windows 11

Langflow Version

1.6

Python Version

None

Screenshot

No response

Flow File

No response

avi2000 avatar Oct 19 '25 16:10 avi2000

Hello @avi2000, try this one https://github.com/yharby/langflow-ollama-pixi/blob/main/custom-langflow/components/ollama_custom/ollama_embeddings.py

yharby avatar Oct 20 '25 09:10 yharby

Hello @yharby, I just tried it, but it didn't work. I get the same result: refreshing the list doesn't pull any of the models I have.

avi2000 avatar Oct 20 '25 19:10 avi2000

I don't know if this helps.

When using Ollama as an LLM, the server receives both GETs and POSTs:

[GIN] 2025/10/20 - 15:41:44 | 200 | 2.1222ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2025/10/20 - 15:42:54 | 200 | 5.4525ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2025/10/20 - 15:42:54 | 200 | 7.6839ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2025/10/20 - 15:42:55 | 200 | 1.862ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2025/10/20 - 15:42:55 | 200 | 56.0694ms | 127.0.0.1 | POST "/api/show"
[GIN] 2025/10/20 - 15:42:55 | 200 | 41.4728ms | 127.0.0.1 | POST "/api/show"

With embeddings, it only GETs but never POSTs anything:

[GIN] 2025/10/20 - 15:44:06 | 200 | 2.7053ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2025/10/20 - 15:44:07 | 200 | 2.843ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2025/10/20 - 15:44:07 | 200 | 3.9196ms | 127.0.0.1 | GET "/api/tags"
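The logs above suggest the embeddings component fetches the model list from `/api/tags` but never follows up with `/api/show`, which is how the LLM component learns what each model can do. A minimal sketch of that filtering step, assuming the `models` / `capabilities` fields that recent Ollama API responses use (verify against your Ollama version):

```python
# Sketch (not Langflow's actual code): why the embeddings dropdown can
# come back empty. The component lists models via GET /api/tags, then
# needs per-model POST /api/show responses to learn capabilities. If the
# POST step never runs, or no model advertises "embedding", the dropdown
# has nothing to show. Field names "models"/"name"/"capabilities" are
# assumptions based on recent Ollama API responses.

def embedding_models(tags_response: dict, show_responses: dict) -> list[str]:
    """Return model names whose /api/show payload advertises embeddings."""
    names = [m["name"] for m in tags_response.get("models", [])]
    return [
        n for n in names
        if "embedding" in show_responses.get(n, {}).get("capabilities", [])
    ]

# Example payloads mimicking a host with one chat model and no embedding model
tags = {"models": [{"name": "llama3.1:latest"}]}
shows = {"llama3.1:latest": {"capabilities": ["completion"]}}
print(embedding_models(tags, shows))  # -> [] : nothing to populate the list
```

This would also explain why a host that only has llama3.1 installed shows an empty embeddings list: pulling a dedicated embedding model (e.g. `ollama pull nomic-embed-text`) may be needed regardless of the component bug.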

avi2000 avatar Oct 20 '25 19:10 avi2000

@Cristhianzl hey, can you help me to fix this please?

avi2000 avatar Oct 24 '25 21:10 avi2000

Hello @avi2000, try this one https://github.com/yharby/langflow-ollama-pixi/blob/main/custom-langflow/components/ollama_custom/ollama_embeddings.py

It works, thanks!

chenyanchen avatar Nov 13 '25 10:11 chenyanchen

Hi @chenyanchen @yharby

I tried it and unfortunately it didn't work. Let me explain the steps I took:

  • Ran Langflow Desktop 1.6 for Windows (default installation)
  • Ran Ollama (ollama serve)
  • Dragged Ollama in as an LLM model to test; it worked
  • Dragged in Ollama Embeddings, edited the code, deleted it completely, pasted in the version from GitHub, checked it, and saved
  • Ran it, connected to the expected port, refreshed the list; nothing shows
  • I only have llama 3.1, which is working

Is there anything missing here? I checked the GitHub repo and it looks like it wants me to install pixi, which I didn't, and I'm not sure it's even needed, because I'm running the Desktop version of Langflow, not running it from Python.

I need to make a presentation, and this is essential part.

Can someone please help once again? I already reached out to Langflow, but nobody responded.

TIA

avi2000 avatar Nov 15 '25 21:11 avi2000

In the most recent versions (1.6.0 and higher), the embedding components are working correctly.

You have two options for using Ollama embedding models:

  • Embedding Model component with Ollama provider
  • Ollama Embedding component

For correct operation, ensure:

  • The Ollama server is running.

  • The number of dimensions in the vector database is compatible with the model used.
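The dimension-compatibility point above can be checked with a quick sketch. This is illustrative code, not Langflow's implementation; the 768-dimension figure is the commonly cited output size of nomic-embed-text and should be verified for whatever model you use:

```python
# Illustrative check: the vector store's collection dimension must match
# the embedding model's output size, or inserts and searches will fail.
# 768 is an assumed dimension (typical for nomic-embed-text); confirm
# your model's actual output size before creating the Milvus collection.

def check_dims(collection_dim: int, vector: list[float]) -> None:
    """Raise if an embedding vector doesn't fit the collection schema."""
    if len(vector) != collection_dim:
        raise ValueError(
            f"dimension mismatch: collection expects {collection_dim}, "
            f"embedding has {len(vector)} values"
        )

check_dims(768, [0.0] * 768)       # OK: sizes agree
# check_dims(1536, [0.0] * 768)    # would raise ValueError
```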

Future improvements:

  • The default URL will be added to both components (currently it depends on the user);

  • The dimension count parameter will be added to the Ollama Embedding component (currently it uses the default of the model used).

Empreiteiro avatar Dec 09 '25 13:12 Empreiteiro

Hi Lucas,

Thank you for replying. As I explained, Ollama works fine as an LLM model, but not as an embedding provider. I also tested LM Studio, where both the LLM model and embeddings work without any issues. I can't claim to be an experienced user, but something is blocking this. To confirm: yes, the Ollama server is running, and the LLM models work. Your second point is moot for now, since I can't pull the embedding models at all.

Regardless, thank you for your time to answer my questions.

Albert


avi2000 avatar Dec 09 '25 21:12 avi2000

@avi2000 Can you test it on a more up-to-date version? My tests were done on the main and latest versions.

Empreiteiro avatar Dec 09 '25 22:12 Empreiteiro

Actually, since you're using the Desktop version, the most up-to-date version isn't publicly available yet.

I'll test it on the build that should be publicly released next week and I'll post the results here.

Empreiteiro avatar Dec 09 '25 22:12 Empreiteiro

@avi2000 I tested it on the main branch, release-1.7.0, and on the latest Desktop build (still not publicly available).

The problem has been fixed, and we will soon have a release that includes this fix.

https://github.com/user-attachments/assets/b6461021-808d-46e9-ba61-16258a95a5ff

Empreiteiro avatar Dec 17 '25 13:12 Empreiteiro