Ollama Reranker support
### Validations
- [X] I believe this is a way to improve. I'll try to join the Continue Discord for questions
- [X] I'm not able to find an open issue that requests the same enhancement
### Problem
The docs state that rerankers are not supported locally. The feature is not yet available in Ollama but appears to be in active development. This issue tracks and links to the relevant Ollama issues.
### Solution
The related Ollama feature issue https://github.com/ollama/ollama/issues/3368 needs to be resolved first; after that, any needed changes to Continue and the docs can be implemented.
Thanks for creating this issue to track things @drcrallen ! Sounds like it's close to being merged in Ollama, excited to get this built.
We'll want to at least update these docs: https://docs.continue.dev/customize/model-providers/ollama#reranking-model
@sestinj FYI, I've just begun working on this ticket.
Just a quick update: I'm still waiting on official reranker support from Ollama, as well as models to test against, before I can finalize my code: https://github.com/continuedev/continue/compare/dev...malaki12003:continue:ollama-reranker?expand=1
Curious if there are any updates on this! Thanks
@magiusdarrigo doesn't appear to be any updates on the upstream issue: https://github.com/ollama/ollama/issues/3368
Check this out: https://ollama.com/qllama/bge-reranker-v2-m3
@iomatix does it work? I mean, what config.json should I use to tell Continue to use that model?
Hi, I was wondering if there's any follow-up on this. When I try to use "qllama/bge-reranker-v2-m3" provided by Ollama, the Continue VS Code extension won't allow it, showing "Warning: Unsupported reranking model provider found: ollama", even though I have the BGE reranker running and have successfully used it before with OpenWebUI + Ollama.
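For reference, the kind of config that triggers this warning looks roughly like the following. This is a sketch only: the field layout mirrors Continue's config.json `reranker` schema, and `"ollama"` as the provider name is exactly what the validator rejects (the `apiBase` value is an assumption, Ollama's default local port):

```json
{
  "reranker": {
    "name": "ollama",
    "params": {
      "model": "qllama/bge-reranker-v2-m3",
      "apiBase": "http://localhost:11434"
    }
  }
}
```

The warning is raised at config-validation time, so it appears regardless of whether the model is actually running in Ollama.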
Same issue for me. If anyone has managed to make a working config for a local embedding model plus a local reranker, please post it; I'd be very thankful.
I ended up using Text Embeddings Inference (Hugging Face, https://huggingface.co/docs/text-embeddings-inference/en/index) with a local BGE reranker on my Mac. That is supported by Continue for reranking (see: https://docs.continue.dev/customize/model-roles/reranking#text-embeddings-inference).
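For anyone wanting to reproduce this workaround, a minimal config sketch per the linked docs, assuming a TEI server is running locally on port 8080 (the port is illustrative):

```json
{
  "reranker": {
    "name": "huggingface-tei",
    "params": {
      "apiBase": "http://localhost:8080"
    }
  }
}
```

The TEI server itself can be started with something like `text-embeddings-router --model-id BAAI/bge-reranker-v2-m3 --port 8080` (the model ID here is illustrative; any reranker model TEI supports should work).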
You can pull reranker models from Hugging Face with Ollama: `ollama pull hf.co/...`
Shouldn't Continue be able to connect to those pulled models?
This issue hasn't been updated in 90 days and will be closed after an additional 10 days without activity. If it's still important, please leave a comment and share any new information that would help us address the issue.