Ollama Support
Hi, I was wondering if you'd be adding Ollama support, similar to the llama.cpp support you have?
Thanks!
It is not something I am currently considering, but it should be relatively straightforward. There are a number of examples here, including the llama.cpp one.
If you want, a PR would be much appreciated!
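For anyone who wants to try, here is a rough sketch of what an Ollama-backed representation model could look like, built on BERTopic's BaseRepresentation interface and the ollama Python client. The class name, prompt, and padding are illustrative assumptions, not an existing BERTopic API:
# Illustrative sketch only -- not part of BERTopic; assumes the `ollama`
# Python client and BERTopic's BaseRepresentation interface.
import ollama
from bertopic.representation._base import BaseRepresentation

class OllamaRepresentation(BaseRepresentation):
    def __init__(self, model="llama2"):
        self.model = model

    def extract_topics(self, topic_model, documents, c_tf_idf, topics):
        # `topics` maps each topic id to its c-TF-IDF keywords as (word, score) tuples
        updated_topics = {}
        for topic, keywords in topics.items():
            prompt = (
                "Give a short label for a topic described by these keywords: "
                + ", ".join(word for word, _ in keywords)
            )
            response = ollama.chat(
                model=self.model,
                messages=[{"role": "user", "content": prompt}],
            )
            label = response["message"]["content"].strip()
            # Pad to 10 (word, weight) pairs, as the other LLM-based representations do
            updated_topics[topic] = [(label, 1)] + [("", 0) for _ in range(9)]
        return updated_topics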
If I get to implementing it myself, I'll submit a PR!
Ollama is now compatible with the OpenAI chat completion API, which means you should be able to use the OpenAI Python client to access Ollama: https://ollama.com/blog/openai-compatibility
import openai
client = openai.OpenAI(
    base_url='http://localhost:11434/v1',  # wherever Ollama is running
    api_key='ollama',  # required, but unused
)
from bertopic.representation import OpenAI
from bertopic import BERTopic
# Create your representation model
representation_model = OpenAI(client, delay_in_seconds=5, model='llama2', chat=True)
# Use the representation model in BERTopic on top of the default pipeline
topic_model = BERTopic(representation_model=representation_model)
Since the openai integration just calls client.chat.completions.create under the hood (when chat=True), this should just work. Alternatively, you can use the existing LangChain integration to call through to Ollama indirectly:
from langchain_community.llms import Ollama
from langchain.chains.question_answering import load_qa_chain
from bertopic.representation import LangChain
from bertopic import BERTopic
llm = Ollama(model="llama2")
chain = load_qa_chain(llm, chain_type="stuff")
# Create your representation model
representation_model = LangChain(chain)
# Use the representation model in BERTopic on top of the default pipeline
topic_model = BERTopic(representation_model=representation_model)
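Either way, the resulting topic_model is used as usual; for example (assuming docs is your list of documents):
# Fit the model; topic labels are generated through Ollama
topics, probs = topic_model.fit_transform(docs)
topic_model.get_topic_info()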
Thanks for sharing this! I'll make sure to add this to the documentation.
If you point me to the right place in the documentation I can create a PR for this.
@elshimone Thanks, that would be great! I imagine the best place for this would be somewhere above/below the llama.cpp documentation here: https://maartengr.github.io/BERTopic/getting_started/representation/llm.html#llamacpp
Hey! Thanks for this! I realized there's LangChain support in BERTopic, which in turn has Ollama integration, so that's what I've been using as well. I should have updated this issue; apologies.