How to use a custom Ollama-created model?
I already have a custom model built with an Ollama Modelfile that pulls llama3.1 and adds a custom system prompt and template to it.
How do I use this model?
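For reference, the setup looks roughly like this (an illustrative sketch; the system prompt and template here are placeholders, not the actual ones):

# Modelfile -- placeholder contents, not the actual prompt/template
FROM llama3.1
SYSTEM "You are a helpful assistant."
TEMPLATE """{{ if .System }}{{ .System }} {{ end }}{{ .Prompt }}"""

ollama create Custom:0.1.0 -f Modelfile
ollama list    # the new tag shows up in this list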
If I provide the name of this model in the config as shown below, it tries to fetch the model from Ollama's registry instead of picking up the locally present version.
llm:
  provider: ollama
  config:
    model: 'Custom:0.1.0'
    temperature: 0.5
    top_p: 1
    stream: True
    base_url: 'http://localhost:11434'
embedder:
  provider: ollama
  config:
    model: mxbai-embed-large
    base_url: 'http://localhost:11434'
What is the right way to do this? I cannot find anything in the documentation. Perhaps I could write the documentation for it once I find the answer.
I have the same issue.
Hey @Japkeerat @mbayabo, I tried using the base llama3.1:8b model; it was downloaded the first time and fetched from the local copy after that.
Can you please share more detail on the issue? Alternatively, you can take a look at the code and raise a PR for it.
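One thing worth checking: the tag in the config must match a locally available model character for character, including the version tag; a mismatch would explain Ollama trying to fetch the model instead of using the local one. A minimal sketch in Python against Ollama's /api/tags endpoint, using the 'Custom:0.1.0' name from the config above:

import requests

# Ask the local Ollama instance which models it has (GET /api/tags)
resp = requests.get("http://localhost:11434/api/tags")
resp.raise_for_status()
local_models = [m["name"] for m in resp.json()["models"]]
print(local_models)

# The model name in the config must match one of these exactly
assert "Custom:0.1.0" in local_models, "tag not found locally; Ollama would try to pull it"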
Found the issue on my end, apologies.
Use this config template and make the required changes according to your models:
config = {
    "vector_store": {
        "provider": "qdrant",
        "config": {
            "collection_name": "test",
            "host": "localhost",
            "port": 6333,
            "embedding_model_dims": 768,  # For Nomic; change this according to your local model's dimensions
        },
    },
    "llm": {
        "provider": "ollama",
        "config": {
            "model": "llama3.1:latest",
            "temperature": 0,
            "max_tokens": 8000,
            "ollama_base_url": "http://localhost:11434",  # Ensure this is correct
        },
    },
    "embedder": {
        "provider": "ollama",
        "config": {
            "model": "nomic-embed-text:latest",
            # "model": "snowflake-arctic-embed:latest",
            "ollama_base_url": "http://localhost:11434",
        },
    },
}
from mem0 import Memory

m = Memory.from_config(config)
m.add("I'm visiting Paris", user_id="john")
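If the custom 'Custom:0.1.0' tag from the original question shows up in ollama list, it should drop into the same template. A sketch, assuming that tag exists locally and the embedding dimensions are already adjusted to match your embedder:

# Swap in the locally created custom model tag from the question above
config["llm"]["config"]["model"] = "Custom:0.1.0"

m = Memory.from_config(config)
m.add("I'm visiting Paris", user_id="john")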
I have already created a pull request to update the docs. https://github.com/mem0ai/mem0/pull/1690/commits
Closing this issue as the problem is fixed. Please feel free to reopen if there is any further problem regarding this.