
How to use a custom Ollama-created model?

Open Japkeerat opened this issue 1 year ago • 3 comments

I already have a custom model built using Ollama's Modelfile, which pulls Llama 3.1 and adds a custom system prompt and template to it.

How do I use this model?

If I provide the name of this model in the config as shown below, it tries to pull the model from the Ollama registry instead of using the locally present version.

llm:
  provider: ollama
  config:
    model: 'Custom:0.1.0'
    temperature: 0.5
    top_p: 1
    stream: True
    base_url: 'http://localhost:11434'
embedder:
  provider: ollama
  config:
    model: mxbai-embed-large
    base_url: 'http://localhost:11434'

What is the right way to perform this action? I cannot find anything in the documentation. Perhaps I could write the documentation for it once I find the answer.
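For context, a custom model like the one described above is typically built from a Modelfile and registered locally with `ollama create`. The sketch below is illustrative only (the base model, system prompt, and parameter are assumptions, not the actual model from this thread), and requires a running Ollama install:

```shell
# Illustrative Modelfile: a base model plus a custom system prompt
cat > Modelfile <<'EOF'
FROM llama3.1
SYSTEM "You are a concise assistant."
PARAMETER temperature 0.5
EOF

# Register it locally under the tag referenced in the config above
ollama create Custom:0.1.0 -f Modelfile

# Verify it appears in the local model list
ollama list
```

Once registered this way, the tag (here `Custom:0.1.0`) is what the `model` field in the config should reference.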

Japkeerat avatar Aug 08 '24 18:08 Japkeerat

I have the same issue.

mikkothegeeko avatar Aug 08 '24 19:08 mikkothegeeko

Hey @Japkeerat @mbayabo, I tried using the base llama3.1:8b model; it was downloaded the first time and then fetched from the local copy on subsequent runs.

Can you please share more details on the issue? Alternatively, you can take a look at the code and raise a PR for it.

Dev-Khant avatar Aug 09 '24 11:08 Dev-Khant

Found the issue on my end, apologies.

Japkeerat avatar Aug 10 '24 03:08 Japkeerat

Use this config template and make required changes according to your models:

config = {
    "vector_store": {
        "provider": "qdrant",
        "config": {
            "collection_name": "test",
            "host": "localhost",
            "port": 6333,
            "embedding_model_dims": 768,  # For Nomic; change this to match your local model's dimensions
        },
    },
    "llm": {
        "provider": "ollama",
        "config": {
            "model": "llama3.1:latest",
            "temperature": 0,
            "max_tokens": 8000,
            "ollama_base_url": "http://localhost:11434",  # Ensure this is correct
        },
    },
    "embedder": {
        "provider": "ollama",
        "config": {
            "model": "nomic-embed-text:latest",
            # "model": "snowflake-arctic-embed:latest",
            "ollama_base_url": "http://localhost:11434",
        },
    },
}

m = Memory.from_config(config)
m.add("I'm visiting Paris", user_id="john")
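The most common way this config goes wrong is a mismatch between `embedding_model_dims` and the embedder's actual output size. A minimal, self-contained sketch of keeping the two in sync (the dimension values are assumptions based on the models' published sizes; verify yours with `ollama show <model>`):

```python
# Assumed output dimensions for common Ollama embedding models.
# Verify against your own install with `ollama show <model>`.
EMBEDDING_DIMS = {
    "nomic-embed-text:latest": 768,
    "mxbai-embed-large:latest": 1024,
    "snowflake-arctic-embed:latest": 1024,
}

def build_config(llm_model: str, embed_model: str) -> dict:
    """Build a mem0-style config where the vector store dims match the embedder."""
    return {
        "vector_store": {
            "provider": "qdrant",
            "config": {
                "collection_name": "test",
                "host": "localhost",
                "port": 6333,
                # Derived from the embedder so the two can never drift apart.
                "embedding_model_dims": EMBEDDING_DIMS[embed_model],
            },
        },
        "llm": {
            "provider": "ollama",
            "config": {
                "model": llm_model,
                "ollama_base_url": "http://localhost:11434",
            },
        },
        "embedder": {
            "provider": "ollama",
            "config": {
                "model": embed_model,
                "ollama_base_url": "http://localhost:11434",
            },
        },
    }

cfg = build_config("Custom:0.1.0", "nomic-embed-text:latest")
print(cfg["vector_store"]["config"]["embedding_model_dims"])  # 768
```

Deriving the dimension from a single lookup table means swapping embedders only requires changing one argument, rather than remembering to edit two places in the dict.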


I have already created a pull request to update the docs. https://github.com/mem0ai/mem0/pull/1690/commits

SamuelDevdas avatar Aug 12 '24 22:08 SamuelDevdas

Closing this issue as the problem is fixed. Please feel free to reopen if there are any further problems regarding this.

Dev-Khant avatar Aug 14 '24 17:08 Dev-Khant