
Passing `memory=True` reaches out to OpenAI, even when running locally with Ollama

Open GregHilston opened this issue 1 year ago • 6 comments

Hey all, I thought I was having the same problem as described by this previously closed issue:

https://github.com/joaomdmoura/crewAI/issues/21

It turns out I was actually hitting the following error stack when attempting to run with `memory=True` in my Crew:

Traceback (most recent call last):
  File "/app/main.py", line 34, in <module>
    result = crew.kickoff()
             ^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/crewai/crew.py", line 252, in kickoff
    result = self._run_sequential_process()
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/crewai/crew.py", line 293, in _run_sequential_process
    output = task.execute(context=task_output)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/crewai/task.py", line 173, in execute
    result = self._execute(
             ^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/crewai/task.py", line 182, in _execute
    result = agent.execute_task(
             ^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/crewai/agent.py", line 207, in execute_task
    memory = contextual_memory.build_context_for_task(task, context)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/crewai/memory/contextual/contextual_memory.py", line 22, in build_context_for_task
    context.append(self._fetch_stm_context(query))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/crewai/memory/contextual/contextual_memory.py", line 31, in _fetch_stm_context
    stm_results = self.stm.search(query)
                  ^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/crewai/memory/short_term/short_term_memory.py", line 23, in search
    return self.storage.search(query=query, score_threshold=score_threshold)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/crewai/memory/storage/rag_storage.py", line 90, in search
    else self.app.search(query, limit)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/embedchain/embedchain.py", line 631, in search
    return [{"context": c[0], "metadata": c[1]} for c in self.db.query(**params)]
                                                         ^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/embedchain/vectordb/chroma.py", line 220, in query
    result = self.collection.query(
             ^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/chromadb/api/models/Collection.py", line 327, in query
    valid_query_embeddings = self._embed(input=valid_query_texts)
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/chromadb/api/models/Collection.py", line 633, in _embed
    return self._embedding_function(input=input)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/chromadb/api/types.py", line 193, in __call__
    result = call(self, input)
             ^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/chromadb/utils/embedding_functions.py", line 188, in __call__
    embeddings = self._client.create(
                 ^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/openai/resources/embeddings.py", line 113, in create
    return self._post(
           ^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/openai/_base_client.py", line 1200, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/openai/_base_client.py", line 889, in request
    return self._request(
           ^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/openai/_base_client.py", line 980, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: fake. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}

Here's a snippet of how I was looking to run my Crew:

from dotenv import load_dotenv

load_dotenv()

from agents import Agents
from crewai import Task, Crew

# Prepare our agents
agents = Agents()
researcher_agent = agents.researcher()
writer_agent = agents.writer()

# Define our tasks
task1 = Task(
  description='Conduct a comprehensive analysis.',
  expected_output='Full analysis report in bullet points of the marijuana industry in the United States.',
  agent=researcher_agent
)
task2 = Task(
  description="Using the insights provided, develop an engaging blog post that highlights the most significant concepts of the marijuana industry in the US. Your post should be informative yet accessible, catering to an internet audience. Make it sound cool, avoid complex words so it doesn't sound like AI.",
  expected_output='Full blog post of at least 4 paragraphs',
  agent=writer_agent
)

# Configure our crew
print(f'The crew is being configured with the {agents.model} model')
crew = Crew(
  agents=[researcher_agent, writer_agent],
  tasks=[task1, task2],
  memory=True, # Note: this is the line that, if not removed, causes the error above
)

# Run our crew
result = crew.kickoff()
print(result)

# Investigate how the crew did
print(crew.usage_metrics)

Figured I'd share what I found in case it saves anyone a few hours ;)
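In case it helps anyone triaging the same trace, here's a minimal sketch of the two workarounds I know of: leave memory off entirely, or keep it on and point it at a non-OpenAI embedder via the `embedder` argument. The exact config keys are illustrative, so check the crewAI docs for your version:

```python
# Workaround 1: leave memory off (the default), so no embedding calls are
# made and nothing reaches out to OpenAI.
crew_kwargs = {"memory": False}

# Workaround 2: keep memory on but swap the default OpenAI embedder for
# another provider via the `embedder` argument.
crew_kwargs = {
    "memory": True,
    "embedder": {
        "provider": "huggingface",  # any non-OpenAI provider crewAI supports
        "config": {"model": "mixedbread-ai/mxbai-embed-large-v1"},
    },
}

# crew = Crew(agents=[...], tasks=[...], **crew_kwargs)
```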

GregHilston avatar Apr 09 '24 01:04 GregHilston

Hi, you can refer to this comment https://github.com/joaomdmoura/crewAI/issues/105#issuecomment-2044883945 for an explanation of why this issue arises. Thanks

punitchauhan771 avatar Apr 10 '24 04:04 punitchauhan771

The referenced comment doesn't actually solve the issue; it just shows how memory works with Gemini as the model. The problem reported here, which is also mine, is how to make this work with local models like Ollama. Can this memory feature work with Ollama as well, even if it requires a custom embedding? If so, what would that look like?

brunoreisportela avatar Apr 18 '24 10:04 brunoreisportela

Hi @brunoreisportela, the comment only explains why this error arises. As of now you can't make memory work with Ollama embedding models: the library crewAI uses for embeddings (embedchain) doesn't support Ollama embeddings. You can, however, use Hugging Face embeddings instead.

embeddings : https://docs.crewai.com/core-concepts/Memory/#using-openai-embeddings-already-default

test_crew = Crew(
    agents=[reader, writer],
    tasks=[read_book, write_report],
    process=Process.sequential,
    cache=True,
    verbose=2,
    memory=True,
    embedder={
        "provider": "huggingface",
        "config": {
            "model": "mixedbread-ai/mxbai-embed-large-v1",  # https://huggingface.co/mixedbread-ai/mxbai-embed-large-v1
        }
    }
)

punitchauhan771 avatar Apr 18 '24 10:04 punitchauhan771


Amazing. Thanks for further explaining it.

brunoreisportela avatar Apr 18 '24 12:04 brunoreisportela

crewAI should support an open interface for embedding model integration, similar to what it does for LLMs, especially for local deployment. I haven't observed such support currently; if it exists, please let me know, I'd be immensely grateful!

chaofanat avatar Jul 01 '24 14:07 chaofanat


I believe embedchain does support Ollama now.
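If Ollama support has indeed landed in embedchain, the embedder block would presumably look something like this. The provider name, model, and base URL here are assumptions, not confirmed against a specific embedchain release ("nomic-embed-text" and the default Ollama port are just illustrative choices):

```python
# Hypothetical embedder config for a local Ollama embedding model.
ollama_embedder = {
    "provider": "ollama",
    "config": {
        "model": "nomic-embed-text",          # an embedding model pulled into Ollama
        "base_url": "http://localhost:11434",  # Ollama's default local endpoint
    },
}

# Passed to the crew the same way as the Hugging Face example above:
# crew = Crew(..., memory=True, embedder=ollama_embedder)
```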

moazeniz avatar Aug 03 '24 22:08 moazeniz

This issue is stale because it has been open for 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.

github-actions[bot] avatar Sep 03 '24 12:09 github-actions[bot]

This issue was closed because it has been stalled for 5 days with no activity.

github-actions[bot] avatar Sep 09 '24 12:09 github-actions[bot]