Can't use crew with "memory=true" with AzureOpenAI
I wanted to enable crew memory with Azure OpenAI by adding the embedder configuration described at https://docs.crewai.com/core-concepts/Memory/, as follows. I have also defined the .env file with my Azure OpenAI endpoint and key, and used them successfully for the researcher LLM in the same main.py.
Do you have any idea why the following code raises the "Unauthorized. Access token" error below?
tech_crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, write_task],
    process=Process.sequential,  # optional: sequential task execution is the default
    memory=True,
    embedder={
        "provider": "azure_openai",
        "config": {
            "model": "text-embedding-ada-002",
            "deployment_name": "text-embedding-ada-002"
        }
    },
    cache=True,
    max_rpm=100,
    share_crew=True
)
The error is:

File "/home/zeus/miniconda3/envs/cloudspace/lib/python3.10/site-packages/openai/_base_client.py", line 1020, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.AuthenticationError: Error code: 401 - {'statusCode': 401, 'message': 'Unauthorized. Access token is missing, invalid, audience is incorrect (https://cognitiveservices.azure.com), or have expired.'}
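One way to narrow down a 401 like this is to call the Azure embeddings endpoint directly with the openai SDK, outside of CrewAI, to confirm whether the key/endpoint pair works at all. A minimal sketch; the environment variable names are assumptions and should match your .env:

```python
import os

# Sketch: isolate the 401 by hitting the Azure embeddings endpoint directly,
# outside CrewAI. The env var names below are assumptions; match your .env.
def azure_client_kwargs():
    return {
        "api_key": os.environ.get("OPENAI_API_KEY"),
        "api_version": os.environ.get("OPENAI_API_VERSION"),
        "azure_endpoint": os.environ.get("AZURE_OPENAI_ENDPOINT"),
    }

def probe_embedding(deployment="text-embedding-ada-002"):
    # Requires the openai package and valid credentials; raises
    # openai.AuthenticationError (401) if the key/endpoint pair is wrong.
    from openai import AzureOpenAI
    client = AzureOpenAI(**azure_client_kwargs())
    resp = client.embeddings.create(model=deployment, input="ping")
    return len(resp.data[0].embedding)
```

If `probe_embedding()` also returns 401, the problem is the credentials or endpoint, not CrewAI's memory layer.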
I am having this same issue, I have not yet figured out a resolution.
I believe I have it working now - I included a parameter called api_key in the config from my Azure account.
"config":{ "model": 'text-embedding-ada-002', "deployment_name": "text-embedding-ada-002" "api_key": os.environ.get("AZURE_OPENAI_KEY") }
I have the same issue. It works when I disable the "Memory" feature. I am using the AzureChatOpenAI class to create LLM Model for the agents.
@fkucuk, @ziki99 here is my solution.
After some investigation, I realized that CrewAI uses the embedchain library for embedding and that by using embedchain with the correct environment variables set in my .env file, the issue was resolved. Specifically, I set the following environment variables:
.env (file)
OPENAI_API_TYPE="azure"
OPENAI_API_VERSION="xxx"
AZURE_OPENAI_ENDPOINT="xxx"
OPENAI_API_KEY="xxx"
Now both the Azure-based agents and the Azure-based memory are working well.
app.py (file)
import os
from dotenv import load_dotenv
from crewai import Agent, Task, Crew
from langchain_openai import AzureChatOpenAI

load_dotenv()

_llm = AzureChatOpenAI(
    api_version=os.environ.get("OPENAI_API_VERSION"),
    azure_endpoint=os.environ.get("AZURE_OPENAI_ENDPOINT"),
    api_key=os.environ.get("OPENAI_API_KEY"),
    azure_deployment="xxx"
)
crew = Crew(
    agents=[a1],
    tasks=[t1],
    verbose=2,
    memory=True,
    embedder={
        "provider": "azure_openai",
        "config": {
            "model": "text-embedding-ada-002",
            "deployment_name": "text-embedding-ada-002"
        }
    }
)
@fantinis you are right. It offers great flexibility to use a different LLM for each agent, but you have to use the same LLM if you are using the "Memory" feature. It makes sense.
But the error message is misleading :)
Thanks for the solution @fantinis - I'm starting out and also faced this issue.
In order to make Chroma work with Azure OpenAI, I configured:

embedding_function = OpenAIEmbeddingFunction(
    api_key='<your api_key>',
    model_name='<azure embedding model name>',
    api_type='azure',  # this has to be set to azure
    api_base='<your Azure API url>',
    api_version='<your API version>'
)
Try passing those parameters to your embedder. For regular local usage outside Azure, it is sufficient to pass just api_key and model_name; for Azure, api_type='azure', api_base, and api_version must also be set.
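Applied to CrewAI, that suggestion could look like the sketch below: a helper that builds the embedder dict with the extra Azure parameters read from the environment. The config keys mirror Chroma's OpenAIEmbeddingFunction parameters, but whether crewai/embedchain forwards every key depends on your installed versions, and the env var names are assumptions:

```python
import os

def azure_embedder_config(deployment="text-embedding-ada-002"):
    # Keys mirror Chroma's OpenAIEmbeddingFunction parameters; whether every
    # key is forwarded depends on your crewai/embedchain versions, so treat
    # this as a starting point, not a guarantee.
    return {
        "provider": "azure_openai",
        "config": {
            "model": deployment,
            "deployment_name": deployment,
            "api_key": os.environ.get("OPENAI_API_KEY"),
            "api_type": "azure",
            "api_base": os.environ.get("AZURE_OPENAI_ENDPOINT"),
            "api_version": os.environ.get("OPENAI_API_VERSION"),
        },
    }

# Usage sketch: Crew(..., memory=True, embedder=azure_embedder_config())
```

Keeping the config in one helper also makes it easy to point the embedder at a different endpoint than the chat model, by reading api_base from a separate environment variable.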
What if my api_base is different for the chat model and the embedding model, along with the token? I am able to read a .env file and create a LangChain Azure LLM, but how do I configure memory for the crew with an Azure embedding?
Hey folks, just got to look into this, thanks for all the investigation, I'm getting someone in the team to look into this one sooner than later <3