I am using Google gemini-2.5-flash-preview-05-20; I set the API keys and selected gemini-2.5-flash-preview-05-20 as the chat model, but I still get an API error. I also tried with Ollama and get the same error.

Open Speed7dev opened this issue 8 months ago • 8 comments

Error

Error code: 401 - {'error': {'message': 'Incorrect API key provided: None. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}
Traceback (most recent call last):
Traceback (most recent call last):
  File "/a0/agent.py", line 290, in monologue
    prompt = await self.prepare_prompt(loop_data=self.loop_data)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/a0/agent.py", line 374, in prepare_prompt
    await self.call_extensions("message_loop_prompts_after", loop_data=loop_data)
  File "/a0/agent.py", line 725, in call_extensions
    await cls(agent=self).execute(**kwargs)
  File "/a0/python/extensions/message_loop_prompts_after/_91_recall_wait.py", line 13, in execute
    await task
  File "/usr/lib/python3.11/asyncio/futures.py", line 287, in __await__
    yield self  # This tells Task to wait for completion.
    ^^^^^^^^^^
  File "/usr/lib/python3.11/asyncio/tasks.py", line 339, in __wakeup
    future.result()
  File "/usr/lib/python3.11/asyncio/futures.py", line 203, in result
    raise self._exception.with_traceback(self._exception_tb)
  File "/usr/lib/python3.11/asyncio/tasks.py", line 267, in __step
    result = coro.send(None)
             ^^^^^^^^^^^^^^^
  File "/a0/python/extensions/message_loop_prompts_after/_50_recall_memories.py", line 60, in search_memories
    query = await self.agent.call_utility_model(
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/a0/agent.py", line 579, in call_utility_model
    async for chunk in (prompt | model).astream({}):
  File "/opt/venv/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 3465, in astream
    async for chunk in self.atransform(input_aiter(), config, **kwargs):
  File "/opt/venv/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 3447, in atransform
    async for chunk in self._atransform_stream_with_config(
  File "/opt/venv/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2322, in _atransform_stream_with_config
    chunk = await coro_with_context(py_anext(iterator), context)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/asyncio/futures.py", line 287, in __await__
    yield self  # This tells Task to wait for completion.
    ^^^^^^^^^^
  File "/usr/lib/python3.11/asyncio/tasks.py", line 339, in __wakeup
    future.result()

>>>  15 stack lines skipped <<<

  File "/opt/venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 591, in astream
    async for chunk in self._astream(
  File "/opt/venv/lib/python3.11/site-packages/langchain_openai/chat_models/base.py", line 2025, in _astream
    async for chunk in super()._astream(*args, **kwargs):
  File "/opt/venv/lib/python3.11/site-packages/langchain_openai/chat_models/base.py", line 890, in _astream
    response = await self.async_client.create(**payload)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/venv/lib/python3.11/site-packages/openai/resources/chat/completions/completions.py", line 2028, in create
    return await self._post(
           ^^^^^^^^^^^^^^^^^
  File "/opt/venv/lib/python3.11/site-packages/openai/_base_client.py", line 1742, in post
    return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/venv/lib/python3.11/site-packages/openai/_base_client.py", line 1549, in request
    raise self._make_status_error_from_response(err.response) from None
openai.AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: None. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}
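
Note: the traceback shows the 401 is raised from call_utility_model (via _50_recall_memories.py), i.e. the utility model is still going to an OpenAI-style endpoint with no key, even though the chat model was switched to Gemini. A quick way to see which key variables the container actually has (the variable names below are only guesses; adjust them to whatever your agent-zero .env/settings use):

```python
# Quick sanity check of the environment inside the agent-zero container.
# NOTE: the variable names below are guesses - use whichever names your
# .env / settings actually define.
import os

for name in ("OPENAI_API_KEY", "API_KEY_OPENAI", "GOOGLE_API_KEY", "API_KEY_GOOGLE"):
    print(f"{name} = {os.environ.get(name)!r}")
```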

Speed7dev avatar Jun 01 '25 07:06 Speed7dev

I spent hours with the same issue last night and gave up after several hours. I got further than you, though. There are separate API keys for Gemini and for Google; make sure you create the correct one. Not sure if my region was part of the issue.
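
If it helps, here is a rough way to confirm the Gemini key itself works outside agent-zero (a sketch only; it assumes the google-generativeai package is installed, and the model name is just an example):

```python
# Minimal standalone check of a Google AI Studio / Gemini API key.
# pip install google-generativeai; key from https://aistudio.google.com/apikey
import google.generativeai as genai

genai.configure(api_key="YOUR_GEMINI_KEY")
model = genai.GenerativeModel("gemini-1.5-flash")
print(model.generate_content("Say hi").text)
```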

voxboxer avatar Jun 01 '25 19:06 voxboxer

I spent hours with the same issue last night and gave up after several hours. I got further than you, though. There are separate API keys for Gemini and for Google; make sure you create the correct one. Not sure if my region was part of the issue.

The issue is not only with Gemini; it happens with every LLM model and provider. I got fed up and tried with Ollama as well, but it's still the same issue.

Speed7dev avatar Jun 01 '25 19:06 Speed7dev

models/embedding-001 (for the embedding model)

gemini-1.5-flash or gemini-1.5-pro (either of these as the exact chat model name)
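
As a rough sketch of where that name format comes from (this is the raw SDK call, not agent-zero's internal code):

```python
# Embedding call via the google-generativeai SDK - illustration only,
# showing the "models/embedding-001" name in use.
import google.generativeai as genai

genai.configure(api_key="YOUR_GEMINI_KEY")
result = genai.embed_content(model="models/embedding-001", content="hello world")
print(len(result["embedding"]))  # length of the embedding vector
```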

voxboxer avatar Jun 01 '25 20:06 voxboxer

I had the same issue. I tried using OpenRouter's free APIs but they never seemed to work. I downloaded local Ollama models and it worked (not very well, lol). You have to use the EXACT name of the model (it can vary depending on where you get it). I bit the bullet, put $20 of credit on OpenAI, and run the gpt-4.1 model, and everything is working as it should.
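
A quick way to list the exact names your local Ollama actually exposes (assumes the default endpoint on localhost:11434; `ollama list` on the command line shows the same thing):

```python
# Print the exact model names Ollama has pulled locally - copy one of
# these strings verbatim into agent-zero's model name field.
import json, urllib.request

with urllib.request.urlopen("http://localhost:11434/api/tags") as resp:
    tags = json.load(resp)

for m in tags.get("models", []):
    print(m["name"])  # e.g. "llama3.1:8b"
```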

Qu1ck1eNO1 avatar Jun 02 '25 01:06 Qu1ck1eNO1

I had the same issue. I tried using OpenRouter's free APIs but they never seemed to work. I downloaded local Ollama models and it worked (not very well, lol). You have to use the EXACT name of the model (it can vary depending on where you get it). I bit the bullet, put $20 of credit on OpenAI, and run the gpt-4.1 model, and everything is working as it should.

Thank you very much, bro, this helps me a lot.

Speed7dev avatar Jun 02 '25 04:06 Speed7dev

models/embedding-001 (for the embedding model)

gemini-1.5-flash or gemini-1.5-pro (either of these as the exact chat model name)

I have a doubt: if we are using chat models, why do we also need embedding models? If you can reply, it will help me a lot 🙏

Speed7dev avatar Jun 02 '25 04:06 Speed7dev

I use gemini-2.5-flash-preview-05-20 with a rate limit of RPM = 10 and it works fine!
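
For anyone wondering what the RPM limit does, it just spaces requests out over time. A minimal sketch of the idea (illustration only; you set this in agent-zero's settings rather than writing it yourself):

```python
# Illustration of a 10 requests-per-minute client-side limit -
# roughly what an RPM setting enforces.
import time

RPM = 10
MIN_INTERVAL = 60.0 / RPM  # at most one request every 6 seconds
_last_call = 0.0

def throttled(send):
    """Call send() no more often than the RPM limit allows."""
    global _last_call
    wait = MIN_INTERVAL - (time.monotonic() - _last_call)
    if wait > 0:
        time.sleep(wait)
    _last_call = time.monotonic()
    return send()
```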

Mr-Jack-Tung avatar Jun 02 '25 06:06 Mr-Jack-Tung

I use gemini-2.5-flash-preview-05-20 with a rate limit of RPM = 10 and it works fine!

Yes, I think it depends on your task.

Is gemini-2.5-flash-preview-05-20 working correctly for you with Agent Zero?

Speed7dev avatar Jun 03 '25 15:06 Speed7dev