
[BUG/ERROR] Can only concatenate str (not "dict") to str

[Open] j3ramy opened this issue 1 year ago · 7 comments

Hi,

I got an interesting bug/error (?) when I try to execute the holmes ask command. I installed holmesgpt 0.7.2 via brew on macOS with Python 3.9.6. The installation was successful, and I can run the holmes version command.

I am using Ollama as the LLM backend (even though your documentation says it may be buggy).

Every time I execute holmes ask "Which pod is not running?" --model=ollama_chat/llama3.2:1b, it runs into the following error:

in completion:2804
in get_ollama_response:366
in token_counter:1638

Failed to execute script 'holmes' due to unhandled exception!
TypeError: can only concatenate str (not "dict") to str

During handling of the above exception, another exception occurred:

in ask:281
in prompt_call:80
in call:122
in completion:148
in wrapper:960
in wrapper:849
in completion:3065
...

Is this because I am using Ollama, or because of the Python version? Have you seen this error before?

Thanks in advance!

j3ramy avatar Jan 13 '25 09:01 j3ramy
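For reference, this TypeError is Python's standard error when a str is concatenated with a dict; a minimal reproduction, independent of Holmes and litellm (the values below are purely illustrative):

prompt = "You are a helpful assistant. "
# Illustrative values only; somewhere in the call chain a message dict was
# presumably handed to code that expected a string.
message = {"role": "user", "content": "Which pod is not running?"}
combined = prompt + message  # TypeError: can only concatenate str (not "dict") to str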

Hi, I suspect this is a bug with LiteLLM (one of our dependencies) and how they implement Ollama support - maybe this bug? https://github.com/BerriAI/litellm/issues/6958

Are you able to verify it doesn't happen with other models?

aantn avatar Jan 13 '25 11:01 aantn
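One way to narrow this down is to call litellm directly, bypassing Holmes entirely; a sketch, assuming litellm is importable from the same environment Holmes runs in:

import litellm

# Call the same Ollama model through litellm directly. If this raises the
# same TypeError, the bug is in litellm's Ollama integration rather than
# in Holmes itself.
resp = litellm.completion(
    model="ollama_chat/llama3.2:1b",
    messages=[{"role": "user", "content": "Say hello"}],
)
print(resp.choices[0].message.content)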

> Hi, I suspect this is a bug with LiteLLM (one of our dependencies) and how they implement Ollama support - maybe this bug? BerriAI/litellm#6958
>
> Are you able to verify it doesn't happen with other models?

Actually it happens with llama3.2:latest as well...

If I use gemma2:2b then this is the console output:

holmes ask "Why ist my cluster not running?" --model=ollama_chat/gemma2:2b  
User: Why ist my cluster not running?
Couldn't find model's name ollama_chat/gemma2:2b in litellm's model list, fallback to 128k tokens for max_input_tokens    llm.py:140
Couldn't find model's name ollama_chat/gemma2:2b in litellm's model list, fallback to 4096 tokens for max_output_tokens   llm.py:174
AI: {}

It's not finding the LLM even though it exists:

ollama list
NAME               ID              SIZE      MODIFIED   
llama3.2:1b        baf6a787fdff    1.3 GB    2 days ago    
gemma2:2b          8ccf136fdd52    1.6 GB    2 days ago    
llama3.2:latest    a80c4f17acd5    2.0 GB    2 days ago

j3ramy avatar Jan 13 '25 11:01 j3ramy
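Note that the "Couldn't find model's name ... in litellm's model list" warning comes from litellm's token-limit lookup, not from the Ollama server, so it is consistent with ollama list showing the model. A sketch to inspect the two separately (assumes litellm and requests are importable, Ollama on its default port, and that litellm.model_cost is the table consulted for the fallback):

import litellm
import requests

# litellm's model cost/context map: the max-token fallback warning fires
# when the model name is missing from this table.
print("ollama_chat/gemma2:2b" in litellm.model_cost)

# The Ollama server itself: list the models it actually serves locally.
tags = requests.get("http://localhost:11434/api/tags").json()
print([m["name"] for m in tags["models"]])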

Just to update: this is now fixed in LiteLLM, so it should be fixed in the next Holmes release once we update LiteLLM.

aantn avatar Feb 07 '25 16:02 aantn
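Once a release ships, a quick way to confirm which litellm version the installed Holmes actually pulls in (assuming Holmes runs from a Python environment you can inspect):

from importlib.metadata import version

# Print the installed litellm version to confirm it includes the upstream fix.
print(version("litellm"))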

Has this issue been fixed, or is there any workaround?

bientt210 avatar Jul 29 '25 16:07 bientt210

Hi, we just updated the litellm version. Does this still occur on master?

aantn avatar Aug 10 '25 10:08 aantn

> Hi, we just updated the litellm version. Does this still occur on master?

Yes, I'm still getting a warning like this.

(screenshot of the warning attached)

bientt210 avatar Aug 12 '25 02:08 bientt210

Thanks @bientt210. Is this fixed with https://github.com/robusta-dev/holmesgpt/pull/839?

aantn avatar Aug 13 '25 17:08 aantn