
[BUG] ERROR:root:Failed to get supported params: argument of type 'NoneType' is not iterable

Open MohamedAliRashad opened this issue 1 year ago • 1 comment

Description

The error ERROR:root:Failed to get supported params: argument of type 'NoneType' is not iterable appears when I try to use a vLLM-served model.

Steps to Reproduce

  1. Host a model using vllm serve
  2. Use this:

import os

os.environ["OPENAI_API_KEY"] = "NA"
model = LOAI(base_url="http://0.0.0.0:8000/v1", model_name="your_model_name")

Expected behavior

To work

Screenshots/Code snippets

Nothing

Operating System

Ubuntu 20.04

Python Version

3.10

crewAI Version

0.86.0

crewAI Tools Version

no idea

Virtual Environment

Venv

Evidence

Provider List: https://docs.litellm.ai/docs/providers

ERROR:root:Failed to get supported params: argument of type 'NoneType' is not iterable
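For context on what this message means: the TypeError typically comes from a membership test (the Python `in` operator) against a value that is None. A minimal sketch of the failure mode, with a hypothetical function name (this is not crewAI's or LiteLLM's actual code):

```python
def get_supported_params(provider_params):
    # If the upstream provider lookup failed, provider_params is None,
    # and the `in` membership test below raises the TypeError from the log.
    if "temperature" in provider_params:
        return ["temperature"]
    return []

try:
    get_supported_params(None)
except TypeError as exc:
    print(f"Failed to get supported params: {exc}")
    # Failed to get supported params: argument of type 'NoneType' is not iterable
```

This suggests the library failed to resolve the provider for the given model string before it ever checked the parameters.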

Possible Solution

No idea

Additional context

Nothing

MohamedAliRashad avatar Dec 18 '24 12:12 MohamedAliRashad

When using this YAML file, the error does not occur:

name: crew-ai
channels:
  - conda-forge
  - defaults
  - ryanvolz
dependencies:
  - python=3.11
  - pip
  - pip:
    - crewai==0.28.8
    - crewai-tools==0.1.6
    - langchain_community==0.0.29

However, after running pip3 install --upgrade crewai crewai-tools, the error does occur.

MadGeometer avatar Jan 01 '25 03:01 MadGeometer

Has this bug been resolved? I actually got this error while creating an agent.

ERROR:root:Failed to get supported params: argument of type 'NoneType' is not iterable Application failed to start: 1 validation error for Crew

prashant3286 avatar Jan 21 '25 11:01 prashant3286

Try this:

llm = LLM(
    model="meta-llama/llama-3.2-1b-instruct:free",
    temperature=0.7,
    base_url="https://openrouter.ai/api/v1",
    api_key="your api model key"
)

After this, pass llm=llm as a parameter to the Agent.

Tanishqbot avatar Jan 23 '25 08:01 Tanishqbot

Hi there, what if I am using local LLM such as:

llm_deepseek = Ollama(
    model="deepseek-r1:1.5b",
    base_url="http://localhost:11434",
    temperature=0.7,
)

... and got the same:

ERROR:root:Failed to get supported params: argument of type 'NoneType' is not iterable

vlpacheco avatar Feb 01 '25 19:02 vlpacheco

You don't need a LangChain constructor, as CrewAI uses LiteLLM internally. Use the LLM class like this:

llm = LLM(model="ollama/deepseek-r1:1.5b", base_url="http://localhost:11434/")

Tanishqbot avatar Feb 04 '25 12:02 Tanishqbot

I get the same error when using LM Studio for local inference:

@CrewBase
class Crew1Test():
    llm = LLM(model="lm_studio/meta-llama-3-8b-instruct",
              api_key="fsdf", base_url="http://localhost:1234/v1", temperature=0.7)

    @agent
    def researcher(self) -> Agent:
        return Agent(
            config=self.agents_config['researcher'],
            verbose=True,
            llm=self.llm
        )

but when I use Ollama for inference, it doesn't occur:

...
  llm=LLM(model="ollama/llama3.2:latest", base_url="http://localhost:11434")
...

Also, when changing the model provider prefix to openai/, the error does not occur (inference via LM Studio):

...
llm = LLM(api_key="fsdf", model="openai/meta-llama-3-8b-instruct",  base_url="http://localhost:1234/v1", temperature=0.7)
...

So it seems that the error is triggered only when using the lm_studio provider configuration. This probably needs to be fixed in LiteLLM?
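That observation is consistent with LiteLLM routing on the provider prefix of the model string: everything before the first slash selects the provider, and with it the request/response handling. A simplified illustration of that convention (not LiteLLM's actual implementation):

```python
def split_provider(model: str):
    # LiteLLM-style model strings look like "<provider>/<model-id>";
    # the text before the first slash selects the provider route.
    if "/" in model:
        provider, model_id = model.split("/", 1)
        return provider, model_id
    return None, model  # no prefix: provider must be inferred another way

print(split_provider("lm_studio/meta-llama-3-8b-instruct"))
# ('lm_studio', 'meta-llama-3-8b-instruct')
print(split_provider("openai/meta-llama-3-8b-instruct"))
# ('openai', 'meta-llama-3-8b-instruct')
```

So switching the prefix from lm_studio/ to openai/ keeps the same model but changes which provider code path LiteLLM runs, which would explain why only one of the two fails against the same LM Studio server.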

crewai info:

crew1_test % pip freeze | grep crewai
crewai==0.100.1
crewai-tools==0.33.0

LiteLLM info:

pip freeze | grep lite  
litellm==1.59.8

ireicht avatar Feb 06 '25 22:02 ireicht

Try adding ollama/ before any model you use.

example:

@CrewBase
class Crew1Test():
    llm = LLM(model="ollama/lm_studio/meta-llama-3-8b-instruct",
              api_key="fsdf", base_url="http://localhost:1234/v1", temperature=0.7)

    @agent
    def researcher(self) -> Agent:
        return Agent(
            config=self.agents_config['researcher'],
            verbose=True,
            llm=self.llm
        )

instead of:

@CrewBase
class Crew1Test():
    llm = LLM(model="lm_studio/meta-llama-3-8b-instruct",
              api_key="fsdf", base_url="http://localhost:1234/v1", temperature=0.7)

    @agent
    def researcher(self) -> Agent:
        return Agent(
            config=self.agents_config['researcher'],
            verbose=True,
            llm=self.llm
        )

AbdeslemSmahi avatar Feb 08 '25 04:02 AbdeslemSmahi

Thank you for your suggestion. Changing the argument to ... llm = LLM(model="ollama/lm_studio/meta-llama-3-8b-instruct", ... results in this output:

Running the Crew
LLM value is already an LLM object
LLM value is already an LLM object
# Agent: LLM for Beginners Senior Data Researcher
## Task: Conduct a thorough research about LLM for Beginners Make sure you find any interesting and relevant information given the current year is 2025.



LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True'.

ERROR:root:LiteLLM call failed: litellm.APIConnectionError: 'response'
Traceback (most recent call last):
  File "/Users/reicht/Developer/onTheGo/crewai_test/crew1_test/.venv/lib/python3.12/site-packages/litellm/main.py", line 2678, in completion
    response = base_llm_http_handler.completion(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/reicht/Developer/onTheGo/crewai_test/crew1_test/.venv/lib/python3.12/site-packages/litellm/llms/custom_httpx/llm_http_handler.py", line 334, in completion
    return provider_config.transform_response(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/reicht/Developer/onTheGo/crewai_test/crew1_test/.venv/lib/python3.12/site-packages/litellm/llms/ollama/completion/transformation.py", line 273, in transform_response
    model_response.choices[0].message.content = response_json["response"]  # type: ignore
                                                ~~~~~~~~~~~~~^^^^^^^^^^^^
KeyError: 'response'
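This KeyError looks like a payload-shape mismatch: judging from the traceback, the ollama/ route expects Ollama's native generate response (text under a top-level "response" key), while an OpenAI-compatible server such as LM Studio's nests the text under choices[0].message.content. A simplified sketch of the mismatch (illustrative payloads, not captured from a real server):

```python
# Ollama's native /api/generate puts the generated text under "response";
# OpenAI-compatible servers nest it under choices[0].message.content.
ollama_style = {"response": "hello"}
openai_style = {"choices": [{"message": {"content": "hello"}}]}

def ollama_transform(response_json):
    # mirrors the failing lookup in litellm's ollama transformation
    return response_json["response"]

print(ollama_transform(ollama_style))  # hello
try:
    ollama_transform(openai_style)
except KeyError as exc:
    print(exc)  # 'response'
```

That would explain why prefixing an LM Studio endpoint with ollama/ trades the original error for this one.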


So for now I'll use the

... llm = LLM(model="openai/meta-llama-3-8b-instruct", ...

approach. Any suggestions on where to initialize the llm variable? Inside the class like in the code above:

@CrewBase
class Crew1Test():
    llm = LLM(model="ollama/lm_studio/meta-llama-3-8b-instruct",
              api_key="fsdf", base_url="http://localhost:1234/v1", temperature=0.7)

    @agent
    def researcher(self) -> Agent:
        return Agent(
            config=self.agents_config['researcher'],
            verbose=True,
            llm=self.llm
        )
...

or

myllm = LLM(model="ollama/lm_studio/meta-llama-3-8b-instruct",
            api_key="fsdf", base_url="http://localhost:1234/v1", temperature=0.7)

@CrewBase
class Crew1Test():

    @agent
    def researcher(self) -> Agent:
        return Agent(
            config=self.agents_config['researcher'],
            verbose=True,
            llm=myllm
        )
...

Any preference?

ireicht avatar Feb 10 '25 18:02 ireicht

So for LM Studio, only

openai/model-name

seems to work without errors. I guess LiteLLM's lm_studio provider is somehow broken?

VeMeth avatar Feb 14 '25 00:02 VeMeth

This issue is stale because it has been open for 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.

github-actions[bot] avatar Mar 22 '25 12:03 github-actions[bot]

This issue was closed because it has been stalled for 5 days with no activity.

github-actions[bot] avatar Mar 27 '25 12:03 github-actions[bot]