[BUG] ERROR:root:Failed to get supported params: argument of type 'NoneType' is not iterable
Description
The error ERROR:root:Failed to get supported params: argument of type 'NoneType' is not iterable appears when I try to use a vLLM-served model.
Steps to Reproduce
- Host a model using vllm serve
- Use this
os.environ["OPENAI_API_KEY"] = "NA"
model = LOAI(base_url="http://0.0.0.0:8000/v1", model_name="your_model_name")
Expected behavior
The model should load and respond without errors.
Screenshots/Code snippets
Nothing
Operating System
Ubuntu 20.04
Python Version
3.10
crewAI Version
0.86.0
crewAI Tools Version
No idea
Virtual Environment
Venv
Evidence
Provider List: https://docs.litellm.ai/docs/providers
ERROR:root:Failed to get supported params: argument of type 'NoneType' is not iterable
Possible Solution
No idea
Additional context
Nothing
When using this YAML file, the error does not occur:
name: crew-ai
channels:
  - conda-forge
  - defaults
  - ryanvolz
dependencies:
  - python=3.11
  - pip
  - pip:
    - crewai==0.28.8
    - crewai-tools==0.1.6
    - langchain_community==0.0.29
However, after running
pip3 install --upgrade crewai crewai-tools
the error occurs again.
Has this bug been resolved? I actually hit it while creating an agent.
ERROR:root:Failed to get supported params: argument of type 'NoneType' is not iterable Application failed to start: 1 validation error for Crew
Try this:
llm = LLM(
    model="meta-llama/llama-3.2-1b-instruct:free",
    temperature=0.7,
    base_url="https://openrouter.ai/api/v1",
    api_key="your_api_key"
)
After this, pass llm=llm as a parameter in the Agent
Hi there, what if I am using local LLM such as:
llm_deepseek = Ollama(
    model="deepseek-r1:1.5b",
    base_url="http://localhost:11434",
    temperature=0.7,
)
... and got the same:
ERROR:root:Failed to get supported params: argument of type 'NoneType' is not iterable
You don't need the LangChain constructor, as CrewAI uses LiteLLM internally. Use the LLM class with a provider prefix, like this:
llm = LLM(model="ollama/deepseek-r1:1.5b", base_url="http://localhost:11434/")
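For context, the provider prefix in the model string is what LiteLLM uses to pick a backend. A minimal sketch of the routing idea (split_provider is a hypothetical illustration, not LiteLLM's actual code):

```python
def split_provider(model: str):
    """Split a LiteLLM-style model string into (provider, model_name).

    Hypothetical helper for illustration only. Without a prefix such as
    "ollama/" or "openai/", provider detection can come back empty,
    which is where errors like the one in this thread tend to start.
    """
    if "/" in model:
        provider, _, name = model.partition("/")
        return provider, name
    return None, model  # no prefix: provider is unknown

print(split_provider("ollama/deepseek-r1:1.5b"))  # ('ollama', 'deepseek-r1:1.5b')
print(split_provider("deepseek-r1:1.5b"))         # (None, 'deepseek-r1:1.5b')
```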
I get the same error when using LMStudio for local inference :
@CrewBase
class Crew1Test():
    llm = LLM(model="lm_studio/meta-llama-3-8b-instruct",
              api_key="fsdf", base_url="http://localhost:1234/v1", temperature=0.7)

    @agent
    def researcher(self) -> Agent:
        return Agent(
            config=self.agents_config['researcher'],
            verbose=True,
            llm=self.llm
        )
but when I use Ollama, it doesn't occur (inferencing via Ollama):
...
llm=LLM(model="ollama/llama3.2:latest", base_url="http://localhost:11434")
...
also, when changing the model provider to openai/llama3.2:latest, the error does not occur (inferencing via LMStudio):
...
llm = LLM(api_key="fsdf", model="openai/meta-llama-3-8b-instruct", base_url="http://localhost:1234/v1", temperature=0.7)
...
So it seems that the error is triggered only when using the lm_studio provider prefix. Probably something that needs to be fixed in LiteLLM?
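For what it's worth, the error message matches the Python TypeError raised by a membership test on None, so a plausible guess is that a supported-params lookup returns None for the lm_studio provider. A hedged recreation of that failure mode (hypothetical code, not LiteLLM's actual implementation):

```python
# Hypothetical provider -> supported-params table; a missing or broken
# entry yields None from .get(), just like an unrecognized provider might.
SUPPORTED = {"openai": ["temperature", "top_p"], "ollama": ["temperature"]}

def check_param(provider: str, param: str):
    supported = SUPPORTED.get(provider)  # None for e.g. "lm_studio"
    try:
        return param in supported  # `x in None` raises TypeError
    except TypeError as exc:
        return f"Failed to get supported params: {exc}"

print(check_param("openai", "temperature"))
# → True
print(check_param("lm_studio", "temperature"))
# → Failed to get supported params: argument of type 'NoneType' is not iterable
```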
crewai info:
crew1_test % pip freeze | grep crewai
crewai==0.100.1
crewai-tools==0.33.0
LiteLLM info:
pip freeze | grep lite
litellm==1.59.8
Try adding ollama/ before any model you use.
Example:
@CrewBase
class Crew1Test():
    llm = LLM(model="ollama/lm_studio/meta-llama-3-8b-instruct",
              api_key="fsdf", base_url="http://localhost:1234/v1", temperature=0.7)

    @agent
    def researcher(self) -> Agent:
        return Agent(
            config=self.agents_config['researcher'],
            verbose=True,
            llm=self.llm
        )
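The suggestion above can be expressed as a tiny helper that prepends a provider prefix only when one is missing (hypothetical convenience function, not part of CrewAI or LiteLLM):

```python
def with_provider(model: str, provider: str = "ollama") -> str:
    """Prepend a provider prefix if the model string lacks one.

    Hypothetical helper for illustration; model strings that already
    contain a "/" are left unchanged.
    """
    return model if "/" in model else f"{provider}/{model}"

print(with_provider("llama3.2:latest"))                  # ollama/llama3.2:latest
print(with_provider("openai/meta-llama-3-8b-instruct"))  # left unchanged
```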
Thank you for your suggestion. Changing the argument to
... llm = LLM(model="ollama/lm_studio/meta-llama-3-8b-instruct", ...
results in this output:
Running the Crew
LLM value is already an LLM object
LLM value is already an LLM object
# Agent: LLM for Beginners Senior Data Researcher
## Task: Conduct a thorough research about LLM for Beginners Make sure you find any interesting and relevant information given the current year is 2025.
LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True'.
ERROR:root:LiteLLM call failed: litellm.APIConnectionError: 'response'
Traceback (most recent call last):
File "/Users/reicht/Developer/onTheGo/crewai_test/crew1_test/.venv/lib/python3.12/site-packages/litellm/main.py", line 2678, in completion
response = base_llm_http_handler.completion(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/reicht/Developer/onTheGo/crewai_test/crew1_test/.venv/lib/python3.12/site-packages/litellm/llms/custom_httpx/llm_http_handler.py", line 334, in completion
return provider_config.transform_response(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/reicht/Developer/onTheGo/crewai_test/crew1_test/.venv/lib/python3.12/site-packages/litellm/llms/ollama/completion/transformation.py", line 273, in transform_response
model_response.choices[0].message.content = response_json["response"] # type: ignore
~~~~~~~~~~~~~^^^^^^^^^^^^
KeyError: 'response'
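A guess at why this KeyError appears: the ollama/ code path expects Ollama's native response shape ({"response": "..."}), while LM Studio serves an OpenAI-compatible shape with a "choices" list, so the "response" key is simply absent. A simplified illustration of the mismatch (not LiteLLM's actual transform code):

```python
# Two response shapes a client might receive (illustrative examples):
ollama_native = {"model": "llama3", "response": "Hello!", "done": True}
openai_compatible = {
    "choices": [{"message": {"role": "assistant", "content": "Hello!"}}]
}

def ollama_transform(response_json: dict) -> str:
    # Mirrors the failing line in the traceback above: an unconditional
    # lookup of the "response" key.
    return response_json["response"]

print(ollama_transform(ollama_native))  # works: Hello!
try:
    ollama_transform(openai_compatible)
except KeyError as exc:
    print(f"KeyError: {exc}")  # KeyError: 'response'
```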
So for now I'll use the
... llm = LLM(model="openai/meta-llama-3-8b-instruct", ...
approach. Any suggestions on where to initialize the llm variable? Inside the class like in the code above:
@CrewBase
class Crew1Test():
    llm = LLM(model="ollama/lm_studio/meta-llama-3-8b-instruct",
              api_key="fsdf", base_url="http://localhost:1234/v1", temperature=0.7)

    @agent
    def researcher(self) -> Agent:
        return Agent(
            config=self.agents_config['researcher'],
            verbose=True,
            llm=self.llm
        )
...
or
myllm = LLM(model="ollama/lm_studio/meta-llama-3-8b-instruct",
            api_key="fsdf", base_url="http://localhost:1234/v1", temperature=0.7)

@CrewBase
class Crew1Test():
    @agent
    def researcher(self) -> Agent:
        return Agent(
            config=self.agents_config['researcher'],
            verbose=True,
            llm=myllm
        )
...
Any preference?
So for LM Studio, only the
openai/model-name
prefix seems to work without errors. I guess LiteLLM's lm_studio provider is somehow broken?
This issue is stale because it has been open for 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.
This issue was closed because it has been stalled for 5 days with no activity.