Pydantic ValidationError: tool_calls.function.arguments expects dict, receives str from server
Description:
When using the ollama Python library (version 0.4.7) with an Ollama model that supports tool calling (e.g., llama3.2:3b), a pydantic.ValidationError occurs while parsing the chat response if the model returns a tool call.
Problem:
The Ollama server appears to return the arguments field within message.tool_calls[].function as a JSON string (e.g., '{"query": "SELECT ..."}' or '{}'). However, the Pydantic models used within the ollama library (specifically related to ChatResponse and its nested Message / ToolCall models) expect the arguments field to be a Python dictionary (dict). This type mismatch leads to a validation failure.
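The mismatch can be reproduced in isolation with simplified stand-ins for the nested models (the class names below are illustrative, not the library's actual classes):

```python
import pydantic

# Simplified stand-ins for the nested models described above
# (Function/ToolCall/Message here mirror the reported structure,
# they are not ollama's real classes).
class Function(pydantic.BaseModel):
    name: str
    arguments: dict  # expects a dict, but the server delivers a JSON string

class ToolCall(pydantic.BaseModel):
    function: Function

class Message(pydantic.BaseModel):
    role: str
    tool_calls: list[ToolCall] = []

# Raw payload shaped like the server response: arguments is a JSON *string*
raw = {
    "role": "assistant",
    "tool_calls": [
        {"function": {"name": "run_query",
                      "arguments": '{"query": "SELECT 1"}'}}
    ],
}

try:
    Message.model_validate(raw)
except pydantic.ValidationError as e:
    print(e.errors()[0]["type"])  # dict_type
```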
Steps to Reproduce:
1. Call ollama.chat() with a model that supports tool calls, providing tools via the tools parameter.
2. Send a prompt that triggers a tool call from the model.
3. The Ollama server returns a response that includes message.tool_calls.
4. The ollama library attempts to parse this response using its Pydantic models and fails.
Observed Error:
pydantic.ValidationError: 1 validation error for Message
tool_calls.0.function.arguments
  Input should be a valid dictionary [type=dict_type, input_value='{"query": "..."}', input_type=str]
    For further information visit https://errors.pydantic.dev/2.10/v/dict_type

(Note: the input_value can be '{}' or any string containing valid JSON, e.g. '{"key": "value"}'.)
Expected Behavior:
The library should successfully parse the response, for example by:
a) expecting the arguments field to be a string and parsing it into a dictionary internally before validation, or
b) handling string input gracefully within the Pydantic model definition (e.g., with a custom validator or type adapter).
Suggested Fix/Investigation:
The issue likely lies in the Pydantic model definition for ChatResponse / Message / ToolCall or the code responsible for parsing the raw HTTP response data into these models within the library. The parsing logic should account for the arguments field being delivered as a JSON string from the server.
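One possible sketch of option (b), using Pydantic v2's BeforeValidator to json.loads string input before dict validation (the model and validator names are illustrative, not the library's actual code):

```python
import json
from typing import Annotated, Any

import pydantic

def _coerce_arguments(v: Any) -> Any:
    # Accept either a dict or a JSON-encoded string from the server.
    if isinstance(v, str):
        return json.loads(v) if v.strip() else {}
    return v

Arguments = Annotated[dict, pydantic.BeforeValidator(_coerce_arguments)]

class Function(pydantic.BaseModel):
    name: str
    arguments: Arguments

# Both forms now validate:
print(Function(name="f", arguments={"x": 1}).arguments)                 # {'x': 1}
print(Function(name="f", arguments='{"query": "SELECT 1"}').arguments)  # {'query': 'SELECT 1'}
print(Function(name="f", arguments="{}").arguments)                     # {}
```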
Hey @ehartford, will check this out - could you confirm the Ollama version you're running is the latest, v0.6.3?
I hit this error message in v0.6.5 when feeding the arguments back into the conversation history as a string instead of a dict.
I.e., on the first turn the model responds with a function call, and when I send the result back with the history, I had been sending the original function call arguments as json.dumps(args) instead of passing args as a dict.
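For reference, the two shapes of the echoed assistant turn look like this (a sketch; the tool name run_query is made up, only the arguments field differs):

```python
import json

args = {"query": "SELECT 1"}

# What triggered the error for me: arguments serialized to a JSON string
history_str = [{"role": "assistant",
                "tool_calls": [{"function": {"name": "run_query",
                                             "arguments": json.dumps(args)}}]}]

# What worked: arguments kept as a plain dict
history_dict = [{"role": "assistant",
                 "tool_calls": [{"function": {"name": "run_query",
                                              "arguments": args}}]}]
```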
@krmrn42 that should be expected - it should be a string in the history.
@ehartford I'm seeing parsed responses when running the tools example with llama3.2:3b - could you share a snippet to repro?