Response terminates prematurely when using Gemini 3 via LiteLLM
Description
When using gemini-3-flash-preview through a LiteLLM proxy, OpenCode stops processing the response as soon as the model triggers a tool call. The model returns a reasoning block and a tool call, but OpenCode does not execute the requested tool (e.g., read), and the interaction hangs or terminates without output.
The model response from LiteLLM looks like this (shortened for clarity):
{
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": null,
        "tool_calls": [
          {
            "id": "call_023b94f976e14e3a8711dd9c9864",
            "type": "function",
            "function": {
              "name": "read",
              "arguments": "{\"filePath\": \"/path/to/file/test.txt\"}"
            },
            "provider_specific_fields": {
              "thought_signature": "..."
            }
          }
        ],
        "reasoning_content": "**Synthesizing Knowledge Bases**\n\nI've been analyzing..."
      },
      "finish_reason": "stop"
    }
  ]
}
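Note that finish_reason is "stop" here even though tool_calls is populated; in the OpenAI chat completions format, a tool-calling turn normally finishes with "tool_calls". A client that keys purely off finish_reason would therefore treat this response as a completed turn, which may be why the tool never runs. A hypothetical sketch of that gating logic (illustrative types and names, not OpenCode's actual code):

interface ToolCall {
  id: string;
  type: "function";
  function: { name: string; arguments: string };
}

interface Choice {
  message: { content: string | null; tool_calls?: ToolCall[] };
  finish_reason: "stop" | "tool_calls" | "length" | string;
}

function nextAction(choice: Choice): "run_tools" | "end_turn" {
  // A client that trusts finish_reason alone ends the turn here,
  // because the payload above reports "stop" rather than "tool_calls".
  if (choice.finish_reason === "tool_calls") return "run_tools";

  // A more defensive client would also inspect the message body,
  // where tool_calls is in fact populated.
  if (choice.message.tool_calls?.length) return "run_tools";

  return "end_turn";
}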
The issue does NOT occur when OpenCode connects to Gemini 3 directly (without LiteLLM).
OpenCode version
1.0.203
LiteLLM Version
1.80.11
Steps to reproduce
- Set up LiteLLM with a Gemini 3 model. For example, here is my LiteLLM config (a sketch for querying the proxy directly is shown after these steps):
model_list:
  - model_name: gemini/gemini-3-flash-preview
    litellm_params:
      model: gemini/gemini-3-flash-preview
      api_key: xxx
      drop_params: true
- Configure OpenCode to use the LiteLLM endpoint:
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "litellm": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "litellm",
      "options": {
        "baseURL": "https://localhost:4000/v1",
        "apiKey": "sk-xxx"
      },
      "models": {
        "gemini/gemini-3-flash-preview": {
          "name": "gemini/gemini-3-flash-preview",
          "options": {
            "reasoningEffort": "high"
          }
        }
      }
    }
  }
}
- Ask a question that requires reading a file or using a tool.
- Observe that the process stops and no file is read, despite the model requesting it in the JSON.
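To confirm the proxy output independent of OpenCode, the LiteLLM endpoint can be queried directly. A minimal sketch (TypeScript, Node 18+ with global fetch and top-level await in an ES module), reusing the baseURL and key from the config above; the read tool definition mirrors the tool name from the response payload, and the file path is the same placeholder used there:

// Direct request against the LiteLLM proxy configured above.
// baseURL and API key are taken from the OpenCode config; the `read`
// tool definition below is an illustrative stand-in for OpenCode's tool.
const res = await fetch("https://localhost:4000/v1/chat/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: "Bearer sk-xxx",
  },
  body: JSON.stringify({
    model: "gemini/gemini-3-flash-preview",
    messages: [
      { role: "user", content: "Read /path/to/file/test.txt and summarize it." },
    ],
    tools: [
      {
        type: "function",
        function: {
          name: "read",
          description: "Read a file from disk",
          parameters: {
            type: "object",
            properties: { filePath: { type: "string" } },
            required: ["filePath"],
          },
        },
      },
    ],
  }),
});

const body = await res.json();
// If the bug reproduces, tool_calls is populated while finish_reason
// is "stop", matching the payload shown in the description.
console.log(JSON.stringify(body.choices[0], null, 2));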
Screenshot and/or share link
No response
Operating System
Windows 11 with WSL 2
Terminal
No response