Gemini models in agent mode giving error 400 INVALID_ARGUMENT
Before submitting your bug report
- [x] I believe this is a bug. I'll try to join the Continue Discord for questions
- [x] I'm not able to find an open issue that reports the same bug
- [x] I've seen the troubleshooting guide on the Continue Docs
Relevant environment info
- OS: Windows 11
- Continue version: 1.1.45 - 1.1.47 (.vsix from GitHub main)
- IDE version: 1.100.2
- Model: Gemini-flash-2.0, Gemini-flash-2.5-05-20, Gemini-Pro-2.5-preview. The issue persists across the AI Studio/Vertex provider and the OpenRouter provider
- config:
```yaml
# A name and version for your configuration
name: shamanic-config
version: 0.0.1
schema: v1

openrouter_defaults: &openrouter_defaults
  provider: openrouter
  apiKey: ${{ secrets.OPENROUTER_API_KEY }}

rules:
  - You are an expert software developer. You give helpful and concise responses.

models:
  - name: OpenRouter LLaMA 70 8B
    provider: openrouter
    model: meta-llama/llama-3-70b-instruct
    apiKey: ${{ secrets.OPENROUTER_API_KEY }}
    roles: [chat, edit, apply] # 'models' from JSON get the chat/edit/apply roles by default
  - name: Claude 3.5 Sonnet
    provider: openrouter
    model: anthropic/claude-3.5-sonnet-latest
    apiKey: ${{ secrets.OPENROUTER_API_KEY }}
    roles: [chat, edit, apply]
  - name: Anthropic Claude 3.5 Sonnet Beta
    provider: openrouter
    model: anthropic/claude-3.5-sonnet:beta
    apiKey: ${{ secrets.OPENROUTER_API_KEY }}
    roles: [chat, edit, apply]
  - name: DeepSeek Chat
    provider: openrouter
    model: deepseek/deepseek-chat
    apiKey: ${{ secrets.OPENROUTER_API_KEY }}
    roles: [chat, edit, apply]
  - name: DeepSeek R1 0528
    provider: openrouter
    model: deepseek/deepseek-r1-0528
    apiKey: ${{ secrets.OPENROUTER_API_KEY }}
    roles: [chat, edit, apply]
  - name: Qwen 2.5 Coder 32B Instruct
    provider: openrouter
    model: qwen/qwen-2.5-coder-32b-instruct
    apiKey: ${{ secrets.OPENROUTER_API_KEY }}
    roles: [chat, edit, apply]
  - name: MistralAI Codestral 2501
    provider: openrouter
    model: mistralai/codestral-2501
    apiKey: ${{ secrets.OPENROUTER_API_KEY }}
    roles: [chat, edit, apply]
  - name: Gemini 2.5 Pro Preview 05-06
    provider: openrouter
    model: google/gemini-2.5-pro-preview-05-06
    apiKey: ${{ secrets.OPENROUTER_API_KEY }}
    roles: [chat, edit, apply]
  - name: Gemini 2.5 Pro Preview
    provider: openrouter
    model: google/gemini-2.5-pro-preview
    apiKey: ${{ secrets.OPENROUTER_API_KEY }}
    roles: [chat, edit, apply]
  - name: Gemini 2.0 Flash
    provider: gemini
    model: gemini-2.0-flash
    apiKey: ${{ secrets.GEMINI_API_KEY }}
    contextLength: 1000000
    roles: [chat, edit, apply]
  - name: Gemini 2.5 Flash 05-20
    provider: gemini
    model: gemini-2.5-flash-05-20
    apiKey: ${{ secrets.GEMINI_API_KEY }}
    contextLength: 1000000
    roles: [chat, edit, apply]
  # Your 'tabAutocompleteModel' is now here with the 'autocomplete' role
  - name: Groq — Qwen-qwq-32b (Autocomplete-Testing)
    apiBase: https://api.groq.com/openai/v1/
    apiVersion: "1"
    provider: groq
    model: qwen/qwen3-32b
    apiKey: ${{ secrets.GROQ_API_KEY }}
  # Your 'embeddingsProvider' is now here with the 'embed' role
  - name: Ollama Embeddings
    provider: ollama
    model: mxbai-embed-large:latest
    apiBase: http://localhost:11434/v1
    roles: [embed]

prompts:
  - name: test
    description: Write unit tests for highlighted code
    prompt: |
      {{{ input }}}
      Write a comprehensive set of unit tests for the selected code. It should setup, run tests that check for correctness including important edge cases, and teardown. Ensure that the tests are complete and sophisticated. Give the tests just as chat output, don't edit any file.

context:
  - provider: code
  - provider: docs
  - provider: diff
  - provider: terminal
  - provider: problems
  - provider: folder
  - provider: codebase

mcpServers:
  - name: context7
    command: context7-mcp
    args: []
  - name: n8n-test
    command: mcp-remote
    args:
      - http://localhost:5678/mcp-test/97624e46-aeee-4886-8682-209667591bc2/sse
  - name: MCP_DOCKER
    command: docker
    args:
      - run
      - -l
      - mcp.client=continue
      - --rm
      - -i
      - alpine/socat
      - STDIO
      - TCP:host.docker.internal:8811
```
Description
The models work fine in chat mode, but attempting to call a Gemini model from agent mode gives this error when called via AI Studio/Vertex:

```
"[{\n \"error\": {\n \"code\": 400,\n \"message\": \"* GenerateContentRequest.tools[0].function_declarations[10].parameters.required[2]: property is not defined\\n\",\n \"status\": \"INVALID_ARGUMENT\"\n }\n}\n]"
```
and this error when called via OpenRouter:

```
400 Provider returned error
```
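For context on the first error: Gemini's function-calling API validates each tool's JSON Schema strictly and rejects the entire request when a `required` entry has no matching declaration under `properties` (here, `required[2]` of the eleventh function declaration). A minimal sketch of the shape it refuses, with hypothetical tool and field names, not the actual tool Continue sends:

```typescript
// Hypothetical function declaration, for illustration only.
// Gemini checks that every name listed in `required` is declared under
// `properties`; a mismatch fails the request with 400 INVALID_ARGUMENT.
const suspectDeclaration = {
  name: "create_rule_block", // hypothetical tool name
  description: "Create a persistent rule for the assistant",
  parameters: {
    type: "object",
    properties: {
      name: { type: "string" },
      rule: { type: "string" },
      // a third property is never declared here...
    },
    required: ["name", "rule", "alwaysApply"], // ...so required[2] is undefined
  },
};
```

OpenRouter appears to surface the same upstream rejection as the less descriptive `400 Provider returned error`.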
The issue persists across the latest build and previously working builds. The error first appeared after the global outage on 12th June, so I believe it may be related to some change on Google's end.
I have not had the chance to test this on another system, so there is a chance this is local to my machine.
To reproduce
https://github.com/user-attachments/assets/7a2ebdea-5d38-4d60-8935-8e5181bc9210
- Select a Google Gemini model from one of the listed providers
- Select agent mode and send a message
- Observe the error message
Log output
```
[Extension Host] Error: 400 Provider returned error
    at Function.generate (c:\Users\Lucas\.vscode\extensions\continue.continue-1.1.47\out\extension.js:109455:18)
    at OpenAI.makeStatusError (c:\Users\Lucas\.vscode\extensions\continue.continue-1.1.47\out\extension.js:110366:25)
    at OpenAI.makeRequest (c:\Users\Lucas\.vscode\extensions\continue.continue-1.1.47\out\extension.js:110410:29)
    at processTicksAndRejections (node:internal/process/task_queues:95:5)
    at OpenAIApi.chatCompletionStream (c:\Users\Lucas\.vscode\extensions\continue.continue-1.1.47\out\extension.js:114472:26)
    at OpenRouter.streamChat (c:\Users\Lucas\.vscode\extensions\continue.continue-1.1.47\out\extension.js:140909:34)
    at llmStreamChat (c:\Users\Lucas\.vscode\extensions\continue.continue-1.1.47\out\extension.js:650110:17)
    at ed.handleMessage [as value] (c:\Users\Lucas\.vscode\extensions\continue.continue-1.1.47\out\extension.js:668163:29)
log.ts:460 ERR [Extension Host] Error handling webview message: {
  "msg": {
    "messageId": "8d07eb5c-929f-436d-881f-655e853b0faa",
    "messageType": "llm/streamChat",
    "data": {
      "completionOptions": {
        "tools": [
```
Thanks for sharing this @ShamanicArts. Does the same thing happen if it's the first message in the session (in agent mode)?
Can confirm that the same thing happens when it's the first message in a session (in agent mode).
Also of note: this does not impact edit mode, which still works as expected with all Gemini models.
As mentioned in Discord, but noting here for posterity:
Interesting behaviour. I reverted back one version at a time and it wasn't working, right up until 1.1.41, where it started working again. After it started working in 1.1.41, it kept working right up until 1.1.45, and stops working in 1.1.47. The issues are still happening in my VSIX from 1.1.45 (which was created when I was building my force autocomplete feature) and on the latest build.
I'll leave this without comment )))
Experiencing the same behavior: works fine in chat mode but not in agent mode.
@ShamanicArts - I just tried with 1.1.47 and got the following error output:

```
"[{\n \"error\": {\n \"code\": 400,\n \"message\": \"Unable to submit request because required fields ['alwaysApply'] are not defined in the schema properties.\",\n \"status\": \"INVALID_ARGUMENT\"\n }\n}\n]"
```
I believe this has been fixed in the latest pre-release. Could folks try version v1.1.57 on VS Code?
@fred-maussion, mind sharing what version you're running?
@Patrick-Erichsen
- OS: macOS Sequoia 15.5
- Continue version: v1.0.15 from Marketplace
- IDE version: VS Code 1.101.2
- Model: Gemini-flash-2.0, Gemini-flash-2.5-05-20

Just tried v1.1.57 and it's working like a charm ;-)
Looks fixed in 1.1.62.
Tested & seems to be fixed!
Thanks for the confirmation all!