Consecutive "role=user" contents in history
What happened?
The docs say the role for each Content object should alternate between user and model. But there are at least two situations where we don't do that.
- When adding the IDE context.
- When canceling function call requests.
What did you expect to happen?
The history should not contain consecutive `role=user` Content objects; roles should alternate between user and model as documented.
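To make the documented invariant concrete, here is a minimal sketch (my own code, not the CLI's actual implementation) of the alternation rule expressed as a validator over a Content-like history:

```python
# Hypothetical validator for the alternation rule described in the docs.
# The dict shape mirrors the Content objects shown later in this report.

def roles_alternate(history):
    """Return True if no two consecutive entries share the same role."""
    return all(a["role"] != b["role"] for a, b in zip(history, history[1:]))

ok = [
    {"role": "user", "parts": [{"text": "hello"}]},
    {"role": "model", "parts": [{"text": "hi!"}]},
]
bad = [
    {"role": "user", "parts": [{"text": "IDE context"}]},
    {"role": "user", "parts": [{"text": "hello"}]},  # consecutive user roles
]

print(roles_alternate(ok))   # True
print(roles_alternate(bad))  # False
```

Both scenarios above (IDE context, canceled function calls) produce a history for which this check fails.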
Client information
Run `gemini` to enter the interactive CLI, then run the `/about` command.

```
> /about
# paste output here
```
Login information
No response
Anything else we need to know?
No response
This is a bug report: the application creates a conversation history with consecutive `user` roles, which is not the expected behavior. This is a core logic issue. Adding status/need-information, as the reporter has not provided the CLI version they are using.
Additional Scenario: Headless Mode with Prompt Arguments
I can confirm this bug also occurs in headless mode when using prompt arguments (both the deprecated `--prompt` flag and positional arguments). This affects all headless usage, not just the IDE context and function cancellation scenarios.
Environment
- CLI Version: 0.11.3
- Auth Method: OAuth (Google)
- OS: Linux (Docker container)
Command:

```shell
gemini 'message' --output-format stream-json
```
Reproduction
Even in a completely fresh directory with no session state, Gemini CLI duplicates the user message:
```shell
# Test in an empty directory
mkdir /tmp/fresh-test && cd /tmp/fresh-test
gemini 'hello' --output-format stream-json 2>/tmp/error.json
```
Result
The API request context shows three user messages (should be one):
```json
{
  "context": [
    {"role": "user", "parts": [{"text": "System initialization message"}]},
    {"role": "user", "parts": [{"text": "hello"}]},
    {"role": "user", "parts": [{"text": "hello"}]}  // Duplicate!
  ]
}
```
Error
```
INVALID_ARGUMENT: Please ensure that multiturn requests alternate between user and model.
```
Impact
This makes headless mode completely unusable for conversational applications. Every single prompt fails with INVALID_ARGUMENT due to consecutive user messages.
Notes
- Tested with both `--prompt 'message'` (deprecated) and positional `'message'` syntax; both duplicate.
- Occurs even with empty session files (0 messages).
- Occurs in brand-new directories with no `.gemini/state`.
- This is a third scenario beyond the two already documented (IDE context, function cancellation).
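As a data point on what a client-side mitigation could look like: merging consecutive same-role entries before sending would make the failing request shown above valid again. This is a hypothetical sketch (names and approach are mine, not part of the CLI), not a proposed fix for the underlying duplication:

```python
# Hypothetical workaround: fold consecutive entries with the same role into
# one Content-like dict, so the request satisfies the alternation rule.

def merge_consecutive_roles(history):
    merged = []
    for content in history:
        if merged and merged[-1]["role"] == content["role"]:
            # Same role as the previous entry: append its parts instead of
            # emitting a second consecutive entry with that role.
            merged[-1]["parts"].extend(content["parts"])
        else:
            merged.append({"role": content["role"],
                           "parts": list(content["parts"])})
    return merged

# The three consecutive user messages from the failing request above:
history = [
    {"role": "user", "parts": [{"text": "System initialization message"}]},
    {"role": "user", "parts": [{"text": "hello"}]},
    {"role": "user", "parts": [{"text": "hello"}]},
]
print(len(merge_consecutive_roles(history)))  # 1
```

This only masks the symptom; the actual bug is that the prompt is added to the history twice in the first place.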
Happy to provide more details or test patches if needed.
Hmm. That's truly weird, because in my case the code does not throw any kind of error. Also, if non-interactive mode were really broken, we'd be getting far more than one bug report.
What model are you using when this happens?
Hi, thanks for the quick response!
After further investigation, I can confirm the issue is specific to the gemini-2.0-flash-exp model in noninteractive mode.
Environment details:
- Model: `gemini-2.0-flash-exp`
- Command: `gemini --prompt "hello" --output-format stream-json`
- Auth: OAuth (personal Google account)
- Environment: Docker container (Linux)
What I found: When using gemini-2.0-flash-exp, I consistently get duplicate responses in noninteractive mode (--prompt flag). However, when I switched to gemini-2.5-flash, the issue completely disappeared - clean responses with no duplicates.
Reproduction:

```shell
# With gemini-2.0-flash-exp - shows duplicates
gemini --prompt "hello" --model gemini-2.0-flash-exp

# With gemini-2.5-flash - works perfectly
gemini --prompt "hello" --model gemini-2.5-flash
```
This would explain why you haven't seen many reports - the issue is specific to the experimental 2.0 model, not a general noninteractive mode problem.
Thanks for pointing me in the right direction!
Hello! As part of our effort to keep our backlog manageable and focus on the most active issues, we are tidying up older reports.
It looks like this issue hasn't been active for a while, so we are closing it for now. However, if you are still experiencing this bug on the latest stable build, please feel free to comment on this issue or create a new one with updated details.
Thank you for your contribution!