Auto-compaction enabled by default, but context_length_exceeded errors still occur in agent workflows
Description
According to the opencode documentation, auto-compaction is enabled by default:
https://opencode.ai/docs/config/#compaction
Example default configuration:
```json
{
  "$schema": "https://opencode.ai/config.json",
  "compaction": {
    "auto": true,
    "prune": true
  }
}
```
However, I am still encountering frequent context_length_exceeded errors during agent execution, which suggests that auto-compaction may not be triggering as expected.
Error message
```json
{
  "type": "error",
  "sequence_number": 2,
  "error": {
    "type": "invalid_request_error",
    "code": "context_length_exceeded",
    "message": "Your input exceeds the context window of this model. Please adjust your input and try again.",
    "param": "input"
  }
}
```
Expected behavior
With `compaction.auto = true` and `compaction.prune = true`, I would expect:
- Context compaction to occur automatically before the model's context window is exceeded.
- Agent and sub-agent workflows to be protected from `context_length_exceeded` errors in long-running or multi-step executions.
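For clarity, here is a minimal sketch of the check I would expect an auto-compaction loop to perform before each request. All names (`Message`, `estimateTokens`, `shouldCompact`, the 80% threshold) are illustrative assumptions, not opencode's actual internals:

```typescript
// Hypothetical sketch of a pre-request auto-compaction check.
// Not opencode's real API; names and thresholds are assumptions.

interface Message {
  role: string;
  content: string;
}

// Rough token estimate using the common ~4 characters/token heuristic.
function estimateTokens(messages: Message[]): number {
  const chars = messages.reduce((sum, m) => sum + m.content.length, 0);
  return Math.ceil(chars / 4);
}

// True when the conversation should be compacted *before* the next
// request, i.e. while there is still headroom below the context window.
function shouldCompact(
  messages: Message[],
  contextWindow: number,
  threshold = 0.8 // compact at ~80% of the window, not at 100%
): boolean {
  return estimateTokens(messages) >= contextWindow * threshold;
}
```

The key point is that the check has to fire with headroom to spare; compacting only once the window is already exceeded produces exactly the error above.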
Environment
- opencode version: 1.1.14
- Model: GPT-5.2 (OpenAI)
- Authentication: OpenAI OAuth via /connect
- Usage pattern: Build agent invoking multiple custom sub-agents
Questions / Clarification
- Is auto-compaction currently supported for agent and sub-agent contexts, or only for top-level interactions?
- Are there known limitations when using multiple agents that could cause compaction not to trigger?
- Is additional configuration required to ensure compaction works correctly in agent-based workflows?
Any guidance or confirmation of expected behavior would be greatly appreciated. Thanks for the great project!
Plugins
No response
OpenCode version
1.1.4
Steps to reproduce
- The error appears to occur inside an agent execution, not during a single prompt.
- I am using the build agent, which in turn invokes multiple custom sub-agents.
- Over time, the accumulated context across agent/sub-agent interactions seems to exceed the model’s context window.
- Auto-compaction does not appear to reduce or prune context early enough to prevent this error.
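The accumulation described in the steps above can be modeled minimally as follows. This is a hypothetical illustration of the failure mode, not opencode's actual agent loop:

```typescript
// Hypothetical model of context growth when a parent agent appends every
// sub-agent result to its own transcript without pruning between steps.
function runParentAgent(
  subAgentResults: string[],
  contextWindow: number
): { totalChars: number; overflowed: boolean } {
  let transcript = "";
  for (const result of subAgentResults) {
    transcript += result; // nothing is compacted or pruned between steps
  }
  // ~4 chars/token heuristic; overflow once the estimate exceeds the window
  const estimatedTokens = Math.ceil(transcript.length / 4);
  return {
    totalChars: transcript.length,
    overflowed: estimatedTokens > contextWindow,
  };
}
```

Even if each sub-agent result is modest on its own, the sum grows monotonically, which matches the behavior I observe in long multi-agent runs.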
Screenshot and/or share link
No response
Operating System
Ubuntu 24.04
Terminal
Ghostty
This issue might be a duplicate of existing issues. Please check:
- #6286: Compaction Failing to Run on Time
- #6068: [BUG]: Auto compaction not triggering reliably with Anthropic models
- #6535: Auto-compaction causes the subagent to lose its original context, possibly the system prompt as well
- #3031: Model in BUILD mode does not have enough context to continue after compaction
- #3153: Feature: Subagents should support auto compaction
Feel free to ignore if none of these address your specific case.
I've also been having this occur pretty frequently (2x a day?) as I've been testing Opencode more extensively the last few days. I'm using Codex 5.2 XHigh, and this typically happens on long running tasks.
Not sure if this is at all helpful, but Codex thinks the huge git diff output was the culprit:
User: So what were you doing when we got this error and I had to compact your context?{"type":"error","sequence_number":2,"error":{"type":"invalid_request_error","code":"context_length_exceeded","message":"Your input exceeds the context window of this model. Please adjust your input and try again.","param":"input"}}
Opencode: Right when the context overflow happened, I was in the review step after you said “Continue.” I ran git status -sb and git diff to check uncommitted changes, and the diff for resources/js/features/wallet/providers/web3-provider.tsx was huge and got truncated. I was about to use the explore agent to inspect that file and then proceed with formatting/tests—no new edits were made at that moment.
User: So what do you think caused you to exceed the context length?
Opencode: Most likely the full git diff output (especially the large web3-provider.tsx diff) plus the very long environment/AGENTS instructions pushed the input over the model’s context window. The tool returned a truncated diff, which is a common sign that the response was too large and likely triggered the overflow.
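If the oversized `git diff` output really is the trigger, one mitigation would be capping the size of any single tool result before it enters the conversation. A minimal sketch, assuming hypothetical names (`capToolOutput` and the 20k-character limit are illustrative, not an opencode setting):

```typescript
// Hypothetical mitigation: truncate oversized tool output (e.g. a huge
// `git diff`) before it is appended to the agent's context.
const MAX_TOOL_OUTPUT_CHARS = 20_000; // illustrative limit, not a real config key

function capToolOutput(output: string, limit = MAX_TOOL_OUTPUT_CHARS): string {
  if (output.length <= limit) return output;
  const omitted = output.length - limit;
  return output.slice(0, limit) + `\n[... ${omitted} characters truncated ...]`;
}
```

On the user side, running `git diff --stat` first and then diffing only the relevant files is a lighter-weight way to avoid pulling an enormous diff into context at all.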
Yeah, there is a bug specific to some OpenAI models; it should be fixed very soon.