Default Codex context length not matching Pro plan
Description
In the code, the context length for GPT 5.2 Codex is set to 400,000 tokens, which is technically correct for the API, but on the ChatGPT Pro plan I believe you only get roughly 250-260K. So if the average user signs up with ChatGPT and uses it, they are likely to hit a context error, which forces compaction. It might make sense to detect whether it's a ChatGPT subscription versus an API integration and automatically configure the defaults accordingly.
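Something like this is what I have in mind. It's only a sketch: the names (`AuthInfo`, `resolveCodexContextLimit`) and the auth check are hypothetical, not opencode's actual internals, and the 272K figure is just what the subscription seems to cap out at in practice.

```ts
// Sketch only: AuthInfo and resolveCodexContextLimit are hypothetical names,
// not opencode's actual internals.
type AuthInfo = {
  // "chatgpt" = signed in via the ChatGPT/Codex subscription flow,
  // "api-key" = plain OpenAI API key
  kind: "chatgpt" | "api-key";
};

const API_CONTEXT = 400_000;     // GPT 5.2 Codex limit when billed via the API
const CHATGPT_CONTEXT = 272_000; // rough limit observed on Plus/Pro subscriptions

function resolveCodexContextLimit(auth: AuthInfo): number {
  // Default to the smaller ChatGPT window for subscription auth so compaction
  // can trigger before the provider rejects the request.
  return auth.kind === "chatgpt" ? CHATGPT_CONTEXT : API_CONTEXT;
}
```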
Plugins
No response
OpenCode version
1.1.111
Steps to reproduce
No response
Screenshot and/or share link
No response
Operating System
No response
Terminal
No response
This issue might be a duplicate of existing issues. Please check:
- #6071: [BUG]: GPT 5.2 context_length_exceeded / Your input exceeds the context window of this model - reports the same context limit mismatch issue with GPT 5.2 (showing 270k tokens when API limit is 400k)
Feel free to ignore if this doesn't address your specific case.
To be clear, you have to compact either way, whether you run out of context space abruptly or not. It's just that with plugins like OhMyOpencode, compaction can happen earlier, at a more optimal point, rather than at the last second when you fully run out of tokens. So having a correct representation of the context limit by default would be nice for a lot of users.
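To illustrate the timing point (numbers from above; the threshold and function are made up, not how opencode actually decides):

```ts
// Made-up illustration of why the configured limit matters for when compaction fires.
const COMPACT_AT = 0.8; // e.g. compact once 80% of the window is used

function shouldCompact(usedTokens: number, configuredContext: number): boolean {
  return usedTokens >= configuredContext * COMPACT_AT;
}

// With the API default of 400K the trigger point is 320K, but a ChatGPT Pro
// session hard-errors near 272K, so compaction never fires in time:
shouldCompact(270_000, 400_000); // false -> context_length_exceeded error first
shouldCompact(270_000, 272_000); // true  -> compaction happens before the limit
```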
PS: love the hook system, custom hooks on compaction are a godsend
ah okay ill look into it
@rekram1-node Is there some way I can modify my opencode JSON to fix this without having to redefine all of the models and variants that come by default with the new official Codex integration? Just as a temporary workaround.
ig show me ur config but u can do partial model overrides u dont need complete redefinitions
Are there docs on partial model overrides?
Updated my config with this and it worked. Did not know that it did config merging:
"provider": {
"openai": {
"models": {
"gpt-5.2": {
"limit": { "context": 272000, "output": 128000 }
},
"gpt-5.2-codex": {
"limit": { "context": 272000, "output": 128000 }
},
"gpt-5.1-codex-max": {
"limit": { "context": 272000, "output": 128000 }
},
"gpt-5.1-codex-mini": {
"limit": { "context": 272000, "output": 128000 }
}
}
}
}
nvm it did not merge. it overwrote the defaults
I have the exact same issue, always when just short of 270K tokens have been used, which matches the available token count when using the Pro/Plus sub in Codex.
openai and codex should probably become separate providers.
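Purely hypothetical shape of what that split could look like ("openai-codex" is not a real provider id today, it's just to show the idea):

```ts
// Hypothetical shape only; "openai-codex" is not an existing provider id.
const providers = {
  openai: {
    // API-key billing: full 400K window
    models: { "gpt-5.2-codex": { limit: { context: 400_000, output: 128_000 } } },
  },
  "openai-codex": {
    // ChatGPT Plus/Pro subscription: ~272K window
    models: { "gpt-5.2-codex": { limit: { context: 272_000, output: 128_000 } } },
  },
};
```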
Guys ill try to fix this real soon, does anyone have a link to where the limits are defined? If not ill try to find it
This should help: https://github.com/openai/codex/blob/main/codex-rs/core/src/models_manager/model_info.rs
i use github copilot subscription and my context window is always 100k doesnt matter which model i use (even opus/gemini pro)
GPT 5.2 via OpenAI API should have 400K, right?
> i use github copilot subscription and my context window is always 100k doesnt matter which model i use (even opus/gemini pro)
Yes, all models have an artificially low context window with GitHub Copilot. It's the same whether you use GitHub Copilot directly or use the subscription in opencode or any other tool.