Default Codex context length not matching Pro plan

Open sam-ulrich1 opened this issue 4 weeks ago • 13 comments

Description

In the code, the context length for GPT 5.2 Codex is set to 400,000 tokens, which is technically correct for the API, but on the ChatGPT Pro plan I believe you only get around 250–260k. So the average user who signs up with ChatGPT is likely to hit a context error, which forces compaction. It might make sense to detect whether it's a ChatGPT login versus an API integration and automatically configure the defaults accordingly.
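
For illustration, the difference would just be the per-model limit block, something like the following (the numbers are the API limit and the approximate plan ceiling; field names follow opencode's model config):

  ChatGPT plan: "gpt-5.2-codex": { "limit": { "context": 272000, "output": 128000 } }
  API key:      "gpt-5.2-codex": { "limit": { "context": 400000, "output": 128000 } }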

Plugins

No response

OpenCode version

1.1.111

Steps to reproduce

No response

Screenshot and/or share link

No response

Operating System

No response

Terminal

No response

sam-ulrich1 avatar Jan 10 '26 20:01 sam-ulrich1

This issue might be a duplicate of existing issues. Please check:

  • #6071: [BUG]: GPT 5.2 context_length_exceeded / Your input exceeds the context window of this model - reports the same context limit mismatch issue with GPT 5.2 (showing 270k tokens when API limit is 400k)

Feel free to ignore if this doesn't address your specific case.

github-actions[bot] avatar Jan 10 '26 20:01 github-actions[bot]

To be clear, you have to compact whether or not you run out of context space abruptly. It's just that with plugins like OhMyOpencode, compaction can happen earlier, at a more optimal point, rather than at the last second when you fully run out of tokens. So having a correct representation of the context window by default would be nice for a lot of users.

PS: love the hook system, custom hooks on compaction are a godsend

sam-ulrich1 avatar Jan 10 '26 21:01 sam-ulrich1

ah okay, I'll look into it

rekram1-node avatar Jan 10 '26 21:01 rekram1-node

@rekram1-node Is there some way I can modify my opencode JSON to fix this without having to redefine all of the models and variants that ship by default with the new official Codex integration? Just as a temporary workaround.

sam-ulrich1 avatar Jan 10 '26 22:01 sam-ulrich1

I guess show me your config, but you can do partial model overrides; you don't need complete redefinitions.

rekram1-node avatar Jan 11 '26 03:01 rekram1-node

Are there docs on partial model overrides?

PSU3D0 avatar Jan 11 '26 19:01 PSU3D0

Updated my config with this and it worked. Did not know that it did config merging:

  "provider": {
    "openai": {
      "models": {
        "gpt-5.2": {
          "limit": { "context": 272000, "output": 128000 }
        },
        "gpt-5.2-codex": {
          "limit": { "context": 272000, "output": 128000 }
        },
        "gpt-5.1-codex-max": {
          "limit": { "context": 272000, "output": 128000 }
        },
        "gpt-5.1-codex-mini": {
          "limit": { "context": 272000, "output": 128000 }
        }
      }
    }
  }

sam-ulrich1 avatar Jan 11 '26 19:01 sam-ulrich1

Never mind, it did not merge. It overwrote the defaults.

sam-ulrich1 avatar Jan 11 '26 19:01 sam-ulrich1
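
If partial overrides replace the built-in entry rather than merge with it, a heavier workaround is to redefine each affected model completely. A sketch for one model, assuming a name field alongside limit; any other defaults would need to be copied from opencode's built-in definitions:

  "provider": {
    "openai": {
      "models": {
        "gpt-5.2-codex": {
          "name": "GPT-5.2 Codex",
          "limit": { "context": 272000, "output": 128000 }
        }
      }
    }
  }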

I have the exact same issue, always when I've used just short of 270K tokens, which matches the available token count when using the Pro/Plus sub in Codex.

FBakkensen avatar Jan 12 '26 08:01 FBakkensen

openai and codex should probably become separate providers.

kostrse avatar Jan 12 '26 15:01 kostrse

Guys, I'll try to fix this real soon. Does anyone have a link to where the limits are defined? If not I'll try to find it.

rekram1-node avatar Jan 12 '26 18:01 rekram1-node

Guys, I'll try to fix this real soon. Does anyone have a link to where the limits are defined? If not I'll try to find it.

This should help: https://github.com/openai/codex/blob/main/codex-rs/core/src/models_manager/model_info.rs

NERO2k avatar Jan 12 '26 18:01 NERO2k

I use a GitHub Copilot subscription and my context window is always 100k, no matter which model I use (even Opus/Gemini Pro).

mykola-dev avatar Jan 13 '26 11:01 mykola-dev

GPT 5.2 via the OpenAI API should have 400K, right?

kostrse avatar Jan 14 '26 05:01 kostrse

I use a GitHub Copilot subscription and my context window is always 100k, no matter which model I use (even Opus/Gemini Pro).

Yes, all models have an artificially low context window with GitHub Copilot. It is the same whether you use GitHub Copilot directly, or use the subscription in opencode or any other tool.

FBakkensen avatar Jan 14 '26 05:01 FBakkensen
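
If the ~100k Copilot ceiling is enforced server-side, the same override pattern from earlier in the thread could at least make compaction trigger before the server rejects a request. A sketch, subject to the same merge caveat above; the github-copilot provider id and the model id here are assumptions:

  "provider": {
    "github-copilot": {
      "models": {
        "claude-opus-4": {
          "limit": { "context": 100000 }
        }
      }
    }
  }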