False "rate_limited" error on gpt-5.1-codex in CLI v0.0.358
Describe the bug
When attempting to use the newly released gpt-5.1-codex model via the Copilot CLI, the request fails immediately with a rate_limited error.
This appears to be a false error for the following reasons:
- My Copilot Pro account has premium requests remaining.
- The rate_limited error happens only for the gpt-5.1-codex model.
- All other models work correctly from the CLI.
- The gpt-5.1-codex model has already been successfully enabled in my account's Copilot settings (via the web UI).
Affected version
0.0.358 (Commit: f5a8b1e76)
Steps to reproduce the behavior
- Ensure you have a Copilot Pro account with premium requests available.
- Go to GitHub settings and enable the gpt-5.1-codex model.
- From the terminal, run any Copilot CLI command, specifying the model:
  copilot --model gpt-5.1-codex "read the directory"
- Observe the immediate error (a scripted repro is sketched below).
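For convenience, here is a minimal shell sketch of the repro. It only uses the command shown above and assumes the copilot CLI is on PATH and already authenticated against a Copilot Pro account:

```bash
#!/usr/bin/env bash
# Minimal repro sketch: run the prompt from the steps above against
# gpt-5.1-codex and report the exit status. Assumes `copilot` is installed
# and the account is already authenticated.
set -u

MODEL="gpt-5.1-codex"   # the model that triggers the false rate_limited error

copilot --model "$MODEL" "read the directory"
echo "copilot exited with status $?"
```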
Expected behavior
The Copilot CLI should connect to the gpt-5.1-codex model and execute the prompt, consuming premium requests as expected.
Actual behavior
The CLI immediately fails and prints the rate_limited error message:
✗ Model call failed: {"message":"Rate limit exceeded. Please review our [Terms of Service](https://docs.github.com/en/site-policy/github-terms/github-terms-of-service).","code":"rate_limited"}
Additional context
Environment & Context
- OS: Windows 11 Pro (64-bit operating system, x64-based processor)
- Terminal: Windows Terminal
- Shell: PowerShell 7.5.4
- Hardware: AMD Ryzen 9 9950X3D, 64.0 GB RAM
- Subscription: Copilot Pro
- Troubleshooting Steps Taken:
  - Verified all other models work.
  - Verified gpt-5.1-codex is enabled in settings.
  - Logged out and logged back in.
  - Performed a clean uninstall and reinstall of the 0.0.358 CLI.
- Related Issues: This seems related to, or a continuation of, Issue #558, which reported access problems for this model on v0.0.357. The rate_limited error may be the same underlying problem surfacing as a new symptom in v0.0.358.
🔍 Additional Investigation - Linux Environment
I can confirm this issue on Linux Mint / Ubuntu 24.04 as well:
Environment
- OS: Linux Mint Vertice (Kernel 6.14.0-35-generic)
- Node.js: v22.20.0
- npm: 10.9.3
- Copilot CLI: 0.0.358 (Commit: f5a8b1e76)
- Subscription: Copilot Pro
Test Results
✅ gpt-5 works:

```
$ copilot --model gpt-5 -p "say hello" --allow-all-tools
Hello!
Total usage est: 1 Premium request
```

❌ gpt-5.1-codex fails immediately:

```
$ copilot --model gpt-5.1-codex -p "test" --allow-all-tools
Model call failed: {"message":"Rate limit exceeded. Please review our [Terms of Service](https://docs.github.com/en/site-policy/github-terms/github-terms-of-service).","code":"rate_limited"} (Request ID: 8F3E:31FAD5:2A47F1:3B12F6:691A46EE)
```
Key Observations
- Cross-platform issue: Now confirmed on Windows 11 (original report) and Linux
- Instant failure: the error is returned almost immediately, with no noticeable network delay (see the timing sketch after this list)
- Other models working: gpt-5, gpt-5.1, and gpt-5.1-codex-mini all work fine
- Premium requests available: Account has remaining quota
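To make the observations above reproducible (instant failure, other models unaffected), here is a rough timing sketch. It only uses flags already shown in the transcripts above, and the model list is the one from this report:

```bash
#!/usr/bin/env bash
# Rough check: run the same prompt against each model mentioned in this issue
# and report elapsed time plus exit status.
set -u

for model in gpt-5 gpt-5.1 gpt-5.1-codex-mini gpt-5.1-codex; do
  start=$(date +%s)
  if copilot --model "$model" -p "say hello" --allow-all-tools >/dev/null 2>&1; then
    status="ok"
  else
    status="failed"
  fi
  end=$(date +%s)
  echo "$model: $status after $((end - start))s"
done
```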
Hypothesis
This appears to be a model validation or permission check issue in the CLI, not an actual rate limit from the API:
- The error is immediate (no network delay)
- Request ID is generated, suggesting it reaches GitHub's API
- Possibly related to the fix in #558: was the model re-enabled with incorrect permission flags?
Would the maintainers be able to check if gpt-5.1-codex has different rate limit rules or requires additional account permissions compared to other gpt-5.1 variants?
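In case it helps triage, here is a small sketch for pulling the Request ID out of the failing call so more samples can be attached to this issue. It assumes the error text is written to stderr (an assumption on my part; adjust the redirection if it actually goes to stdout), and the grep pattern simply matches the message format shown above:

```bash
#!/usr/bin/env bash
# Capture the failing call's output and extract the Request ID so it can be
# reported here. The 2>&1 redirection assumes the error goes to stderr.
set -u

out=$(copilot --model gpt-5.1-codex -p "test" --allow-all-tools 2>&1)
echo "$out" | grep -o 'Request ID: [A-F0-9:]*'
```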
Copilot 0.0.360 still has this bug
Yes, 0.0.360 still has the bug (CachyOS, Linux).
Fixed for me in 0.0.362 (Windows 10)