Copilot provider models are so much slower lately
Question
I haven't really used Copilot with anything other than opencode (not using Copilot Chat), but lately it feels like the Copilot models are really slow in my opencode.
idk where to ask about this since it's most likely a Copilot problem, but I'd like to know: is it happening to all of you guys?
It takes 3-5 minutes for Gemini 3 Flash, Sonnet 4.5, and GPT 5.2 to answer a simple question.
This issue might be a duplicate of existing issues. Please check:
- #496: Absurd amount of CPU and memory usage when doing path completion
- #4133: Use a cheap model to name sessions (addresses expensive model slowdown)
- #4385: Poor performance with local models
Feel free to ignore if none of these address your specific case.
I can confirm this.
Can attest to this. It's happening on my opencode CLI too.
Like literally it won't even start working until 10 minutes have passed. No indication of throttling or anything. 5.2-Codex just sits there doing nothing, then it runs one command, then waits another 10 minutes. I'm actually having issues across multiple providers, OpenAI too. OpenCode seems to just not be processing requests today, but I have a hard time believing it's anything to do with OpenCode. Maybe it's just MS infra?