No reasoning at all through the GitHub Copilot provider
Description
- Validate thinking in VSCode
- Attempt to get thinking in opencode
opencode.jsonc
{
  "$schema": "https://opencode.ai/config.json",
  "theme": "aura",
  "autoupdate": true,
  "provider": {
    "github-copilot": {
      "models": {
        "[Custom] Opus 4.5 Thinking": {
          "id": "claude-opus-4.5",
          "reasoning": true,
          "options": {
            "thinking": {
              "type": "enabled",
              "budgetTokens": 32000
            }
          }
        }
      }
    }
  }
}
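For context, opencode builds on the Vercel AI SDK, and the options.thinking block above matches the shape the SDK's Anthropic provider expects. Below is a minimal sketch of those same options passed directly through the SDK; the model id and prompt are placeholders, and this is not a claim about how opencode actually forwards the options block.

import { generateText } from "ai";
import { anthropic } from "@ai-sdk/anthropic";

// Same "thinking" shape as in the opencode.jsonc above, passed straight to
// the AI SDK's Anthropic provider. Against Anthropic directly this should
// produce a non-zero reasoning token count; through Copilot it does not.
const result = await generateText({
  model: anthropic("claude-opus-4-5"), // placeholder model id
  prompt: "Are you reasoning?",
  providerOptions: {
    anthropic: {
      thinking: { type: "enabled", budgetTokens: 32000 },
    },
  },
});

console.log(result.reasoning); // thinking output, if the provider returned any
console.log(result.usage);     // compare with the "reasoning": 0 tokens below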
A conversation example (opencode export output):
{
  "info": {
    "id": "ses_4753afc9affenaSxWVHstiDVAN",
    "version": "1.1.1",
    "projectID": "e53faf218247b17967230b742668df6addde9727",
    "directory": "/home/jcarlens/Development/sesm",
    "title": "Checking assistant reasoning",
    "time": {
      "created": 1767559725925,
      "updated": 1767559730475
    },
    "summary": {
      "additions": 0,
      "deletions": 0,
      "files": 0
    }
  },
  "messages": [
    {
      "info": {
        "id": "msg_b8ac50368001zOrnH6Xk5rnVK3",
        "sessionID": "ses_4753afc9affenaSxWVHstiDVAN",
        "role": "user",
        "time": {
          "created": 1767559725933
        },
        "summary": {
          "title": "Asking about reasoning",
          "diffs": []
        },
        "agent": "build",
        "model": {
          "providerID": "github-copilot",
          "modelID": "[Custom] Opus 4.5 Thinking"
        }
      },
      "parts": [
        {
          "id": "prt_b8ac50368002yxYgBf1OX3yO1R",
          "sessionID": "ses_4753afc9affenaSxWVHstiDVAN",
          "messageID": "msg_b8ac50368001zOrnH6Xk5rnVK3",
          "type": "text",
          "text": "Are you reasoning?"
        }
      ]
    },
    {
      "info": {
        "id": "msg_b8ac503730018iRxad0NY14TX9",
        "sessionID": "ses_4753afc9affenaSxWVHstiDVAN",
        "role": "assistant",
        "time": {
          "created": 1767559725939,
          "completed": 1767559730467
        },
        "parentID": "msg_b8ac50368001zOrnH6Xk5rnVK3",
        "modelID": "[Custom] Opus 4.5 Thinking",
        "providerID": "github-copilot",
        "mode": "build",
        "agent": "build",
        "path": {
          "cwd": "/home/jcarlens/Development/sesm",
          "root": "/home/jcarlens/Development/sesm"
        },
        "cost": 0,
        "tokens": {
          "input": 327,
          "output": 77,
          "reasoning": 0,
          "cache": {
            "read": 10534,
            "write": 0
          }
        },
        "finish": "stop"
      },
      "parts": [
        {
          "id": "prt_b8ac5101b001lc4JrFFJfRMDvj",
          "sessionID": "ses_4753afc9affenaSxWVHstiDVAN",
          "messageID": "msg_b8ac503730018iRxad0NY14TX9",
          "type": "step-start",
          "snapshot": "e0066eb511d26f679f2ba4739f14047f10f435aa"
        },
        {
          "id": "prt_b8ac5101c001ZUyiAP75JVK2pe",
          "sessionID": "ses_4753afc9affenaSxWVHstiDVAN",
          "messageID": "msg_b8ac503730018iRxad0NY14TX9",
          "type": "text",
          "text": "No, I'm not using an extended reasoning or \"thinking\" mode. I'm a standard Claude model responding directly to your messages. I process your input and generate responses, but I don't have a separate visible chain-of-thought or reasoning trace that I'm working through before answering.\n\nIs there something specific you'd like help with in your project?",
          "time": {
            "start": 1767559730442,
            "end": 1767559730442
          }
        },
        {
          "id": "prt_b8ac5150c001FbC86cMHHQDIoA",
          "sessionID": "ses_4753afc9affenaSxWVHstiDVAN",
          "messageID": "msg_b8ac503730018iRxad0NY14TX9",
          "type": "step-finish",
          "reason": "stop",
          "snapshot": "e0066eb511d26f679f2ba4739f14047f10f435aa",
          "cost": 0,
          "tokens": {
            "input": 327,
            "output": 77,
            "reasoning": 0,
            "cache": {
              "read": 10534,
              "write": 0
            }
          }
        }
      ]
    }
  ]
}
I also tried putting arbitrary values in the reasoning option; nothing seems to break the API. Maybe the options are simply ignored? (Note the "reasoning": 0 token count in the export above.)
Thanks for your hard work!
Plugins
No response
OpenCode version
1.1.1
Steps to reproduce
- Connect to the GitHub Copilot provider
- Either use built-in Opus 4.5 with any of the built-in variants, or configure a custom model
- Ask anything and look for a chain of thought
Screenshot and/or share link
No response
Operating System
Ubuntu 24.04
Terminal
Ghostty
Note: I did the same kind of exercise with GPT 5.2 (with the reasoning-effort setting); same issue.
I ran mitmproxy to inspect VSCode's request bodies.
Here is a snippet:
{
  "messages": [
    {
      "role": "system",
      "content": "You are an expert AI...",
      "copilot_cache_control": {
        "type": "ephemeral"
      }
    }
  ],
  "model": "claude-opus-4.5",
  "temperature": 0,
  "top_p": 1,
  "max_tokens": 16000,
  "tools": [],
  "n": 1,
  "stream": true,
  "thinking_budget": 15999
}
I removed some irrelevant details.
It seems VSCode defines a thinking_budget field at the root of the JSON payload sent to the /completions endpoint. Interesting.
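To double-check that the field alone is enough, the captured request can be replayed by hand. A minimal sketch, assuming the Copilot chat completions endpoint from the capture and a token from the Copilot auth flow (COPILOT_TOKEN is a placeholder):

// Replays the captured VSCode payload. The endpoint and headers are
// assumptions based on the mitmproxy capture; the token is a placeholder
// for one obtained via the Copilot device/OAuth flow.
const copilotToken = process.env.COPILOT_TOKEN!;

const res = await fetch("https://api.githubcopilot.com/chat/completions", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${copilotToken}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    model: "claude-opus-4.5",
    messages: [{ role: "user", content: "Are you reasoning?" }],
    temperature: 0,
    top_p: 1,
    max_tokens: 16000,
    stream: true,
    thinking_budget: 15999, // root-level field, exactly as VSCode sends it
  }),
});
console.log(res.status);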
GPT 5.2 through Copilot:
{
  "model": "gpt-5.2",
  "input": [
    {
      "role": "system",
      "content": []
    }
  ],
  "stream": true,
  "tools": [],
  "max_output_tokens": 64000,
  "store": false,
  "truncation": "disabled",
  "reasoning": {
    "effort": "high",
    "summary": "detailed"
  },
  "include": [
    "reasoning.encrypted_content"
  ]
}
We can see the reasoning and include fields at the root here too.
From what I understand, this is because VSCode uses the Responses API for this model, while opencode used the Completions API for my test prompts. According to my opencode thread, opencode apparently uses the Responses API only for Codex.
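If that is right, the fix is largely a mapping problem: the same reasoning settings have to land in two different payload shapes depending on the model. Roughly, with field names taken from the two captures above (the type names are illustrative, not opencode's actual code):

// Claude via /chat/completions: budget is a root-level number.
type CompletionsBody = {
  model: string;
  messages: unknown[];
  stream: boolean;
  max_tokens: number;
  thinking_budget?: number;
};

// GPT via the Responses API: reasoning effort/summary plus an "include"
// entry to get the encrypted reasoning content back.
type ResponsesBody = {
  model: string;
  input: unknown[];
  stream: boolean;
  max_output_tokens: number;
  reasoning?: { effort: "low" | "medium" | "high"; summary?: "detailed" };
  include?: string[]; // e.g. ["reasoning.encrypted_content"]
};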
I'm seeing the same issue. Would love to be able to configure thinking/reasoning for Copilot models.
This is a critical bug for anyone who wants to use the Copilot provider, as there is a huge difference for models like GPT 5.2 with reasoning on vs. off.
I'd also like to know if this is getting implemented.
same for me
Would be very helpful to have this, since an Anthropic subscription is not an option anymore.
So is it actually reasoning but we don't see the thinking block, or is it not reasoning at all?
@hazrid93 no reasoning at all.
Hope this gets fixed
@thdxr hope this gets priority
Looking into it
Thank you so much for your hard work @rekram1-node! There is a HUGE pool of Copilot users, since almost every large company that is a Microsoft customer pays for it; this will be insanely helpful.
The issue affects both GPT and Claude models.
We tried directly through Zen as well, and this was the answer we got. That seems to differ from what https://opencode.ai/docs/models/#variants says, since according to the docs it should have said yes.
I had to enable the high variant manually (different from what the docs state).
And here it is again with High via Copilot:
I know others have already confirmed the issue; I'm simply pointing out that there seem to be a lot of inconsistencies. I'm not sure whether the "reasoning" answer is something that would make sense to only be triggered through the thinking tool. Zed (zed.dev), also using Copilot, has the same issue.
@nesl247 you can tell by how long it takes to respond. If you set it to high/xhigh reasoning and give it something complex to solve, it starts outputting non-thinking tokens immediately, which makes it pretty obvious it's not using any tokens for reasoning.
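That heuristic is easy to script by measuring time-to-first-visible-token on a hard prompt. A sketch using the AI SDK's streamText; the model and prompt are placeholders:

import { streamText, type LanguageModel } from "ai";

// A model that is actually reasoning should stall noticeably before the
// first visible token on a hard prompt; a non-reasoning call starts
// streaming almost immediately.
async function timeToFirstToken(model: LanguageModel): Promise<number> {
  const start = Date.now();
  const { textStream } = streamText({
    model,
    prompt: "Prove that the square root of 2 is irrational.",
  });
  for await (const _ of textStream) break; // stop at the first visible chunk
  return Date.now() - start;
}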
I tried asking GPT 5.2 high from GitHub Copilot "are you reasoning?"; half a minute later it replied "yes". To me the issue feels more like the reasoning output is not shown in the UI, but I'm not sure this is true for all models.
+1 - Also experiencing this with claude-opus-4.5 and gpt-5.2-codex via GitHub Copilot. The variants (low/medium/high) show in the UI but don't actually send the correct params to the API.
Yeah, we send the correct fields; they just stream reasoning in a different format than other OpenAI-compatible providers, so the SDK doesn't parse it out. Like I said, will fix.
Hopefully today :)
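In the meantime, the raw stream makes the mismatch easy to see: dump every key that appears on the streamed deltas and look for wherever Copilot puts its reasoning chunks. A minimal diagnostic sketch, assuming an OpenAI-style SSE body on a fetch Response:

// Logs every key seen on streamed chat-completion deltas, so a non-standard
// reasoning field shows up even when the SDK silently drops it.
async function dumpDeltaKeys(res: Response): Promise<void> {
  const reader = res.body!.pipeThrough(new TextDecoderStream()).getReader();
  let buf = "";
  for (;;) {
    const { value, done } = await reader.read();
    if (done) break;
    buf += value;
    const lines = buf.split("\n");
    buf = lines.pop() ?? ""; // keep any partial line for the next chunk
    for (const line of lines) {
      if (!line.startsWith("data: ") || line.includes("[DONE]")) continue;
      const delta = JSON.parse(line.slice(6)).choices?.[0]?.delta ?? {};
      console.log(Object.keys(delta)); // watch for a reasoning-ish key here
    }
  }
}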