GLM 4.7 on Zai coding plan puts tool calls inside the thinking/reasoning tag.
Description
This happens to me about 15 times a day, and it forces me to start a new session: once a call lands in the thinking tag it keeps happening, and more errors and failures to call the right tools start to occur.
After restarting in a new session, it will go an hour or so before doing it again and needing another restart.
OpenCode version
1.0.223
Steps to reproduce
- Choose Zai coding plan and give it your API Key.
- Use GLM 4.7 for a while.
- It usually happens after context goes above 100k.
Operating System
macOS 26.1
Terminal
Kitty
p.s. OpenCode is awesome.
This issue might be a duplicate of existing issues. Please check:
- #6486: Error "reasoning part reasoning-0 not found" from tool calls using GLM 4.5 Air
Both issues involve GLM models having problems with reasoning/thinking tags and tool calls, though with different error messages. Checking #6486 may provide additional context for understanding and fixing the issue.
Feel free to ignore if this doesn't address your specific case.
This is actually handy. I was wondering when we would see such thing. Basically during the reasoning phase, the model could use a tool call to enrich its reasoning.
Definitely. I am not saying it's a horrible thing that it's calling a tool while reasoning, but it isn't great that the tool call doesn't actually happen and the session gets into a weird state for me.
how consistent is this?
@rekram1-node haha, from the docs (https://docs.z.ai/guides/capabilities/thinking-mode):

> Interleaved thinking: We support interleaved thinking by default (supported since GLM-4.5), allowing GLM to think between tool calls and after receiving tool results. This enables more complex, step-by-step reasoning: interpreting each tool output before deciding what to do next, chaining multiple tool calls with reasoning steps, and making finer-grained decisions based on intermediate results.
@msmirnyagin we do send reasoning content back under the `reasoning_content` field, hmmm
I'll look
The only thing I see that may be an issue is `"clear_thinking": false`, but for the coding plan it says it's enabled by default, hm
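For reference, here is roughly what a request with those settings could look like. This is a sketch assuming Z.ai's OpenAI-compatible chat endpoint; the `thinking` and `clear_thinking` shapes are taken from the docs link and the comment above and should be treated as unverified.

```python
# Hypothetical request body for a GLM call with thinking enabled. Both the
# "thinking" object and the "clear_thinking" flag are assumptions based on
# the discussion above, not a verified spec.
payload = {
    "model": "glm-4.7",
    "messages": [{"role": "user", "content": "list the files in src/"}],
    "thinking": {"type": "enabled"},  # interleaved thinking, on by default per docs
    "clear_thinking": False,          # the suspect setting mentioned above
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "list_files",  # illustrative tool, not an OpenCode tool
                "description": "List files in a directory",
                "parameters": {
                    "type": "object",
                    "properties": {"path": {"type": "string"}},
                    "required": ["path"],
                },
            },
        }
    ],
}
```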
@greggh any way you could share a session where this happened?
`opencode export > session.json`
Are you using any plugins that could be interfering?
I also encountered the same issue. It doesn't happen all the time, and I'm not sure what triggers it.
I have also encountered this, especially when the context is long (>128k). My guess is that GLM-4.7 was further trained on top of GLM-4.5, and GLM-4.5 only supports a 128k context. That would also explain why GLM-4.7's accuracy declines beyond 100k, why it starts speaking nonsense, and why it mixes Chinese and English.
This is pure speculation without any evidence. Basically, when the context reaches around 100k, I manually run /compact to avoid the issue.
> @greggh any way you could share a session where this happened? `opencode export > session.json` Are you using any plugins that could be interfering?
I can make up a fake session later; the real work this is happening on I can't share (I don't own it, contract, ...). I'll grab some other code and get it to the point where it fails.
As for consistency, I get it more than 10 times a day.
Ahhh, GLM-4.7 on Zai coding plan
Happening on my end also
> any way you could share a session where this happened? `opencode export > session.json`
@rekram1-node I was having this issue and I created a draft PR that right now fixes it about 60% of the time: https://github.com/anomalyco/opencode/pull/6883 (not 100% because I stopped when I saw a similar issue was closed). Happy to move it forward if there is interest and alignment in fixing it this way :)
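For anyone curious what a mitigation along these lines could look like, here is a rough hypothetical sketch (not necessarily what the PR does): scan the returned reasoning text for tool-call markup that leaked into the thinking block and extract it, so it can be replayed as a real tool call instead of being lost. The `<tool_call>...</tool_call>` tag shape is an assumption about the model's template, not a verified format.

```python
import re

# Assumed shape of leaked tool-call markup inside reasoning text.
TOOL_CALL_RE = re.compile(r"<tool_call>(.*?)</tool_call>", re.DOTALL)

def split_reasoning(reasoning: str) -> tuple[str, list[str]]:
    """Return the reasoning text with leaked tool-call blocks removed,
    plus the extracted block contents in order of appearance."""
    leaked = [m.group(1).strip() for m in TOOL_CALL_RE.finditer(reasoning)]
    cleaned = TOOL_CALL_RE.sub("", reasoning).strip()
    return cleaned, leaked

# Illustrative input mimicking the bug: a tool call embedded in thinking text.
cleaned, leaked = split_reasoning(
    'I should check the file.<tool_call>read_file {"path": "a.py"}</tool_call>Then continue.'
)
```

A real fix would then need to turn each extracted block back into a proper tool-call message, which is presumably where the remaining 40% of failures live (malformed or truncated markup).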
So, as per @rekram1-node, the fix is not correct.
> any way you could share a session where this happened? `opencode export > session.json`
Sorry for the delay, I got busy. Here is a log of a session I did yesterday, just trying to get it to break the same way it does for some of us. I exported the session right after it hit the error, so it's in the last message. I had it just playing around with my GitHub skill.
How is this going for others? I am still having the issue, but not as often: maybe a 20% decrease in the number of these errors since I reported it.
> any way you could share a session where this happened? `opencode export > session.json`
Do you need more session examples of this bug occurring, @rekram1-node ?
It has been happening to me, and I haven't yet figured out how to get out of it.
I think the issue in general with GLM 4.7 is that at around 55-60% of the context window, i.e. >100K tokens, it degenerates, becomes confused, and is rendered useless. It's not limited to tool use: it forgets the whole project context at times, instructions, etc., and so far this behavior has been consistent across all my sessions. Granted, I have only had a few sessions with this model so far, but this certainly does not look good.
@rickross can u send the actual export using the command I sent? You just sent a markdown transcript, I need json exports using the command I attached
I'm sorry @rekram1-node, but that session is no longer available. If I see it again I will capture it that way for you. Thanks for looking into this.