
GLM 4.7 on Zai coding plan puts tool calls inside the thinking/reasoning tag.

Open greggh opened this issue 3 weeks ago • 16 comments

Description

This happens to me about 15 times a day, and it forces me to start a new session: once a tool call lands inside the thinking tag, the session keeps doing it, and more errors and failures to call the right tools follow.

After restarting in a new session, it will go an hour or so before doing it again and needing another restart.

OpenCode version

1.0.223

Steps to reproduce

  1. Choose Zai coding plan and give it your API Key.
  2. Use GLM 4.7 for a while.
  3. It usually happens after context goes above 100k.

Screenshot and/or share link

Image

Operating System

macOS 26.1

Terminal

Kitty

p.s. OpenCode is awesome.

greggh avatar Jan 02 '26 22:01 greggh

This issue might be a duplicate of existing issues. Please check:

  • #6486: Error "reasoning part reasoning-0 not found" from tool calls using GLM 4.5 Air

Both issues involve GLM models having problems with reasoning/thinking tags and tool calls, though with different error messages. Checking #6486 may provide additional context for understanding and fixing the issue.

Feel free to ignore if this doesn't address your specific case.

github-actions[bot] avatar Jan 02 '26 22:01 github-actions[bot]

This is actually handy. I was wondering when we would see such a thing. Basically, during the reasoning phase, the model could use a tool call to enrich its reasoning.

arsham avatar Jan 02 '26 22:01 arsham

This is actually handy. I was wondering when we would see such a thing. Basically, during the reasoning phase, the model could use a tool call to enrich its reasoning.

Definitely. I am not saying it's a horrible thing that it's calling a tool while reasoning, but the problem is that the tool call doesn't actually happen, and the session gets into a weird state for me.

greggh avatar Jan 02 '26 23:01 greggh

how consistent is this?

rekram1-node avatar Jan 02 '26 23:01 rekram1-node

@rekram1-node haha, from the docs:

> Interleaved thinking: We support interleaved thinking by default (supported since GLM-4.5), allowing GLM to think between tool calls and after receiving tool results. This enables more complex, step-by-step reasoning: interpreting each tool output before deciding what to do next, chaining multiple tool calls with reasoning steps, and making finer-grained decisions based on intermediate results.

https://docs.z.ai/guides/capabilities/thinking-mode
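
To illustrate what interleaved thinking means for a client: in an OpenAI-compatible stream, tool-call deltas can arrive while `reasoning_content` is still being emitted. A minimal sketch of accumulating such a stream without folding the tool call into the thinking block (the `Delta` shape and the `accumulate` helper here are hypothetical illustrations, not opencode's actual code):

```typescript
// Hypothetical delta shape for an OpenAI-compatible streaming chunk.
interface Delta {
  content?: string;
  reasoning_content?: string;
  tool_calls?: { id?: string; function?: { name?: string; arguments?: string } }[];
}

interface Parsed {
  reasoning: string;
  text: string;
  toolCalls: { name: string; arguments: string }[];
}

// Accumulate a stream of deltas. The key point: a tool-call delta that
// arrives mid-reasoning must still open a real tool-call part, not be
// swallowed into the thinking text.
function accumulate(deltas: Delta[]): Parsed {
  const out: Parsed = { reasoning: "", text: "", toolCalls: [] };
  for (const d of deltas) {
    if (d.reasoning_content) out.reasoning += d.reasoning_content;
    if (d.content) out.text += d.content;
    for (const tc of d.tool_calls ?? []) {
      if (tc.id) {
        // A delta with an id starts a new tool call.
        out.toolCalls.push({ name: tc.function?.name ?? "", arguments: "" });
      } else if (out.toolCalls.length > 0 && tc.function?.arguments) {
        // Deltas without an id append argument fragments to the last call.
        out.toolCalls[out.toolCalls.length - 1].arguments += tc.function.arguments;
      }
    }
  }
  return out;
}
```

If a client instead concatenates everything that arrives before the first `content` delta into the thinking block, the tool call never executes, which matches the stuck-session symptom described above.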

msmirnyagin avatar Jan 03 '26 04:01 msmirnyagin

@msmirnyagin we do send reasoning content back under the reasoning_content field, hmmm

Ill look

The only thing I see that may be an issue is `"clear_thinking": false`, but for the coding plan it says it's enabled by default, hm
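
For context, that flag would sit in the request body to the OpenAI-compatible endpoint. A hypothetical sketch of the relevant fields (only `clear_thinking` and `reasoning_content` come from this thread; the rest is the generic OpenAI-compatible shape, not confirmed against opencode's source):

```json
{
  "model": "glm-4.7",
  "clear_thinking": false,
  "messages": [
    { "role": "user", "content": "Fix the failing test" },
    {
      "role": "assistant",
      "content": "",
      "reasoning_content": "prior turn's thinking, echoed back to the model",
      "tool_calls": [
        {
          "id": "call_1",
          "type": "function",
          "function": { "name": "read_file", "arguments": "{\"path\":\"a.ts\"}" }
        }
      ]
    }
  ]
}
```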

rekram1-node avatar Jan 03 '26 04:01 rekram1-node

@greggh any way you could share a session where this happened?

opencode export > session.json

Are you using any plugins that could be interfering?

rekram1-node avatar Jan 03 '26 04:01 rekram1-node

I also encountered the same issue. It doesn't happen all the time, and I'm not sure what triggers it.

FurryWolfX avatar Jan 03 '26 09:01 FurryWolfX

I have also encountered this situation, especially when the context is long (>128k). I guess this is because glm4.7 was further trained on top of glm4.5, and glm4.5 only supports a 128k context. This would also explain why glm4.7's accuracy declines beyond 100k, why it starts speaking nonsense, and why it mixes Chinese and English.

This is purely speculation without any evidence. Basically, when the context reaches around 100k, I manually execute /compact to avoid this issue.

heimoshuiyu avatar Jan 03 '26 13:01 heimoshuiyu

@greggh any way you could share a session where this happened?

opencode export > session.json

Are you using any plugins that could be interfering?

I can make up a fake session later; I can't share the real work this is happening on (I don't own it, contract work, ...). I'll grab some other code and get it to the point where it fails.

As for consistency, I get it more than 10 times a day.

greggh avatar Jan 03 '26 15:01 greggh

Ahhh, GLM-4.7 on Zai coding plan | +2

Image

ahmedrowaihi avatar Jan 03 '26 18:01 ahmedrowaihi

Happening on my end also

gsxdsm avatar Jan 03 '26 22:01 gsxdsm

any way you could share a session where this happened?

opencode export > session.json

rekram1-node avatar Jan 03 '26 23:01 rekram1-node

@rekram1-node I was having this issue and I created a draft PR that right now fixes it like 60% of the time: https://github.com/anomalyco/opencode/pull/6883 (not 100% because I stopped when I saw a similar issue was closed). Happy to move it forward if there is interest and alignment on fixing it this way :)

ramarivera avatar Jan 05 '26 00:01 ramarivera

So, as per @rekram1-node, the fix is not correct.

ramarivera avatar Jan 05 '26 01:01 ramarivera

any way you could share a session where this happened?

opencode export > session.json

Sorry for the delay, got busy. Here is a log of a session I did yesterday just to attempt to get it to break the same way it does for some of us. I exported the session right after it had the error, so it's in the last message. I had it just playing around with my GitHub skill.

ses_46b95350fffeq8T95l1hKVHwsM.md

greggh avatar Jan 07 '26 20:01 greggh

How is this going for others? I am still having the issue, but not as often. Maybe a 20% decrease in the amount of these errors since I reported it.

greggh avatar Jan 10 '26 17:01 greggh

any way you could share a session where this happened?

opencode export > session.json

Do you need more session examples of this bug occurring, @rekram1-node ?

It has been happening to me, and I haven't yet figured out how to get out of it.

GLM-thinking-tools-errors.md

rickross avatar Jan 10 '26 18:01 rickross

I think the issue in general with GLM 4.7 is that at around 55-60% context usage, i.e. >100K, it degenerates, becomes confused, and turns useless. It is not limited to tool use: it forgets the whole project context at times, ignores instructions, etc., and so far this behavior has been consistent across all my sessions. Granted, I have only had a few sessions with this model so far, but this certainly does not look good.

autistarum avatar Jan 11 '26 02:01 autistarum

@rickross can you send the actual export using the command I sent? You just sent a markdown transcript; I need JSON exports made with the command I attached.

rekram1-node avatar Jan 11 '26 03:01 rekram1-node

@rickross can you send the actual export using the command I sent? You just sent a markdown transcript; I need JSON exports made with the command I attached.

I'm sorry @rekram1-node, but that session is no longer available. If I see it again I will capture it that way for you. Thanks for looking into this.

rickross avatar Jan 11 '26 03:01 rickross