Error "reasoning part reasoning-0 not found" from tool calls using GLM 4.5 Air
Description
I just updated OpenCode from v1.0.163 to v1.0.218, and now tool calls using GLM 4.5 Air fail with the error "reasoning part reasoning-0 not found". Before the update, the model called tools without issue. The tool calls appear to complete, and the model can see their results, but it cannot respond afterwards. When prompted to continue, it responded as if no error had occurred for a while, until eventually every prompt to continue produced the error.
OpenCode version
1.0.218
Steps to reproduce
- Load GLM 4.5 Air (I'm running it locally in LM Studio)
- Have the model attempt a tool call
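The steps above can be reproduced outside the TUI with a direct request to the OpenAI-compatible endpoint, which helps narrow down whether the reasoning parts are mangled by the server or by OpenCode. This is a sketch only: the base URL and model id are assumptions taken from the config in this thread, and `get_weather` is a dummy tool invented to provoke a tool call.

```python
import json
import urllib.request

BASE_URL = "http://192.168.0.2:1234/v1"  # LM Studio server (assumption)

# Chat completion payload with a dummy tool so the model attempts a tool call.
payload = {
    "model": "glm-4.5-air",
    "messages": [{"role": "user", "content": "What is the weather in Paris?"}],
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical tool, for illustration only
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
    "stream": False,
}

def reproduce() -> dict:
    """POST the payload and return the parsed response; inspect
    choices[0].message for reasoning content and tool_calls."""
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(json.dumps(reproduce(), indent=2))
```

Comparing the raw response here with what OpenCode renders should show whether the reasoning part already arrives malformed from LM Studio.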
Screenshot and/or share link
Operating System
Windows 10
Terminal
Powershell 7
This issue might be a duplicate of existing issues. Please check:
- #6039: Malformed thinking block in toolcall (glm-4.7-free) - Similar issue with GLM models reporting errors in thinking/reasoning blocks during tool calls
- #6418: Error: Invalid signature in thinking block - Related issue with thinking block errors when switching between GLM models and other providers
Feel free to ignore if none of these address your specific case.
Can you share your config?
I also tried it with no options defined for the model, since I'm not sure whether those options are model-specific. Here's my config:
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "lmstudio": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "LM Studio (local)",
      "options": {
        "baseURL": "http://192.168.0.2:1234/v1"
      },
      "models": {
        "qwen3-30b-a3b-thinking-2507": {
          "name": "Qwen3 30B A3B Thinking 2507",
          "options": {
            "reasoningEffort": "high",
            "textVerbosity": "low",
            "reasoningSummary": "auto"
          }
        },
        "qwen3-coder-30b-a3b-instruct": {
          "name": "Qwen3 Coder 30B A3B Instruct"
        },
        "gpt-oss-120b": {
          "name": "GPT OSS 120B",
          "options": {
            "reasoningEffort": "high",
            "textVerbosity": "low",
            "reasoningSummary": "auto"
          }
        },
        "bytedance/seed-oss-36b": {
          "name": "Seed OSS 36B",
          "options": {
            "reasoningEffort": "high",
            "textVerbosity": "low",
            "reasoningSummary": "auto"
          }
        },
        "glm-4.5-air": {
          "name": "GLM 4.5 Air",
          "options": {
            "reasoningEffort": "high",
            "textVerbosity": "low",
            "reasoningSummary": "auto"
          }
        },
        "mistralai/devstral-small-2-2512": {
          "name": "Devstral 2 Small 2512",
          "options": {
            "reasoningEffort": "high",
            "textVerbosity": "low",
            "reasoningSummary": "auto"
          }
        },
        "arliai_glm-4.5-air-derestricted": {
          "name": "GLM 4.5 Air Derestricted",
          "options": {
            "reasoningEffort": "high",
            "textVerbosity": "low",
            "reasoningSummary": "auto"
          }
        },
        "kwaipilot_kat-dev": {
          "name": "Kat-Dev",
          "options": {
            "reasoningEffort": "high",
            "textVerbosity": "low",
            "reasoningSummary": "auto"
          }
        },
        "gpt-oss-20b": {
          "name": "GPT OSS 20B",
          "options": {
            "reasoningEffort": "high",
            "textVerbosity": "low",
            "reasoningSummary": "auto"
          }
        },
        "nemotron-3-nano-30b-a3b": {
          "name": "Nemotron 3 Nano 30B",
          "options": {
            "reasoningEffort": "high",
            "textVerbosity": "low",
            "reasoningSummary": "auto"
          }
        }
      }
    }
  },
"keybinds": {
"leader": "ctrl+z",
"app_exit": "ctrl+c,ctrl+d,<leader>q",
"editor_open": "<leader>e",
"theme_list": "<leader>t",
"sidebar_toggle": "<leader>b",
"username_toggle": "none",
"status_view": "<leader>s",
"session_export": "<leader>x",
"session_new": "<leader>n",
"session_list": "<leader>l",
"session_timeline": "<leader>g",
"session_share": "none",
"session_unshare": "none",
"session_interrupt": "escape",
"session_compact": "<leader>c",
"session_child_cycle": "<leader>+right",
"session_child_cycle_reverse": "<leader>+left",
"messages_page_up": "pageup",
"messages_page_down": "pagedown",
"messages_half_page_up": "ctrl+alt+u",
"messages_half_page_down": "ctrl+alt+d",
"messages_first": "ctrl+g,home",
"messages_last": "ctrl+alt+g,end",
"messages_copy": "<leader>y",
"messages_undo": "<leader>u",
"messages_redo": "<leader>r",
"messages_last_user": "none",
"messages_toggle_conceal": "<leader>h",
"model_list": "<leader>m",
"model_cycle_recent": "f2",
"model_cycle_recent_reverse": "shift+f2",
"command_list": "ctrl+p",
"agent_list": "<leader>a",
"agent_cycle": "tab",
"agent_cycle_reverse": "shift+tab",
"input_clear": "ctrl+c",
"input_paste": "ctrl+v",
"input_submit": "return",
"input_newline": "shift+return,ctrl+return,alt+return,ctrl+j",
"input_move_left": "left,ctrl+b",
"input_move_right": "right,ctrl+f",
"input_move_up": "up",
"input_move_down": "down",
"input_select_left": "shift+left",
"input_select_right": "shift+right",
"input_select_up": "shift+up",
"input_select_down": "shift+down",
"input_line_home": "ctrl+a",
"input_line_end": "ctrl+e",
"input_select_line_home": "ctrl+shift+a",
"input_select_line_end": "ctrl+shift+e",
"input_visual_line_home": "alt+a",
"input_visual_line_end": "alt+e",
"input_select_visual_line_home": "alt+shift+a",
"input_select_visual_line_end": "alt+shift+e",
"input_buffer_home": "home",
"input_buffer_end": "end",
"input_select_buffer_home": "shift+home",
"input_select_buffer_end": "shift+end",
"input_delete_line": "ctrl+shift+d",
"input_delete_to_line_end": "ctrl+k",
"input_delete_to_line_start": "ctrl+u",
"input_backspace": "backspace,shift+backspace",
"input_delete": "ctrl+d,delete,shift+delete",
"input_undo": "ctrl+-,super+z",
"input_redo": "ctrl+.,super+shift+z",
"input_word_forward": "alt+f,alt+right,ctrl+right",
"input_word_backward": "alt+b,alt+left,ctrl+left",
"input_select_word_forward": "alt+shift+f,alt+shift+right",
"input_select_word_backward": "alt+shift+b,alt+shift+left",
"input_delete_word_forward": "alt+d,alt+delete,ctrl+delete",
"input_delete_word_backward": "ctrl+w,ctrl+backspace,alt+backspace",
"history_previous": "up",
"history_next": "down",
"terminal_suspend": "ctrl+z"
}
}
I got the same error with GLM-4.5-Air after upgrading to v1.0.223
Sometimes GLM outputs empty <think></think> tags; maybe that is what's causing the problem.
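If that hypothesis holds, one client-side workaround is to normalize the model output by dropping empty think blocks before it is parsed into reasoning parts. This is an illustrative sketch, not OpenCode's actual code path; `strip_empty_think` is a hypothetical helper name.

```python
import re

# Matches a <think> block that contains only whitespace (or nothing).
EMPTY_THINK = re.compile(r"<think>\s*</think>")

def strip_empty_think(text: str) -> str:
    """Remove empty <think></think> blocks; non-empty ones are kept intact."""
    return EMPTY_THINK.sub("", text)

print(strip_empty_think("<think></think>Hello"))      # -> Hello
print(strip_empty_think("<think>plan</think>Hi"))     # unchanged
```

A non-empty block like `<think>plan</think>` doesn't match the pattern, so genuine reasoning content would pass through untouched.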
I tried version 1.1.2 and the problem still exists. The change added in v1.0.218 (#6463) causes problems with GLM models.
Also having this error with GLM 4.5 Air on the latest OpenCode on Windows, running the model locally at FP8.
ahh
I am having the same issue with 1.1.8. I reverted #6463 locally and the issue really does seem to be gone; however, the thinking block is not differentiated from the response (the