
Opencode with GLM 4.5 String issue

Open Abdiazizahmedali opened this issue 5 months ago • 18 comments

opencode stops working in the middle of a session and gives this error: AI_InvalidResponseDataError: Expected 'id' to be a string.

Abdiazizahmedali avatar Aug 13 '25 00:08 Abdiazizahmedali

Same error here

legout avatar Aug 13 '25 10:08 legout

I receive the same error message when using the Chutes provider with the GLM 4.5 Air model. Using a Qwen3 model via Chutes does work, so it seems to be tied to the GLM models.

beyondpixeldesign avatar Aug 13 '25 13:08 beyondpixeldesign

Yes, exactly. I also receive that error via the Chutes API. It's the GLM models' response format, I guess. @thdxr, kindly assist to accommodate GLM models better.

Abdiazizahmedali avatar Aug 13 '25 14:08 Abdiazizahmedali

Seeing the same with OpenRouter and the standard GLM 4.5 model.

Yesseuki avatar Aug 14 '25 03:08 Yesseuki


From my experience, I see this issue if you try to jailbreak it by asking for something it has been told not to do.

theoriginalaiexplorer avatar Aug 15 '25 11:08 theoriginalaiexplorer

I can reproduce this error on opencode v0.5.5. What I found is that with Chutes AI as the provider and the GLM 4.5 FP8 model, the error appears every time the model creates a todo list by calling the todo tool. Occasionally it also appeared when the model called the read-file tool. I tried the same model with the same provider in Kilo Code to check whether it was a provider issue, but it seemed to work fine there.

cgycorey avatar Aug 17 '25 11:08 cgycorey

Facing the same for days now. I think it might be an issue on the model's side?

stickyburn avatar Aug 27 '25 14:08 stickyburn

I'll subscribe to this issue too: running unsloth's GLM-4.5-Air-UD-Q4_K_XL, served by llama.cpp, with opencode has been a challenge and I'm out of ideas. My opencode.json contains:

"models": {
        "GLM-4.5-Air-UD-Q4_K_XL": {
            "name": "GLM Air 4.5 Q4",
            "tool_call": true,
            "reasoning": true,            
            "options": { "num_ctx": 131072 }
        }
}

I've experimented with removing some of the options or mixing which options to use, but I haven't yet found a magic combination. The read tools seem to work, but any tool that writes files just fails with errors.
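
For anyone comparing notes, here's the fuller shape I believe opencode expects for a custom OpenAI-compatible provider, if I'm reading the docs right; just a minimal sketch, where the llama-server provider id and the baseURL are placeholders for my local llama.cpp server, not anything official:

{
    "$schema": "https://opencode.ai/config.json",
    "provider": {
        "llama-server": {
            "npm": "@ai-sdk/openai-compatible",
            "options": { "baseURL": "http://localhost:8080/v1" },
            "models": {
                "GLM-4.5-Air-UD-Q4_K_XL": { "name": "GLM Air 4.5 Q4" }
            }
        }
    }
}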

The same model served from the same server works 100% with Cline in VS Code, but I'm really looking for a good terminal-based coder that works with GLM 4.5, since it's the best model I can run on my local setup. I did try gpt-oss:20b via ollama and it worked with opencode; I assume 120b would also work. I do have qwen3-coder:30b-a3b, which also works nicely with Cline, it's just not as strong as GLM 4.5 Air.

Hopefully someone can point me to a working config for opencode, or recommend another capable terminal coder that works with GLM-4.5. I'm not particularly tied to opencode since I've never been able to get it to work correctly.

Thank you all, in advance, for your help.

aaronnewsome avatar Sep 02 '25 21:09 aaronnewsome

This issue still exists

uphg avatar Sep 11 '25 01:09 uphg

I was able to get opencode talking to my GLM-4.5-Air, served by llama.cpp, using this chat template:

https://huggingface.co/unsloth/GLM-4.5-Air-GGUF/discussions/1

I was even able to get it to call some tools and write some files, but it's about as flaky as they come. Still looking for guidance from anyone who has this setup working. I'm not tied to opencode, so I'd be willing to give any competent terminal coder a try. For now, I'll just be happy that Cline seems to work pretty well.

aaronnewsome avatar Sep 11 '25 03:09 aaronnewsome

This happens when the model hallucinates the tool name and the chat template (on the inference server, like at z.ai) fails to parse it.

See: https://github.com/sst/opencode/issues/2557#issuecomment-3304756831

I wonder if there can be a special case to handle this error message and treat it like a failed tool call (it was going to fail anyway), prompting the LLM to try again.
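
Something like a synthetic tool-result message fed back to the model might do it; a rough sketch of the shape (OpenAI-style, with a fabricated tool_call_id and error text I just made up):

{
    "role": "tool",
    "tool_call_id": "call_recovered_0",
    "content": "Error: the previous tool call was malformed (missing a string 'id'). Please re-issue the tool call."
}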

jdiaz5513 avatar Sep 17 '25 22:09 jdiaz5513

@aaronnewsome you probably want to watch this thread; I've run this branch of llama.cpp locally and tool calling is much better (flawless?) there.

Any inference providers using llama.cpp in production will also benefit from this patch; hard to say if that's going to affect z.ai as well. They may be using vLLM.

jdiaz5513 avatar Sep 19 '25 01:09 jdiaz5513

Thanks for the tip @jdiaz5513. I pulled the latest b6517, applied PR 15904, and built it. I'm running llama.cpp with the chat template from the PR, and opencode is working much, much better. It was working somewhat with the chat template from the comments on the GLM-4.5-Air Hugging Face page: tool calls would succeed, but then the chat would just stop instead of continuing to the next step. Super annoying to keep typing "proceed" or "continue" after each and every tool call; talk about a productivity killer. I'm not seeing the issue any longer with PR 15904. Hopefully the PR gets merged soon so it lands in main. Again, thank you for the tip.

aaronnewsome avatar Sep 19 '25 14:09 aaronnewsome

I just had this AI_InvalidResponseDataError: Expected 'id' to be a string error repeatedly from Big Pickle (GLM 4.6) when it tried to edit, and finally I realised I had it in Plan mode...

rversteegen avatar Nov 01 '25 10:11 rversteegen

Same here, except I'm using Bifrost v1.3.24 as a self-hosted LLM gateway with anthropic/claude-sonnet-4-5 model.

Before this, I had an issue where OpenCode treated an optional parameter as required in the response. It might be related, but it needs further investigation.

vinicius507 avatar Nov 12 '25 21:11 vinicius507

I have the error with GLM-4.7 and vLLM. Gemini CLI says this is a bug in the inference engine (in my case vLLM): according to the OpenAI API specification, the id in the first chunk of a tool call is mandatory. The AI SDK is very strict and enforces this: https://github.com/vercel/ai/blob/c041f749c58cd6669a0abadccc7530e8949bf532/packages/openai-compatible/src/chat/openai-compatible-chat-language-model.ts#L464
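
For illustration, a spec-compliant first chunk of a streamed tool call looks roughly like this (the id and tool name are invented for the example):

{
    "choices": [{
        "index": 0,
        "delta": {
            "tool_calls": [{
                "index": 0,
                "id": "call_abc123",
                "type": "function",
                "function": { "name": "todowrite", "arguments": "" }
            }]
        }
    }]
}

If the engine streams "id": null, or omits it in that first chunk, the check linked above throws exactly this error.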

Ununnilium avatar Jan 05 '26 17:01 Ununnilium

See my short writeup at https://github.com/BerriAI/litellm/issues/13124#issuecomment-3759810148; I'm using GLM 4.7 hosted via litellm and encountering a similar issue.

stromseng avatar Jan 16 '26 12:01 stromseng