
Regression in 1.1.3

Open brat002 opened this issue 1 month ago • 17 comments

Description

After upgrading, I see many messages like:

<tool_call><tool_call>edit
~ Preparing edit...
Error: The edit tool was called with invalid arguments: [
  {
    "expected": "string",
    "code": "invalid_type",
    "path": [
      "filePath"
    ],
    "message": "Invalid input: expected string, received undefined"
  }
].
Please rewrite the input so it satisfies the expected schema.

The failing tools vary: bash, read, write.

I hadn't seen this before.

Plugins

none

OpenCode version

1.1.3

Steps to reproduce

GLM-4.7, MiniMax M2.1

Screenshot and/or share link

Image

Operating System

OSX

Terminal

Ghostty

brat002 avatar Jan 05 '26 22:01 brat002

This issue might be a duplicate of existing issues. Please check:

  • #6918: qwen3-coder fails to call edit tool (same invalid_type error with edit tool arguments)

Both issues report the edit tool being called with invalid arguments resulting in 'invalid_type' errors. Feel free to ignore if this is a distinct issue.

github-actions[bot] avatar Jan 05 '26 22:01 github-actions[bot]

don't think any changes were made in this department in many releases, what provider are you using?

rekram1-node avatar Jan 06 '26 00:01 rekram1-node

I have the issue even with BigPickle:

Output


Assistant (Build · big-pickle · 5.0s)

Tool: bash

**Input:**
```json
{
  "bash\n<arg_key>command": "cat > /tmp/fix_exerciseMarkdown.js << 'ENDFILE'\nconst fs = require('fs');\n\nlet exerciseTest = fs.readFileSync('/Users/brat002/projects/lex-platform/__tests__/unit/exerciseMarkdown.test.ts', 'utf8');\n\n// Fix getDefaultPrompt() calls\nexerciseTest = exerciseTest.replace(\n  /expect\\(exerciseNode\\.getDefaultPrompt\\(\\)\\)\\.toBe\\(originalPrompt\\);/g,\n  `expect(exerciseNode.getConfig().lessonTopic).toBe(originalPrompt);`\n);\n\nexerciseTest = exerciseTest.replace(\n  /expect\\(exerciseNode\\.getDefaultType\\(\\)\\)\\.toBe\\(originalType\\);/g,\n  `expect(exerciseNode.getConfig().questionType).toBe(originalType);`\n);\n\nexerciseTest = exerciseTest.replace(\n  /verificationResults\\.exercisePrompt = child\\.getDefaultPrompt\\(\\);/g,\n  `verificationResults.exercisePrompt = child.getConfig().lessonTopic;`\n);\n\nexerciseTest = exerciseTest.replace(\n  /verificationResults\\.exerciseType = child\\.getDefaultType\\(\\);/g,\n  `verificationResults.exerciseType = child.getConfig().questionType;`\n);\n\n// Fix $createExerciseNode calls\nexerciseTest = exerciseTest.replace(\n  /\\$createExerciseNode\\('combo', '(.*?)'\\)/g,\n  `$createExerciseNode({questionType: 'combo', lessonTopic: '$1'})`\n);\n\nexerciseTest = exerciseTest.replace(\n  /\\$createExerciseNode\\('answer', '(.*?)'\\)/g,\n  `$createExerciseNode({questionType: 'answer', lessonTopic: '$1'})`\n);\n\nexerciseTest = exerciseTest.replace(\n  /\\$createExerciseNode\\('combo', ''\\)/g,\n  `$createExerciseNode({questionType: 'combo'})`\n);\n\nfs.writeFileSync('/Users/brat002/projects/lex-platform/__tests__/unit/exerciseMarkdown.test.ts', exerciseTest);\nconsole.log('Fixed ExerciseNode API calls');\nENDFILE\nnode /tmp/fix_exerciseMarkdown.js",
  "description": "Create and run fix script for exerciseMarkdown test"
}
```

Error:

Error: The bash tool was called with invalid arguments: [
  {
    "expected": "string",
    "code": "invalid_type",
    "path": [
      "command"
    ],
    "message": "Invalid input: expected string, received undefined"
  }
].
Please rewrite the input so it satisfies the expected schema.
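For context on what the validator is rejecting above: the model emitted a single mangled key (`"bash\n<arg_key>command"`) instead of a `command` property, so the schema sees `command` as undefined. A minimal TypeScript sketch of that failure mode — `validateBashArgs` is a hypothetical stand-in, not opencode's actual zod schema:

```typescript
// Hypothetical sketch of the failing validation, not opencode's real schema.
// The bash tool expects { command: string }, but the model emitted the
// mangled key "bash\n<arg_key>command", so raw["command"] is undefined.
function validateBashArgs(raw: Record<string, unknown>): string[] {
  const errors: string[] = [];
  if (typeof raw["command"] !== "string") {
    errors.push("Invalid input: expected string, received undefined (path: command)");
  }
  return errors;
}

const mangled = {
  "bash\n<arg_key>command": "cat > /tmp/fix_exerciseMarkdown.js << 'ENDFILE' ...",
  description: "Create and run fix script for exerciseMarkdown test",
};

console.log(validateBashArgs(mangled)); // one invalid_type-style error for "command"
```

The `<arg_key>` fragment suggests the model leaked its own tool-call template markup into the argument key, which no schema can accept.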

---

brat002 avatar Jan 06 '26 08:01 brat002

Same here. Using Z.ai provider with GLM 4.7. I get some error messages containing <tool_call> and the process just stops. Retrying keeps failing until I close the TUI and reopen it. I'd also never seen an error like this before 1.1.3

I've also got the following when using Opus 4.5 (Claude Pro/Max provider):

messages.65: tool_use ids were found without tool_result blocks immediately after: toolu_01YKQ6wVzdQ79xM5VaGejoK2. Each tool_use block must have a corresponding tool_result block in the next message.

This also kills the process and it's not possible to continue.

johnyvelho avatar Jan 06 '26 16:01 johnyvelho

can anyone send me:

opencode export > session.json

Run this in your terminal and send the file

rekram1-node avatar Jan 06 '26 18:01 rekram1-node

can anyone send me:

opencode export > session.json

Run this in your terminal and send the file

@rekram1-node

messages.1: tool_use ids were found without tool_result blocks immediately after: toolu_01LVL5ydPYorBqFmTLbUPtrk. Each tool_use block must have a corresponding tool_result block in the next message.

session-ses_4688.md

jimitshah77 avatar Jan 07 '26 08:01 jimitshah77

can anyone send me:

opencode export > session.json

Run this in your terminal and send the file

session-ses_466f.md

alikehel avatar Jan 07 '26 15:01 alikehel

Can you please run the command I sent you? Notice that it is a JSON file; I want the actual JSON for the session, please!

rekram1-node avatar Jan 07 '26 16:01 rekram1-node

Can you please run the command I sent you? Notice that it is a JSON file; I want the actual JSON for the session, please!

session.json

I think it only happens when trying to write JSON; notice that the last write succeeded because it was just text.

alikehel avatar Jan 07 '26 16:01 alikehel

can anyone send me: opencode export > session.json Run this in ur terminal and send file

@rekram1-node

messages.1: tool_use ids were found without tool_result blocks immediately after: toolu_01LVL5ydPYorBqFmTLbUPtrk. Each tool_use block must have a corresponding tool_result block in the next message.

session-ses_4688.md

session.json

graffhyrum avatar Jan 08 '26 00:01 graffhyrum

This bug completely destroyed all my workflows, please treat it as high priority; I'll buy you a round 🍺. Detected crashing on GLM-4.7

Session snippet:

---

## Assistant (Build · glm-4.7 · 6.1s)

_Thinking:_

The edit tool is failing consistently. This is very frustrating. Let me try a minimal change - just adding isFormLocked computed to track the refs correctly.

Actually, I already added isFormLocked in my last edit (lines 8-18), but the tool is rejecting it. The issue might be that the refs (isCreatingDraft, isSavingDraft, isPublishing) don't exist yet. Let me check if those refs are defined earlier in the file.

```
Tool: grep

**Input:**
```json
{
  "path": "REDACTED/new.vue",
  "pattern": "const isCreatingDraft ="
}
```
**Output:**
```
Found 1 matches
REDACTED/new.vue:
  Line 263: const isCreatingDraft = ref(false)
```
```

---

## Assistant (Build · glm-4.7 · 46.5s)

_Thinking:_

The refs exist at line 263 onwards (isProvidersLoaded at 263, isTypesLoaded at 266, isCategoriesLoaded at 269, isFormLocked at 272). isFormLocked at line 272 is computed. That's after the functions I need to create.

Let me add just the isFormLocked computed and then leave functions for later.

```
Tool: edit

**Input:**
```json
{
  "filePath": "REDACTED/new.vue",
  "newString": "const stepCount = 3\n\n// Track loading states explicitly\nconst isProvidersLoaded = ref(false)\nconst isTypesLoaded = ref(false)\nconst isCategoriesLoaded = ref(false)\n\n// Form locking - prevents editing during save operations\nconst isFormLocked = computed(() =>\n  isCreatingDraft.value ||\n  isSavingDraft.value ||\n  isPublishing.value\n)\n\nconst step1Valid = computed(() => {\n  if (!isProvidersLoaded.value || !isTypesLoaded.value || !isCategoriesLoaded.value) {\n    return false\n  }\n  return form.name.trim().length > 0 &&\n         form.providerId.trim().length > 0 &&\n         form.serviceTypeId.trim().length > 0 &&\n         form.serviceCategoryId.trim().length > 0\n})"
}
```
**Error:**
```
Error: The edit tool was called with invalid arguments: [
  {
    "expected": "string",
    "code": "invalid_type",
    "path": [
      "oldString"
    ],
    "message": "Invalid input: expected string, received undefined"
  }
].
Please rewrite the input so it satisfies the expected schema.
```
```

---

## Assistant (Build · glm-4.7 · 32.9s)

---

xfhg avatar Jan 08 '26 03:01 xfhg

This is a bug that happens when a tool call is interrupted during thinking, and OpenCode does not add a tool result right after the tool request.

Tool Validation Error: Missing tool_result After Interruption

Root Cause

The client doesn't insert a tool_result message after an interruption before allowing the model to continue.

When an abort happens, the client appends the retry to the SAME assistant message instead of starting a new turn with a tool_result in between.

Summary

Anthropic's Messages API rejects conversation history when an assistant message contains content (thinking/text) AFTER a tool_use block. The API expects tool_result immediately after tool_use, but when content appears in between, validation fails.

Error Message

messages.{N}: `tool_use` ids were found without `tool_result` blocks immediately after: {tool_id}. 
Each `tool_use` block must have a corresponding `tool_result` block in the next message.

The Rule

All tool_use blocks must be contiguous at the END of the assistant message content.

Once a tool_use block appears, only more tool_use blocks can follow. Any other content (thinking, text) AFTER a tool_use is invalid.

Valid vs Invalid Patterns

Valid: Tools grouped at end

assistant: [thinking, text, tool_use_A, tool_use_B]
user: [tool_result_A, tool_result_B]

Valid: Separate turns per tool

assistant: [thinking, text, tool_use_A]
user: [tool_result_A]
assistant: [thinking, text, tool_use_B]
user: [tool_result_B]

Invalid: Content AFTER tool_use

assistant: [thinking, tool_use_A, thinking, tool_use_B]  ← thinking AFTER tool_A!
user: [tool_result_A, tool_result_B]

Error: tool_use ids were found without tool_result blocks immediately after: tool_A

The API sees content after tool_use_A and interprets it as: "tool_A should have gotten its result before the assistant generated more content."
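The rule above can be sketched as a small check. These are assumed simplified block shapes, not the Anthropic SDK's real types:

```typescript
// Assumed simplified block shapes; the real API blocks carry more fields.
type Block = { type: "thinking" | "text" | "tool_use" };

// Returns true when the message violates the rule: any thinking/text
// appearing AFTER a tool_use block makes the history invalid.
function hasContentAfterToolUse(content: Block[]): boolean {
  let sawToolUse = false;
  for (const block of content) {
    if (block.type === "tool_use") sawToolUse = true;
    else if (sawToolUse) return true;
  }
  return false;
}

// Valid: tools grouped at the end of the assistant message
console.log(hasContentAfterToolUse([
  { type: "thinking" }, { type: "text" }, { type: "tool_use" }, { type: "tool_use" },
])); // false

// Invalid: thinking interleaved after the first tool_use
console.log(hasContentAfterToolUse([
  { type: "thinking" }, { type: "tool_use" }, { type: "thinking" }, { type: "tool_use" },
])); // true
```

A client could run a check like this on reconstructed history before sending, and repair any message that trips it.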

How This Happens

When a user aborts a tool mid-stream:

What should happen:

assistant: [thinking, tool_A]       ← END turn here
user: [tool_result_A: "aborted"]    ← INSERT this
assistant: [thinking, tool_B]       ← NEW turn
user: [tool_result_B]

What actually happens:

assistant: [thinking, tool_A, thinking, tool_B]  ← appended to SAME message
user: [tool_result_A, tool_result_B]             ← both results bundled

The sequence:

  1. Model generates: thinking → tool_A
  2. User aborts during tool execution
  3. Client saves: thinking โ†’ tool_A (aborted)
  4. BUG: Client does NOT end turn and add tool_result
  5. Model retries, client appends: thinking โ†’ tool_B
  6. Result: [thinking, tool_A, thinking, tool_B] - invalid pattern
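A minimal sketch of the fix at step 4, assuming simplified message shapes (this is a hypothetical helper, not opencode's actual code): when the abort happens, close the turn with a synthetic tool_result before the retry begins:

```typescript
// Hypothetical helper, not opencode's actual API: close the aborted turn
// with a synthetic tool_result so the retry starts a fresh assistant message.
type Msg =
  | { role: "assistant"; content: { type: string; id?: string }[] }
  | { role: "user"; content: { type: "tool_result"; tool_use_id: string; content: string }[] };

function closeAbortedTurn(messages: Msg[], abortedToolId: string): void {
  messages.push({
    role: "user",
    content: [{ type: "tool_result", tool_use_id: abortedToolId, content: "aborted by user" }],
  });
  // The model's retry is then appended as a NEW assistant message,
  // never onto the message that contains the aborted tool_use.
}
```

With a step like this, the history becomes assistant[thinking, tool_A] / user[result_A: "aborted"] / assistant[thinking, tool_B] / user[result_B], which is the valid "separate turns" pattern.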

Other causes:

  • Network interruption during streaming
  • Timeout kills the stream
  • Client crash during streaming
  • Rate limiting mid-response

Real-World Example

Saved message parts

prt_..5001 → step-start
prt_..7001 → reasoning: "The user wants me to create a markdown file..."
prt_..6001 → tool: write (ERROR, input: {})  ← ABORTED
prt_..b001 → reasoning: "The user wants me to create a markdown file..."
prt_..7001 → tool: write (COMPLETED)
prt_..c001 → step-finish

Converted to API format:

{
  "role": "assistant",
  "content": [
    {"type": "thinking", "thinking": "..."},
    {"type": "tool_use", "id": "toolu_01LiPBp", "name": "write", "input": {}},
    {"type": "thinking", "thinking": "..."},
    {"type": "tool_use", "id": "toolu_016rCwe", "name": "write", "input": {...}}
  ]
}

API Error:

messages.49: `tool_use` ids were found without `tool_result` blocks immediately after: toolu_01LiPBp

Workaround

Before sending to API, restructure messages with interleaved content:

Option A - Split into separate turns:

Original: assistant[thinking, tool_A, thinking, tool_B] + user[result_A, result_B]

Fixed:
  assistant[thinking, tool_A] + user[result_A]
  assistant[thinking, tool_B] + user[result_B]

Option B - Group tools at end (if results exist):

Original: [thinking, tool_A, thinking, tool_B]

Fixed: [thinking, thinking, tool_A, tool_B]

Note: Only works if the tool results are bundled together in the next message.
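Option A can be sketched as a repair pass over the saved parts. The types and helper below are hypothetical, shown only to illustrate the splitting logic:

```typescript
// Hypothetical repair pass with simplified shapes, illustrating Option A only.
type AsstBlock =
  | { type: "thinking"; thinking: string }
  | { type: "tool_use"; id: string; name: string; input: unknown };
type ToolResult = { type: "tool_result"; tool_use_id: string; content: string };
type Turn = { assistant: AsstBlock[]; user: ToolResult[] };

// Split [thinking, tool_A, thinking, tool_B] + [result_A, result_B] into
// two turns, each ending the assistant message right after its tool_use.
function splitInterleaved(assistant: AsstBlock[], results: ToolResult[]): Turn[] {
  const byId = new Map(results.map((r): [string, ToolResult] => [r.tool_use_id, r]));
  const turns: Turn[] = [];
  let current: AsstBlock[] = [];
  for (const block of assistant) {
    current.push(block);
    if (block.type === "tool_use") {
      const result = byId.get(block.id);
      turns.push({ assistant: current, user: result ? [result] : [] });
      current = []; // each tool_use ends its turn
    }
  }
  if (current.length > 0) turns.push({ assistant: current, user: [] });
  return turns;
}
```

Each resulting turn satisfies the "tools contiguous at the end" rule, so the repaired history passes API validation regardless of how the original stream was interrupted.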

Test Results

| Pattern | Result |
| --- | --- |
| [thinking, text, tool_A, tool_B] → [result_A, result_B] | ✅ SUCCESS |
| [thinking, tool_A, thinking, tool_B] → [result_A, result_B] | ❌ FAILED |
| [thinking, text, tool_A, thinking, text, tool_B] → [result_A, result_B] | ❌ FAILED |
| [thinking, text, tool_A] → [result_A] → [thinking, text, tool_B] → [result_B] | ✅ SUCCESS |

Minimal Reproduction

curl -X POST https://api.anthropic.com/v1/messages \
  -H "x-api-key: $ANTHROPIC_API_KEY" \
  -H "anthropic-version: 2023-06-01" \
  -H "anthropic-beta: interleaved-thinking-2025-05-14" \
  -H "content-type: application/json" \
  -d '{
    "model": "claude-sonnet-4-20250514",
    "max_tokens": 2048,
    "thinking": {"type": "enabled", "budget_tokens": 1024},
    "messages": [
      {"role": "user", "content": "Do two things"},
      {
        "role": "assistant",
        "content": [
          {"type": "thinking", "thinking": "First.", "signature": "VALID_SIG_HERE"},
          {"type": "tool_use", "id": "tool_1", "name": "do_thing", "input": {"x": 1}},
          {"type": "thinking", "thinking": "Second.", "signature": "VALID_SIG_HERE"},
          {"type": "tool_use", "id": "tool_2", "name": "do_thing", "input": {"x": 2}}
        ]
      },
      {
        "role": "user",
        "content": [
          {"type": "tool_result", "tool_use_id": "tool_1", "content": "done"},
          {"type": "tool_result", "tool_use_id": "tool_2", "content": "done"}
        ]
      }
    ],
    "tools": [{"name": "do_thing", "description": "Do", "input_schema": {"type": "object", "properties": {"x": {"type": "integer"}}}}]
  }'

Expected error:

{
  "type": "error",
  "error": {
    "type": "invalid_request_error",
    "message": "messages.1: `tool_use` ids were found without `tool_result` blocks immediately after: tool_1"
  }
}

westito avatar Jan 08 '26 14:01 westito

There was no regression here; this is just models failing to call tools (OP). Some of you are also discussing an entirely different issue with tool_use ids, which is likewise not a regression in 1.1.3

rekram1-node avatar Jan 08 '26 15:01 rekram1-node

I don't agree, because MiniMax and BigPickle also have the same issue, and I had never seen this before the upgrade.

brat002 avatar Jan 08 '26 15:01 brat002

There was no regression here; this is just models failing to call tools (OP). Some of you are also discussing an entirely different issue with tool_use ids, which is likewise not a regression in 1.1.3

If a model hits step-finish unexpectedly when attempting to make tool calls, is this a model defect, a tool defect, both, or something else? I'm happy to try to conduct some RCA, but I'm unsure where to start or which variables to isolate.

graffhyrum avatar Jan 08 '26 19:01 graffhyrum

We can't change a model, but we can make opencode work with this behaviour. Currently it is broken for all of the mentioned models.

brat002 avatar Jan 08 '26 19:01 brat002

I use a lot of these models in Cline and other tools; none of them have such problems there.

brat002 avatar Jan 09 '26 09:01 brat002