
Devstral via LiteLLM OpenAI-compatible fails with invalid_request_message_order on first file edit

Open MaximilianHess opened this issue 3 weeks ago • 2 comments

Description

When OpenCode is configured to use Devstral (devstral-2512) through the OpenAI-compatible provider path (a LiteLLM OpenAI-compatible endpoint), the run fails as soon as the agent edits a file and tries to continue; the request OpenCode issues immediately after the edit is rejected.

Error message:

litellm.BadRequestError: MistralException - {"object":"error","message":"Expected last role User or Tool (or Assistant with prefix True) for serving but got assistant","type":"invalid_request_message_order","param":null,"code":"3230"}. Received Model Group=devstral-2512 Available Model Group Fallbacks=None

The same workflow does not fail when Devstral is used through the native Mistral provider path (mistral_v1).
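
For reference, the error maps to a Mistral API constraint: a chat completion request has to end with a user or tool message (or an assistant message with prefix set to true). Below is a rough sketch of the kind of message sequence involved; the exact payload OpenCode builds after the edit is an assumption here, only the constraint quoted in the error is known.

// Hedged illustration only; message contents and ordering are assumptions.
// Mistral rejects a request whose final message has role "assistant" (without prefix).
const rejected = [
  { role: "user", content: "Please edit foo.ts" },
  { role: "assistant", content: "", tool_calls: [{ id: "call_1", type: "function", function: { name: "edit", arguments: "{}" } }] },
  { role: "tool", tool_call_id: "call_1", content: "edit applied" },
  { role: "assistant", content: "Edit applied, continuing." }  // last role is assistant -> invalid_request_message_order (code 3230)
];

// Accepted shape: the request ends on a tool (or user) message, so the model can respond next.
const accepted = rejected.slice(0, 3);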

Our config:

{
  "$schema": "https://opencode.ai/config.json",

  "tools": {
    "bash": true,
    "edit": true,
    "write": true,
    "read": true,
    "grep": true,
    "glob": true,
    "list": true,
    "lsp": true,
    "patch": true,
    "skill": true,
    "todowrite": true,
    "todoread": true,
    "webfetch": true

  },

  "provider": {
    "ascii-mistral": {
      "npm": "@ai-sdk/mistral",
      "name": "ASCII LiteLLM (Mistral native)",
      "options": {
        "baseURL": "https://llm.ascii.ac.at/mistral/v1",
        "apiKey": "{env:ASCII_AGENTIC_CODING_KEY}"
      },
      "models": {
        "devstral-2512": {
          "name": "devstral-2512 (via LiteLLM)",
          "limit": { "context": 256000, "output": 256000 },
          "tool_call": true,
        },
        "codestral-2508": {
          "name": "codestral-2508 (via LiteLLM)",
          "limit": { "context": 128000, "output": 128000 },
          "tool_call": true,
        },
      }
    },
    "ascii-oai-compatible": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "ASCII LiteLLM",
      "options": {
        "baseURL": "https://llm.ascii.ac.at/v1",
        "apiKey": "{env:ASCII_AGENTIC_CODING_KEY}"
      },
      "models": {
        "devstral-2512": {
          "name": "devstral-2512 (via LiteLLM)",
          "limit": { "context": 256000, "output": 256000 },
          "tool_call": true,
        },
        "codestral-2508": {
          "name": "codestral-2508 (via LiteLLM)",
          "limit": { "context": 128000, "output": 128000 },
          "tool_call": true,
        },
      }
    }
  },

  "model": "ascii-oai-compatible/devstral-2512"
}


Is this likely an opencode or a litellm issue?

OpenCode version

1.0.207

Steps to reproduce

No response

Screenshot and/or share link

No response

Operating System

No response

Terminal

No response

MaximilianHess · Dec 29 '25 10:12

This issue might be related to existing LiteLLM and OpenAI-compatible provider issues. Please check:

  • #6244: Response terminates prematurely when using Gemini 3 via LiteLLM (similar LiteLLM provider issue)
  • #5674: Custom OpenAI-compatible provider options not being passed to API calls
  • #2915: LiteLLM error: Anthropic doesn't support tool calling without tools= param

These issues all involve LiteLLM proxies or OpenAI-compatible providers and may provide context for troubleshooting.

Feel free to ignore if your specific case with Devstral's message ordering is a distinct issue.

github-actions[bot] · Dec 29 '25 10:12

I had the same issue with vLLM; the error there is mistral_common.exceptions.InvalidMessageStructureException: Unexpected role 'user' after role 'tool'

The workaround from #2440 is to ensure your model id contains the string "mistral"; "devstral" should probably be added to that check too: https://github.com/anomalyco/opencode/blob/e5a868157e7ea89a632a43d6c15fd20128c71144/packages/opencode/src/provider/transform.ts#L58
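
The gate in transform.ts presumably keys off the model id; here is a hypothetical sketch of what the workaround relies on (names and logic are illustrative, the real implementation may differ):

// Hypothetical sketch, not the actual transform.ts code.
// If the check only matches "mistral", a model id like "devstral-2512" is skipped,
// so the Mistral-specific message handling never runs. Renaming the model so the id
// contains "mistral", or extending the check to cover "devstral", works around it.
const needsMistralMessageFix = (modelID: string): boolean => {
  const id = modelID.toLowerCase();
  return id.includes("mistral") || id.includes("devstral"); // "devstral" is the proposed addition
};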

maxious · Jan 06 '26 10:01

@MaximilianHess there is a LiteLLM config setting to ensure alternating roles for Mistral-based models:

litellm_settings:
  request_timeout: 300
  drop_params: true
  set_verbose: true
  modify_params: true                  # allow LiteLLM to rewrite requests before forwarding them
  default_litellm_params:
    ensure_alternating_roles: true     # insert filler turns so roles strictly alternate
    user_continue_message:
      role: user
      content: Please continue.
    assistant_continue_message:
      role: assistant
      content: Please continue.

eleqtrizit · Jan 18 '26 18:01

One more thing: I don't know how you're running vLLM, but only the nightly builds support tool calling from OpenCode, due to another bug with streaming tool calls, which is what OpenCode issues.

eleqtrizit · Jan 18 '26 18:01

Works for me!

geoHeil · Jan 19 '26 21:01