structured_model fails to produce structured output from the agent's reply
I define the structured-output class as follows:

```python
from typing import Optional

from pydantic import BaseModel, Field

class AnswerModel(BaseModel):
    answer: Optional[str] = Field(None, description="The final answer.")
```
When running the agent, I pass `structured_model=AnswerModel` to request a structured result, but what I actually get is:
```
Msg(id='CuSNZ4CWq72Xa6cz7a2d2W', name='问答助手', content='Looking at the discography section of the Mercedes Sosa Wikipedia page, I need to count her studio albums released between 2000 and 2009.\n\nFrom the studio albums list, I can see:\n- 2005: Corazón Libre\n- 2009: Cantora 1\n- 2009: Cantora 2\n\nThere are 3 studio albums published by Mercedes Sosa between 2000 and 2009.\n\n3', role='assistant', metadata={'answer': None}, timestamp='2025-11-11 13:59:12.219', invocation_id='None')
```
In other words, the result retrieved via `res.metadata` is None.
The model is qwen3-coder-480B, used with ReActAgent and OpenAIChatModel, with temperature set to 0.
The question comes from the GAIA test set:
```json
{
  "task_id": "8e867cd7-cff9-4e6c-867a-ff5ddc2550be",
  "Question": "How many studio albums were published by Mercedes Sosa between 2000 and 2009 (included)? You can use the latest 2022 version of english wikipedia.",
  "Level": 1,
  "Final answer": "3",
  "file_name": "",
  "Annotator Metadata": {
    "Steps": "1. I did a search for Mercedes Sosa\n2. I went to the Wikipedia page for her\n3. I scrolled down to \"Studio albums\"\n4. I counted the ones between 2000 and 2009",
    "Number of steps": "4",
    "How long did this take?": "5 minutes",
    "Tools": "1. web browser\n2. google search",
    "Number of tools": "2"
  }
}
```
Addendum: with the default temperature (i.e. not set), there is some chance of getting the number 3 or the word "three" in metadata.
When switching the model to GLM-4.6-Think, the following error appears:
system:

```json
{
  "type": "tool_result",
  "id": "call_190e6baa5ba34fb8914194c0",
  "name": "generate_response",
  "output": [
    {
      "type": "text",
      "text": "Arguments Validation Error: 1 validation error for AnswerModel\nanswer\n  Input should be a valid string [type=string_type, input_value=3, input_type=int]\n    For further information visit https://errors.pydantic.dev/2.12/v/string_type"
    }
  ]
}
```
In this case, however, the value 3 can still be retrieved from metadata.
- First, for GLM-4.6: the problem is that the `answer` field in `AnswerModel` is declared as `str`, but GLM-4.6 supplied an `int` argument, hence the validation error. In principle, the LLM is expected to self-correct after seeing this error.
- For Qwen-omni-coder-480B: how is the model service set up? Are you using the official API, or local inference?
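The int-vs-str mismatch can be reproduced and worked around at the schema level. Below is a minimal sketch, assuming Pydantic v2; `LenientAnswerModel` and its validator are hypothetical names, not part of any framework. It uses a `mode="before"` validator to coerce a non-string answer to `str` before type validation runs:

```python
from typing import Optional

from pydantic import BaseModel, Field, ValidationError, field_validator

class AnswerModel(BaseModel):
    answer: Optional[str] = Field(None, description="The final answer.")

# Reproduces the GLM-4.6 failure: the model passed answer=3 (an int).
try:
    AnswerModel(answer=3)
except ValidationError as e:
    print(e)  # "Input should be a valid string [type=string_type, ...]"

class LenientAnswerModel(BaseModel):
    """Same schema, but coerces non-None answers to str before validation."""
    answer: Optional[str] = Field(None, description="The final answer.")

    @field_validator("answer", mode="before")
    @classmethod
    def _coerce_to_str(cls, v):
        # Accept ints (or anything str-convertible) from the LLM.
        return str(v) if v is not None else v

print(LenientAnswerModel(answer=3).answer)  # → 3
```

A looser schema like this avoids relying on the LLM to self-correct after a validation error, at the cost of accepting any str-convertible value.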
-
OK, after trying a few more times, I found that GLM-4.6 does correct this problem in subsequent steps.
-
Qwen is a privately deployed model; since I didn't build the service myself, I don't know the details of how it was set up. Have you tested against the official API and confirmed there is no issue there? I've hit this problem with several different privately deployed models, including gpt-oss and the qwen3 series. Is it caused by the LLM not calling the formatting tool? Is there any workaround?
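Until the root cause is identified, one pragmatic workaround is to fall back to the free-text reply whenever the structured field comes back empty. A minimal sketch (the helper name `extract_answer` is hypothetical, not part of any framework), mirroring the reply above where the final line of the text was the bare answer `3`:

```python
import re

def extract_answer(metadata, content_text):
    # Prefer the structured field when the model actually filled it in.
    if metadata and metadata.get("answer") is not None:
        return str(metadata["answer"])
    # Fallback: take the last whitespace-delimited token of the free text,
    # since the agent above ended its reply with the bare answer "3".
    match = re.search(r"(\S+)\s*$", content_text)
    return match.group(1) if match else None

print(extract_answer({"answer": None}, "...between 2000 and 2009.\n\n3"))  # → 3
```

This is only a heuristic for short factoid answers like GAIA's; it does not fix the underlying structured-output behavior.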
This issue is marked as stale because there has been no activity for 21 days. Remove the stale label or add new comments, or this issue will be closed in 3 days.
Close this stale issue.