
OpenAI Chat Completions request's assistant-role message with tool calls is missing the content property

Open · qcloop opened this issue 1 year ago · 1 comment

What happened?

Here's an intercepted Chat Completions call:

```python
{
    'messages': [
        {'content': 'You are a helpful AI assistant. Solve tasks using your tools. Reply with TERMINATE when the task has been completed.', 'role': 'system'},
        {'content': 'What is the weather in New York?', 'role': 'user', 'name': 'user'},
        {'tool_calls': [{'id': 'call_iZKkdwu577dckxO14WkcFvJS', 'function': {'arguments': '{"city":"New York"}', 'name': 'get_weather'}, 'type': 'function'}], 'role': 'assistant', 'name': 'weather_agent'},
        {'content': 'The weather in New York is 73 degrees and Sunny.', 'role': 'tool', 'tool_call_id': 'call_iZKkdwu577dckxO14WkcFvJS'}
    ],
    'model': 'gpt-4o-2024-08-06',
    'stream': False,
    'tools': [{'type': 'function', 'function': {'name': 'get_weather', 'description': '', 'parameters': {'type': 'object', 'properties': {'city': {'description': 'city', 'title': 'City', 'type': 'string'}}, 'required': ['city']}}}]
}
```

The message with the assistant role should be the following:

{ "tool_calls": [ { "id": "call_iZKkdwu577dckxO14WkcFvJS", "function": { "arguments": "{'city':'New York'}", "name": "get_weather" }, "type": "function" } ], "role": "assistant", "content":null, "name": "weather_agent" }

not

{ "tool_calls": [ { "id": "call_iZKkdwu577dckxO14WkcFvJS", "function": { "arguments": "{'city':'New York'}", "name": "get_weather" }, "type": "function" } ], "role": "assistant", "name": "weather_agent" }

What did you expect to happen?

I expected the assistant-role message in the Chat Completions request to have its content property explicitly set to null:

{ "messages": [ { "content": "You are a helpful AI assistant. Solve tasks using your tools. Reply with TERMINATE when the task has been completed.", "role": "system" }, { "content": "What is the weather in New York?", "role": "user", "name": "user" }, { "tool_calls": [ { "id": "call_iZKkdwu577dckxO14WkcFvJS", "function": { "arguments": "{'city':'New York'}", "name": "get_weather" }, "type": "function" } ], "role": "assistant", "content":null, "name": "weather_agent" }, { "content": "The weather in New York is 73 degrees and Sunny.", "role": "tool", "tool_call_id": "call_iZKkdwu577dckxO14WkcFvJS" } ], "model": "gpt-4o-2024-08-06", "stream": false, "tools": [ { "type": "function", "function": { "name": "get_weather", "description": "", "parameters": { "type": "object", "properties": { "city": { "description": "city", "title": "City", "type": "string" } }, "required": [ "city" ] } } } ] }

How can we reproduce it (as minimally and precisely as possible)?

Enable debug logging, or add a request-interceptor logger to the embedded httpx client, and run the weather sample agent example (a logging-hook sketch follows the captured request below):

```python
{
    'method': 'post',
    'url': '/chat/completions',
    'files': None,
    'json_data': {
        'messages': [
            {'content': 'You are a helpful AI assistant. Solve tasks using your tools. Reply with TERMINATE when the task has been completed.', 'role': 'system'},
            {'content': 'What is the weather in New York?', 'role': 'user', 'name': 'user'}
        ],
        'model': 'gpt-4o-2024-08-06',
        'stream': False,
        'tools': [{'type': 'function', 'function': {'name': 'get_weather', 'description': '', 'parameters': {'type': 'object', 'properties': {'city': {'description': 'city', 'title': 'City', 'type': 'string'}}, 'required': ['city']}}}]
    }
}
```
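
One way to capture the outgoing request is an httpx request event hook. This is only a sketch of the logging setup, not the weather sample itself, and it assumes the model client ultimately uses the openai Python SDK's `AsyncOpenAI`, which accepts a custom `http_client`; whether AutoGen's model client forwards that parameter is not confirmed here:

```python
import httpx
from openai import AsyncOpenAI


async def log_request(request: httpx.Request) -> None:
    # Dump the raw JSON body of every outgoing request, including the messages list.
    print(request.method, request.url)
    print(request.content.decode("utf-8"))


# The openai SDK accepts a custom httpx client, so the hook sees every /chat/completions call.
http_client = httpx.AsyncClient(event_hooks={"request": [log_request]})
openai_client = AsyncOpenAI(http_client=http_client)
```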

AutoGen version

0.4.0.dev8

Which package was this bug in

Core

Model used

No response

Python version

3.12

Operating system

No response

Any additional info you think would be helpful for fixing this bug

No response

qcloop · Dec 03 '24 22:12

@qcloop could you provide a code snippet that reproduces this? Does this result in errors when using Azure or OpenAI endpoints?

Also, formatting the output in markdown code blocks with indentation would help us inspect the log.

ekzhu · Dec 04 '24 05:12

@ekzhu, I have encountered the same issue when trying to use AutoGen over an OpenAI-compatible endpoint provided by a LiteLLM-powered local proxy.

> could you provide a code snippet that reproduces this?

https://gist.github.com/aromanovich/c15eee04c2c3d1a08083cf789688992c#file-repro-py

https://gist.github.com/aromanovich/c15eee04c2c3d1a08083cf789688992c#file-request-received-json is the request made by AutoGen. The message with the assistant role is missing the content property.
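
To check whether this actually produces an error, the intercepted payload can be replayed directly against the proxy. A minimal sketch, assuming a LiteLLM proxy listening locally; the URL and API key are placeholders, not values from the gist:

```python
import httpx

payload = {
    "model": "gpt-4o-2024-08-06",
    "messages": [
        {"role": "user", "name": "user", "content": "What is the weather in New York?"},
        # Assistant tool-call message exactly as AutoGen sends it: no "content" key.
        {"role": "assistant", "name": "weather_agent",
         "tool_calls": [{"id": "call_iZKkdwu577dckxO14WkcFvJS", "type": "function",
                         "function": {"name": "get_weather", "arguments": '{"city":"New York"}'}}]},
        {"role": "tool", "tool_call_id": "call_iZKkdwu577dckxO14WkcFvJS",
         "content": "The weather in New York is 73 degrees and Sunny."},
    ],
}

resp = httpx.post(
    "http://localhost:4000/v1/chat/completions",  # placeholder proxy address
    headers={"Authorization": "Bearer sk-placeholder"},  # placeholder key
    json=payload,
    timeout=60,
)
# A strict OpenAI-compatible backend may reject the assistant message without "content".
print(resp.status_code, resp.text)
```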

aromanovich · May 02 '25 10:05