.Net: Bug: Unable to get Reasoning Summary/Detail when using ChatClient with Reasoning model
Describe the bug Unable to get Reasoning Summary/Detail when using ChatClient with Reasoning model.
To Reproduce
Here is the code:
```csharp
ChatOptions chatOptions = new ChatOptions()
{
    //AllowMultipleToolCalls = useFunctionCall,
    ToolMode = useFunctionCall ? ChatToolMode.Auto : ChatToolMode.None,
    Temperature = string.IsNullOrEmpty(reasoningEffort) ? (float?)temperature : null,
    ResponseFormat = convertToJSON ? ChatResponseFormat.Json : ChatResponseFormat.Text,
    FrequencyPenalty = 0,
    PresencePenalty = 0,
    Tools = tools,
};

chatOptions.AdditionalProperties = new AdditionalPropertiesDictionary();
if (!string.IsNullOrWhiteSpace(verbosity))
{
    chatOptions.AdditionalProperties["text"] = new Dictionary<string, object?>()
    {
        { "verbosity", verbosity }
    };
}

if (!string.IsNullOrEmpty(reasoningEffort))
{
    chatOptions.AdditionalProperties["reasoning"] = new Dictionary<string, object?>()
    {
        { "effort", reasoningEffort },
        { "summary", "auto" }
    };
}
```
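As a side note (separate from the missing reasoning content): `AdditionalProperties` entries like `"reasoning"` are only forwarded if the connector maps them to the underlying request. With recent Microsoft.Extensions.AI versions, a more reliable route may be `ChatOptions.RawRepresentationFactory`, which hands the underlying SDK's typed options object to the client. A hedged sketch, assuming an OpenAI-backed `IChatClient` that consumes `OpenAI.Chat.ChatCompletionOptions` from the factory:

```csharp
using Microsoft.Extensions.AI;
using OpenAI.Chat;

ChatOptions typedOptions = new()
{
    // Typed equivalent of the { "reasoning": { "effort": ... } } dictionary above.
    // Whether the SK connector in use honors this factory is an assumption to verify.
    RawRepresentationFactory = _ => new ChatCompletionOptions
    {
        ReasoningEffortLevel = ChatReasoningEffortLevel.Medium,
    },
};
```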
///////////
```csharp
await foreach (ChatResponseUpdate c in chatSvc.GetStreamingResponseAsync(hist, chatOptions,
    cancellationToken: cancellationToken))
{
    bool hadReasoning = false;
    foreach (TextReasoningContent textContent in c.Contents.OfType<TextReasoningContent>()) // <<<<<----- THIS LOOP IS NEVER ENTERED
    {
        hadReasoning = true;
        string reasoning = $"##{textContent.Text}";
        full += reasoning;
        yield return reasoning;
    }

    if (hadReasoning)
    {
        continue;
    }

    if (!string.IsNullOrEmpty(c.Text))
    {
        full += c.Text;
        yield return c.Text;
    }

    foreach (UsageContent usageContent in c.Contents.OfType<UsageContent>())
    {
        _usageDetails = usageContent.Details;
    }

    foreach (AIContent aiContent in c.Contents)
    {
        if (aiContent is FunctionCallContent functionCallContent)
        {
            var (pluginName, functionName) = CoPilot.Utility.ParseFunctionName(functionCallContent.Name);
            KernelArguments kargs = CoPilot.Utility.BuildKernelArguments(functionCallContent.Arguments);
            string invocationResult;
            try
            {
                invocationResult = await CoPilot.Utility.TryInvokeKernelFunctionAsync(kernel, pluginName,
                    functionName, kargs, cancellationToken);
            }
            catch (OperationCanceledException)
            {
                throw;
            }
            catch (Exception ex)
            {
                _logger.LogError(ex, "Error invoking function {FunctionFullName}", functionCallContent.Name);
                invocationResult = $"Error invoking function {functionCallContent.Name}: {ex.Message}";
            }
            try
            {
                CoPilot.Utility.AppendToolResultToHistory(hist, invocationResult);
            }
            catch (Exception ex)
            {
                _logger.LogError(ex, "Failed to append tool result to chat history for {FunctionName}",
                    functionCallContent.Name);
                hist.Add(new ChatMessage(ChatRole.Tool, invocationResult));
            }
        }
    }
}
```
I have tried the o3 and gpt-5 models, but none of them return a reasoning summary.
Assembly versions:
- Microsoft.SemanticKernel v1.63.0
- Microsoft.SemanticKernel.Connectors.AzureOpenAI v1.63.0
- Microsoft.SemanticKernel.Connectors.OpenAI v1.63.0

Environment: C#, ASP.NET Core 8
OS: Windows 11
Let me know if further details are needed.
Thanks, Mustafa
This is currently not supported in Microsoft.Extensions.AI for streaming, as the OpenAI SDK doesn't expose a public API to capture streaming thinking content.
Will keep this issue open for tracking until we have an update from the upstream dependencies.
- https://github.com/dotnet/extensions/pull/6761
- https://github.com/openai/openai-dotnet/issues/643
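For anyone landing here: while streaming reasoning is blocked on the upstream dependencies, it may be worth checking whether the non-streaming path surfaces `TextReasoningContent` in your setup. A minimal probe sketch, assuming `chatClient` is the same `IChatClient` and `hist`/`chatOptions` are set up as in the report above (whether reasoning content is populated depends on the connector version):

```csharp
using Microsoft.Extensions.AI;

// Request a complete (non-streaming) response, then look for reasoning parts.
ChatResponse response = await chatClient.GetResponseAsync(hist, chatOptions, cancellationToken);

foreach (ChatMessage message in response.Messages)
{
    foreach (TextReasoningContent reasoning in message.Contents.OfType<TextReasoningContent>())
    {
        Console.WriteLine($"## {reasoning.Text}");
    }
}
```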
OK, thanks for updating.
@rogerbarreto the first upstream PR has been merged, but not the second. Does this mean we still can't have reasoning content while streaming? Essentially what we want is to show the user the thinking process of the o4-mini model (OpenAI). I assumed this would be easy, but I guess it's not?