
.Net: Bug: Unable to get Reasoning Summary/Detail when using ChatClient with Reasoning model

Open MustafaJamal opened this issue 4 months ago • 3 comments

Describe the bug
Unable to get a Reasoning Summary/Detail when using ChatClient with a reasoning model.

To Reproduce

Here is the code:

ChatOptions chatOptions = new ChatOptions()
{
    //AllowMultipleToolCalls = useFunctionCall,
    ToolMode = useFunctionCall ? ChatToolMode.Auto : ChatToolMode.None,
    Temperature = string.IsNullOrEmpty(reasoningEffort) ? (float?)temperature : null,
    ResponseFormat = convertToJSON == true
        ? ChatResponseFormat.Json : ChatResponseFormat.Text,
    FrequencyPenalty = 0,
    PresencePenalty = 0,
    Tools = tools,
};

chatOptions.AdditionalProperties = new AdditionalPropertiesDictionary();

if (!string.IsNullOrWhiteSpace(verbosity))
{
    chatOptions.AdditionalProperties["text"] = new Dictionary<string, object?>()
    {
        { "verbosity", verbosity }
    };
}

if (!string.IsNullOrEmpty(reasoningEffort))
{
    chatOptions.AdditionalProperties["reasoning"] = new Dictionary<string, object?>()
    {
        { "effort", reasoningEffort },
        {"summary", "auto"}
    };
}

// ... later, consuming the streaming response:

 await foreach (ChatResponseUpdate c in chatSvc.GetStreamingResponseAsync(hist, chatOptions,
                    cancellationToken: cancellationToken))
 {
     bool hadReasoning = false;

     foreach (TextReasoningContent textContent in c.Contents.OfType<TextReasoningContent>()) // <<<<<----- THIS CONDITION IS NEVER MET
     {
         hadReasoning = true;
         string reasoning = $"##{textContent.Text}";
         full += reasoning;
         yield return reasoning;
     }

     if (hadReasoning)
     {
         continue;
     }

     if (!string.IsNullOrEmpty(c.Text))
     {
         full += c.Text;
         yield return c.Text;
     }

     foreach (UsageContent usageContent in c.Contents.OfType<UsageContent>())
     {
         _usageDetails = usageContent.Details;
     }

     foreach (AIContent aiContent in c.Contents)
     {
         if (aiContent is FunctionCallContent functionCallContent) 
         {
             var (pluginName, functionName) = CoPilot.Utility.ParseFunctionName(functionCallContent.Name);
             KernelArguments kargs = CoPilot.Utility.BuildKernelArguments(functionCallContent.Arguments);

             string invocationResult;
             try
             {
                 invocationResult = await CoPilot.Utility.TryInvokeKernelFunctionAsync(kernel, pluginName,
                     functionName, kargs, cancellationToken);
             }
             catch (OperationCanceledException)
             {
                 throw;
             }
             catch (Exception ex)
             {
                 _logger.LogError(ex, "Error invoking function {FunctionFullName}", functionCallContent.Name);
                 invocationResult = $"Error invoking function {functionCallContent.Name}: {ex.Message}";
             }

             try
             {
                 CoPilot.Utility.AppendToolResultToHistory(hist, invocationResult);
             }
             catch (Exception ex)
             {
                 _logger.LogError(ex, "Failed to append tool result to chat history for {FunctionName}",
                     functionCallContent.Name);
                 hist.Add(new ChatMessage(ChatRole.Tool, invocationResult));
             }
         }
     }
 }

I have tried the o3 and gpt-5 models, but none of them return a reasoning summary.
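
A minimal diagnostic sketch, assuming chatSvc is the same IChatClient and reusing the hist/chatOptions objects built above (this is a check, not a confirmed workaround): the non-streaming GetResponseAsync path can be inspected for TextReasoningContent the same way, which helps narrow down whether the gap is specific to streaming.

// Diagnostic sketch only (requires using System.Linq; and using Microsoft.Extensions.AI;):
// reuse the same history and options on the non-streaming path and look for any
// TextReasoningContent items in the returned messages.
ChatResponse response = await chatSvc.GetResponseAsync(hist, chatOptions, cancellationToken);

foreach (ChatMessage message in response.Messages)
{
    foreach (TextReasoningContent reasoning in message.Contents.OfType<TextReasoningContent>())
    {
        Console.WriteLine($"Reasoning: {reasoning.Text}");
    }
}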

Assembly versions:

  • Microsoft.SemanticKernel v1.63.0
  • Microsoft.SemanticKernel.Connectors.AzureOpenAI v1.63.0
  • Microsoft.SemanticKernel.Connectors.OpenAI v1.63.0

Environment: C# ASP.NET Core 8
OS: Windows 11

Let me know if further details are needed.

Thanks, Mustafa

MustafaJamal avatar Sep 02 '25 16:09 MustafaJamal

This is currently not supported in Microsoft.Extensions.AI for streaming, as the OpenAI SDK doesn't expose a public API to capture streaming thinking content.

Will keep this issue open for tracking until we have an update from the upstream dependencies.

  • https://github.com/dotnet/extensions/pull/6761
  • https://github.com/openai/openai-dotnet/issues/643
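
In the meantime, a minimal sketch for verifying what the connector currently forwards during streaming (reusing chatSvc, hist, and chatOptions from the report above; RawRepresentation exposes the underlying SDK object when the connector sets it):

// Diagnostic sketch only: log the content types and raw SDK payloads carried on each
// streaming update, to confirm whether any reasoning deltas reach the client today.
await foreach (ChatResponseUpdate update in chatSvc.GetStreamingResponseAsync(
    hist, chatOptions, cancellationToken: cancellationToken))
{
    foreach (AIContent content in update.Contents)
    {
        Console.WriteLine(
            $"{content.GetType().Name}: raw = {content.RawRepresentation?.GetType().FullName ?? "(none)"}");
    }
}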

rogerbarreto avatar Sep 03 '25 15:09 rogerbarreto

> This is currently not supported in Microsoft.Extensions.AI for streaming, as the OpenAI SDK doesn't expose a public API to capture streaming thinking content.
>
> Will keep this issue open for tracking until we have an update from the upstream dependencies.

OK, thanks for updating.

MustafaJamal avatar Sep 03 '25 15:09 MustafaJamal

@rogerbarreto the first upstream PR has been merged, but the second item (the openai-dotnet issue) is still open. Does this mean we still can't get reasoning content while streaming? Essentially, what we want is to show the user the thinking process of the o4-mini model (OpenAI). I assumed this would be easy, but I guess it's not?

roldengarm avatar Nov 10 '25 18:11 roldengarm