Exception messages are not logged when the message exceeds the max length

Open · ravibha opened this issue · 4 comments

Hi all, we are observing an issue with exception logging in Application Insights when the message length exceeds the 32 KB limit. My expectation is that messages longer than the limit should be truncated by the SDK and still logged to App Insights. However, that is not happening: no exception is logged in App Insights at all, although a trace is logged.

Is this a known issue?

For now, the workaround I am using is to create an ExceptionTelemetry object with a truncated message and call TrackException(ExceptionTelemetry exceptionTelemetry).

I did notice there is a Sanitize method on ExceptionTelemetry; I am not sure whether it is actually called.
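Roughly, the workaround looks like this (a minimal sketch; the 32 KB cut-off value is an assumption on my part, and it assumes an SDK version where ExceptionTelemetry.Message has a public setter):

    // using Microsoft.ApplicationInsights;
    // using Microsoft.ApplicationInsights.DataContracts;

    const int MaxMessageLength = 32768; // assumed limit; adjust to the actual cut-off

    try
    {
        // ... code that may throw an exception with a very long message ...
    }
    catch (Exception ex)
    {
        // Truncate the message ourselves so the telemetry item is not dropped.
        var message = ex.Message.Length > MaxMessageLength
            ? ex.Message.Substring(0, MaxMessageLength)
            : ex.Message;

        var exceptionTelemetry = new ExceptionTelemetry(ex) { Message = message };
        telemetryClient.TrackException(exceptionTelemetry);
    }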

ravibha (May 26 '21)

I also hit a similar issue: when there are AggregateExceptions with long inner exception messages, or with many inner exceptions, the exceptions are not logged in App Insights. Both cases likely have the same root cause.

Package versions:

<PackageReference Include="Microsoft.ApplicationInsights.AspNetCore" Version="2.17.0" />
<PackageReference Include="Microsoft.NET.Sdk.Functions" Version="3.0.11" />

Below is a simple repro with an Azure Function. In the catch block, the trace containing the AggregateException is logged, but I don't see any exceptions being logged. It would be great to have a fix or a workaround for this.

public static class Function1
{
    private static TelemetryClient telemetryClient;
    private static string ExceptionText = "<Some long text>";

    [FunctionName("Function1")]
    public static void Run([TimerTrigger("0 */1 * * * *")] TimerInfo myTimer, ILogger log)
    {
        log.LogInformation($"C# Timer trigger function executed at: {DateTime.Now}");
        var telemetryProperties = new Dictionary<string, string>();
        telemetryProperties.Add("KeyResultId", "2323");
        telemetryClient = new TelemetryClient(new TelemetryConfiguration { InstrumentationKey = "2c43572f-e11d-4d0b-a053-761a3387a225" });
        telemetryClient.TrackTrace("Trace from azure function");
        Action<int> job = i => throw new TimeoutException(ExceptionText);

        // we want many tasks to run in parallel
        var tasks = new Task[100];
        for (var i = 0; i < 100; i++)
        {
            int j = i;
            Task task = Task.Run(() => job(j));
            tasks[i] = task;
        }

        try
        {
            // wait for all the tasks to finish in a blocking manner
            Task.WaitAll(tasks);
        }
        catch (Exception ex)
        {
            telemetryClient.TrackTrace($"Exception: {ex}");
            telemetryClient.TrackException(ex, telemetryProperties);
        }
    }
}
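For the AggregateException case, the same truncation idea can be applied per inner exception. A minimal sketch of an alternative catch block (the 32 KB cut-off and the per-inner-exception tracking are assumptions on my part, not documented SDK behavior):

    catch (AggregateException aggEx)
    {
        const int MaxMessageLength = 32768; // assumed cut-off

        telemetryClient.TrackTrace($"Exception: {aggEx}");

        // Flatten nested AggregateExceptions and track each inner exception
        // separately with a truncated message so none of them get dropped.
        foreach (var inner in aggEx.Flatten().InnerExceptions)
        {
            var exceptionTelemetry = new ExceptionTelemetry(inner)
            {
                Message = inner.Message.Length > MaxMessageLength
                    ? inner.Message.Substring(0, MaxMessageLength)
                    : inner.Message
            };

            foreach (var kvp in telemetryProperties)
            {
                exceptionTelemetry.Properties[kvp.Key] = kvp.Value;
            }

            telemetryClient.TrackException(exceptionTelemetry);
        }
    }

This keeps each inner exception visible in the portal instead of relying on the SDK to serialize the entire AggregateException in one item.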

Thanks!

ravibha (Jun 23 '21)

Any update on this? Can it be prioritized? This seems critical for troubleshooting and should be handled by the SDK, without requiring manual code to work around it.

andystumpp (Oct 14 '21)

related: https://github.com/microsoft/ApplicationInsights-dotnet/issues/2482

cijothomas (Nov 30 '21)

This is an issue for us as well. The only way we can work around it is with reflection.

Daniel-Guenter (Dec 03 '21)

This issue is stale because it has been open 300 days with no activity. Remove stale label or this will be closed in 7 days. Commenting will instruct the bot to automatically remove the label.

github-actions[bot] (Sep 30 '22)