
High-Performance Logging in .NET Core

In this post I will explore an approach to logging which you can apply in high-performance, high-load applications to reduce its overhead.

A Refresher on Logging in .NET Core

Logging is a built-in feature of ASP.NET Core and .NET Core Worker Services. The feature is provided as part of the Microsoft.Extensions.Logging library. The most common usage is to accept an ILogger<T> in the constructor for classes, where T is usually the type of that class.
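As a minimal illustration of that constructor injection pattern (the class name here is just a placeholder for illustration):

using Microsoft.Extensions.Logging;

public class SomethingConverter
{
    private readonly ILogger<SomethingConverter> _logger;

    // The ILogger<SomethingConverter> is provided by dependency injection.
    public SomethingConverter(ILogger<SomethingConverter> logger)
    {
        _logger = logger;
    }
}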

Logging is then achieved using the various extension methods on the ILogger. For example, to log a fixed string message as an information event, we can use:

_logger.LogInformation("Converted something");

Or, to log an informational message which includes one or more runtime values, we can use:

var aVariable = "Some string value";
_logger.LogInformation("Converted {Value}" , aVariable);

In this case, the params arguments are used to replace the placeholders in the message template when it is formatted.

In many situations, this approach to logging is perfectly reasonable. In cases where you are building applications which log extensively and need to be highly performant, there are some optimisations we can apply.

At work, we log from data processing microservices which respond to messages from a queue. We have some services which handle many millions of messages per day and in those cases, even limited logging may result in one or two log entries per queue message we process. Realistically, our hot paths will have many tens of potential logging calls. That may mean we call _logger.LogXYZ() many millions of times per day.

Although logging is pretty fast with regard to execution time, using the approaches shown above may incur some allocations. It’s possible to reduce that overhead by using a more performant pattern for logging.

High-Performance .NET Core Logging

Microsoft have already documented approaches to “High-performance logging with LoggerMessage in ASP.NET Core” which I apply for many of our high volume services.

In this post I will first describe how I implement that advice, and then show benchmarks comparing the standard approach with the more performant technique.

The code and the benchmarks used for this post can be found in my LoggingBenchmarks GitHub Repo.

I’ve included a few basic scenarios which measure logging messages taking zero, one or two parameters.

Starting with Standard Logging

I have a class which applies traditional logging to log from four methods.

In this class, a basic ILogger instance is accepted in the constructor. To more easily support my benchmarking, this accepts the base ILogger interface type; in a real application, as shown above, you’d typically accept ILogger<T>. The difference is not important here.

Each of my four methods calls an extension method on the ILogger. The first three call LogInformation and the last method calls LogDebug. We’ll use this to test how filtering of logging comes into play a little later on.

The only difference between the first three methods is that each logs using a different number of parameters.
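That class isn’t reproduced in this post, but it is shaped roughly like the following sketch (the class name, method names and message templates are placeholders rather than the exact benchmark code):

using Microsoft.Extensions.Logging;

public class StandardLoggingBenchmarkClass
{
    private readonly ILogger _logger;

    public StandardLoggingBenchmarkClass(ILogger logger) => _logger = logger;

    // Zero parameters
    public void LogNoParams() => _logger.LogInformation("Processing started");

    // One parameter
    public void LogOneParam(string itemName) =>
        _logger.LogInformation("Processing item {ItemName}", itemName);

    // Two parameters
    public void LogTwoParams(string itemName, int entryCount) =>
        _logger.LogInformation("Processing item {ItemName} containing {EntryCount} entries", itemName, entryCount);

    // Debug level, used later to test how filtering comes into play
    public void LogDebugTwoParams(string itemName, int entryCount) =>
        _logger.LogDebug("Processing item {ItemName} containing {EntryCount} entries", itemName, entryCount);
}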

Optimised Logging with LoggerMessage

Before we run the benchmarks, let me explain how I structure classes which will log using the optimised LoggerMessage approach. This is based on the patterns used in the internal ASP.NET Core code base.

I’ll present the entire class here and then we can discuss the key points.
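The original listing isn’t included in this text, so the following is a minimal sketch of the shape of such a class (the names and message templates are placeholders, not necessarily those used in the benchmark code):

using System;
using Microsoft.Extensions.Logging;

public class LoggerMessageBenchmarkClass
{
    private readonly ILogger _logger;

    public LoggerMessageBenchmarkClass(ILogger logger) => _logger = logger;

    // The main methods call the static helpers on the nested Log class.
    public void LogNoParams() => Log.ProcessingStarted(_logger);
    public void LogOneParam(string itemName) => Log.ProcessingItem(_logger, itemName);
    public void LogTwoParams(string itemName, int entryCount) => Log.ProcessingItemWithCount(_logger, itemName, entryCount);
    public void LogDebugTwoParams(string itemName, int entryCount) => Log.ProcessingItemDebug(_logger, itemName, entryCount);

    // Nested class defining the EventIds used when logging from this class.
    internal static class Events
    {
        public static readonly EventId Started = new EventId(100, "Started");
    }

    // Nested class defining the LoggerMessage delegates used by the main class.
    private static class Log
    {
        // Each message template is parsed once, here, rather than on every logging call.
        private static readonly Action<ILogger, Exception> _processingStarted =
            LoggerMessage.Define(LogLevel.Information, Events.Started,
                "Processing started");

        private static readonly Action<ILogger, string, Exception> _processingItem =
            LoggerMessage.Define<string>(LogLevel.Information, Events.Started,
                "Processing item {ItemName}");

        private static readonly Action<ILogger, string, int, Exception> _processingItemWithCount =
            LoggerMessage.Define<string, int>(LogLevel.Information, Events.Started,
                "Processing item {ItemName} containing {EntryCount} entries");

        private static readonly Action<ILogger, string, int, Exception> _processingItemDebug =
            LoggerMessage.Define<string, int>(LogLevel.Debug, Events.Started,
                "Processing item {ItemName} containing {EntryCount} entries");

        public static void ProcessingStarted(ILogger logger) => _processingStarted(logger, null);

        public static void ProcessingItem(ILogger logger, string itemName) =>
            _processingItem(logger, itemName, null);

        public static void ProcessingItemWithCount(ILogger logger, string itemName, int entryCount) =>
            _processingItemWithCount(logger, itemName, entryCount, null);

        // Checks whether Debug logging is enabled before invoking the Action,
        // so filtered-out messages can be skipped entirely.
        public static void ProcessingItemDebug(ILogger logger, string itemName, int entryCount)
        {
            if (logger.IsEnabled(LogLevel.Debug))
            {
                _processingItemDebug(logger, itemName, entryCount, null);
            }
        }
    }
}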

A common pattern is to define the logging specific to a class using two nested classes.

First, we have an internal static class called “Events”. In this case it defines a single static EventId field. In an actual application, you may have many event IDs in use per class. EventIds allow you to apply an ID and an optional name to logged messages, which can be used in structured logging to filter for particular events.

Next, we have a private static class called “Log”. Within this class, we define the LoggerMessage delegates for any logging we plan to use from our main class.

Starting with the private static readonly fields, we use LoggerMessage.Define to create an Action delegate for logging a particular message. Define accepts up to six generic type arguments for situations where we need to accept parameters used when formatting the final message.

For each field, we provide the LogLevel that applies to the message, the EventId and the template (formatString) to use to produce the final message. We use the static “Started” EventId in all messages for this sample. In most cases you’ll define an EventId per message.

The key improvement here, compared to standard logging, is that the template for the log message only needs to be parsed once, when the message is defined. With standard logging, that parsing occurs on every call.

Below the private fields, we define public static methods which can be called from our main class. These accept an ILogger, plus any parameters which are needed for the message. These call the appropriate Action from the relevant field. I’ve used the expression body style here to fit these into one line.

The final method includes an extra check on the ILogger to see if it is enabled for a particular log level before calling the Action. In cases where the logger is not logging Debug level messages, we can skip the attempt to log entirely. We’ll see if this has any impact in the benchmarks shortly.

In the main class, the static methods on the Log class are called as required, passing the current ILogger for the class and any parameter values. This has the nice effect of reducing the logging noise, since we don’t see long log messages within the main code. We just see that a particular type of log message will be logged.

This same pattern can then be applied across many classes in your application.

Benchmark Results

I won’t show the code that executes the benchmarks here. Remember, if you want to explore and run these benchmarks yourself, the full code can be found in my LoggingBenchmarks GitHub Repo.

You may also want to check out my post “Introduction to Benchmarking C# Code with BenchmarkDotNet”, to learn more about benchmarking in general.

Logging with Zero Parameters

Let’s start by comparing a log message with no parameters for the two approaches. Here are the results.

We can see that in both cases, no allocations occur as we have no parameters to apply. In both cases it’s mere nanoseconds, but we can see that the optimised LoggerMessage approach has a 4.7x reduction in execution time.

Logging with One Parameter

We’ll now add one parameter into the mix.

In this case, we start to see a difference in allocations. With a single parameter, 32 bytes were allocated for the single logging call. We see no such allocation for the LoggerMessage approach. Though only a small allocation cost, this is a short-lived heap object which will need collecting in the future. For a single message this is negligible, but at scale these allocations mount up.

We also see that the standard logging approach takes over 3x longer than it did before while our optimised approach executes in about the same time. Overall, we see a 14x improvement between the two approaches.

Logging with Two Parameters

When we apply two parameters, the allocations are higher again for the standard logging approach, doubling to 64 bytes. The execution time is only slightly increased, which is possibly just noise between my benchmark runs.

Filtered Logging

The final benchmark I included was the case where we log a debug-level message. In the program, this log level has been filtered out, so the message would not actually be sent to the configured log sinks.

Here I’ve run three benchmarks. The first uses the standard approach to logging. This message uses two parameters, so the allocations and execution time are on par with the previous example.

The second benchmark uses the optimised flow as before and shows the same results again.

The third benchmark uses the method which did the pre-check on whether the debug level was enabled and skipped the call to the LoggerMessage Action if not enabled. This resulted in a minor 1.5ns reduction of execution time in my testing. Unlikely to be enough to matter in 99% of cases!

You may assume that some of the log messages are not actually logged, because they are filtered out from the log output due to the LogLevel configuration, and so can be excluded from consideration. However, it appears that even these incur a cost based on my testing. With the LoggerMessage flow, it’s possible to optimise around the log calls based on whether logging is enabled or not.

In this test, DebugLogging was filtered out for my application, but the calls to _logger.LogDebug("Blah {Param}", paramValue) still have an impact as they end up being evaluated before being filtered – which is kind of surprising. I need to dig into the source to understand why that is and if my benchmarks are unfairly testing this.
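If that cost matters, one option (a sketch, not something taken from the benchmark repo) is to guard the standard call with the same kind of IsEnabled check:

if (_logger.IsEnabled(LogLevel.Debug))
{
    _logger.LogDebug("Blah {Param}", paramValue);
}

This skips the logging call entirely when Debug is filtered out, avoiding its small per-call cost, at the expense of a little extra noise at each call site.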

Scaling Up the Results

In the general case, these small, short-lived heap allocations may not matter very much, but they soon add up. I maintain multiple services which each process at least 18 million messages per day.

For each message, we may call out to a logger with at least 10 logging calls in the processing path. In reality, this is a low estimate and it’s probably at least double that. Assume for this example that each logging call includes one parameter.

This means using the optimised approach saves between approximately 5.76GB and 11.52GB of heap allocations per day. The execution time difference is negligible (21 seconds spent logging per day vs. 1 second in the optimised approach). However, the reduction in the frequency of gen-0 garbage collections should help improve the general throughput of the service, as less CPU time will be spent on GC.
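For what it’s worth, the back-of-the-envelope arithmetic behind those figures looks like this (the 32 bytes per call comes from the one-parameter benchmark above; the call counts are the assumptions just described):

const long messagesPerDay = 18_000_000;
const long bytesPerCall = 32;

var lowEstimate = messagesPerDay * 10 * bytesPerCall;   // 5,760,000,000 bytes ≈ 5.76GB
var highEstimate = messagesPerDay * 20 * bytesPerCall;  // 11,520,000,000 bytes ≈ 11.52GB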

Summary

In general, most applications won’t need to use the LoggerMessage approach, as the gains really only apply at extremely high volumes. The execution time of even standard logging is so fast that it’s unlikely to be an issue, even at high scale.

However, if you have apps which process huge volumes of requests or data, you may be able to reduce your heap allocations by predefining your log messages.

I find the pattern also produces cleaner code, with logging concerns moved to the bottom of classes. In the main code paths, logging messages are less verbose and introduce less noise to the application logic. Even if the performance advantages are not important to you, you may find that cleanliness makes the pattern worthwhile.


Have you enjoyed this post and found it useful? If so, please consider supporting me.


Steve Gordon

Steve Gordon is a Pluralsight author, 6x Microsoft MVP, and a .NET engineer at Elastic where he maintains the .NET APM agent and related libraries. Steve is passionate about community and all things .NET related, having worked with ASP.NET for over 21 years. Steve enjoys sharing his knowledge through his blog, in videos and by presenting talks at user groups and conferences. Steve is excited to participate in the active .NET community and founded .NET South East, a .NET Meetup group based in Brighton. He enjoys contributing to and maintaining OSS projects. You can find Steve on most social media platforms as @stevejgordon
