HttpClientFactory in ASP.NET Core 2.1 (Part 5): Logging
Exploring the default request and response logging and how to replace the logging implementation

In the 2.1 release of IHttpClientFactory, the ASP.NET team included some built-in logging of the HTTP calls made via HttpClients created by the factory. This can be useful for the diagnosis of failures, as well as to understand the time taken to complete HTTP calls to external services. 

In this post, I want to explore what is available in the default logging, how we can control what gets logged, how the logging is implemented and finally, how we can replace the logging with our implementation.

There’s quite a bit of technical detail in this post. I hope it proves useful and interesting for those working with IHttpClientFactory. Click here if you want to jump to the section on customising the logging by replacing the default implementation.

What’s in the logs?

IHttpClientFactory includes two levels of logging. At information level, the time taken to process and send the request is included. This can be useful for monitoring slow responding external services for example. Here’s an example of the console output when information level logging is enabled:

info: System.Net.Http.HttpClient.MyClient.LogicalHandler[100]
  Start processing HTTP request GET https://api.github.com/repos/aspnet/docs/branches

info: System.Net.Http.HttpClient.MyClient.ClientHandler[100]
  Sending HTTP request GET https://api.github.com/repos/aspnet/docs/branches

info: System.Net.Http.HttpClient.MyClient.ClientHandler[101]
  Received HTTP response after 682.9818ms - OK

info: System.Net.Http.HttpClient.MyClient.LogicalHandler[101]
  End processing HTTP request after 693.1094ms - OK

If you require a deeper level of detail regarding the requests, this is available at trace level. With trace level logging enabled, details about the request and response headers will also be included in the log messages. Here’s an example from a request with trace logging enabled:

info: System.Net.Http.HttpClient.MyClient.LogicalHandler[100]
 Start processing HTTP request GET https://api.github.com/repos/aspnet/docs/branches

trce: System.Net.Http.HttpClient.MyClient.LogicalHandler[102]
  Request Headers:
  Accept: application/vnd.github.v3+json
  User-Agent: HttpClientFactory-Sample

info: System.Net.Http.HttpClient.MyClient.ClientHandler[100]
  Sending HTTP request GET https://api.github.com/repos/aspnet/docs/branches

trce: System.Net.Http.HttpClient.MyClient.ClientHandler[102]
  Request Headers:
  Accept: application/vnd.github.v3+json
  User-Agent: HttpClientFactory-Sample

info: System.Net.Http.HttpClient.MyClient.ClientHandler[101]
  Received HTTP response after 795.6954ms - OK

trce: System.Net.Http.HttpClient.MyClient.ClientHandler[103]
  Response Headers:
  Server: GitHub.com
  Date: Sun, 08 Jul 2018 09:44:09 GMT
  Status: 200 OK
  X-RateLimit-Limit: 60
  X-RateLimit-Remaining: 58
  X-RateLimit-Reset: 1531046594
  Cache-Control: public, max-age=60, s-maxage=60
  Vary: Accept
  ETag: "f0452653b55e5fef139a58372e3a7bf3"
  X-GitHub-Media-Type: github.v3; format=json
  Access-Control-Expose-Headers: ETag, Link, Retry-After, X-GitHub-OTP, X-RateLimit-Limit, X-RateLimit-Remaining, X-RateLimit-Reset, X-OAuth-Scopes, X-Accepted-OAuth-Scopes, X-Poll-Interval
  Access-Control-Allow-Origin: *
  Strict-Transport-Security: max-age=31536000; includeSubdomains; preload
  X-Frame-Options: deny
  X-Content-Type-Options: nosniff
  X-XSS-Protection: 1; mode=block
  Referrer-Policy: origin-when-cross-origin, strict-origin-when-cross-origin
  Content-Security-Policy: default-src 'none'
  X-Runtime-rack: 0.029792
  X-GitHub-Request-Id: DCD6:3C9D:688D222:D064A9D:5B41DCE9
  Content-Type: application/json; charset=utf-8
  Content-Length: 2642

info: System.Net.Http.HttpClient.MyClient.LogicalHandler[101]
  End processing HTTP request after 818.4525ms - OK

trce: System.Net.Http.HttpClient.MyClient.LogicalHandler[103]
  Response Headers:
  Server: GitHub.com
  Date: Sun, 08 Jul 2018 09:44:09 GMT
  Status: 200 OK
  X-RateLimit-Limit: 60
  X-RateLimit-Remaining: 58
  X-RateLimit-Reset: 1531046594
  Cache-Control: public, max-age=60, s-maxage=60
  Vary: Accept
  ETag: "f0452653b55e5fef139a58372e3a7bf3"
  X-GitHub-Media-Type: github.v3; format=json
  Access-Control-Expose-Headers: ETag, Link, Retry-After, X-GitHub-OTP, X-RateLimit-Limit, X-RateLimit-Remaining, X-RateLimit-Reset, X-OAuth-Scopes, X-Accepted-OAuth-Scopes, X-Poll-Interval
  Access-Control-Allow-Origin: *
  Strict-Transport-Security: max-age=31536000; includeSubdomains; preload
  X-Frame-Options: deny
  X-Content-Type-Options: nosniff
  X-XSS-Protection: 1; mode=block
  Referrer-Policy: origin-when-cross-origin, strict-origin-when-cross-origin
  Content-Security-Policy: default-src 'none'
  X-Runtime-rack: 0.029792
  X-GitHub-Request-Id: DCD6:3C9D:688D222:D064A9D:5B41DCE9
  Content-Type: application/json; charset=utf-8
  Content-Length: 2642

As you can see, and hopefully would expect, the details at trace level are quite verbose. This can be useful to gain a more complete understanding of the headers during development. It is not recommended that you enable this level in production since it will not only quickly fill your logs, but it may also expose sensitive data such as authorisation tokens.

Each log message includes an event ID so that you can quickly filter out the events you are interested in. There are two loggers in use by default. There is an outer “LogicalHandler” logger which wraps the entire handler pipeline. This will allow for the timing of the entire pipeline to be included in the logs. Additionally, as this will run first in the pipeline, it will log the request headers as they appear before the request passes through any other handlers. The other handlers in the pipeline may modify those headers. Using the trace level logging it’s possible to capture that information to your logs which can be useful if you need to diagnose failures.

The inner logger has the suffix “ClientHandler” in its category name. This will be the innermost handler and therefore the last custom handler to run before the request is sent over the network. As a result, this will be able to record the final request headers as they appear just before the request is sent over the wire.

For reference, the event IDs included by these loggers are as follows:

Outer “LogicalHandler”

100 RequestPipelineStart
101 RequestPipelineEnd
102 RequestPipelineRequestHeader
103 RequestPipelineResponseHeader

Inner “ClientHandler”

100 RequestStart
101 RequestEnd
102 RequestHeader
103 ResponseHeader

How does IHttpClientFactory logging work?

The logging in IHttpClientFactory is applied to the pipeline just before the HttpMessageHandlerBuilder.Build() method is called to return the final HttpMessageHandler pipeline.

This is achieved with the help of HttpMessageHandlerBuilder filters which are applied by the DefaultHttpClientFactory implementation. An interface named IHttpMessageHandlerBuilderFilter exists which can be implemented in order to define filters. By default, there is one implementation of this interface called LoggingHttpMessageHandlerBuilderFilter which is registered within DI. It’s possible to register more than one implementation against the interface. As long as the implementations are registered in DI, each one will be executed when building the pipeline.

The code in the LoggingHttpMessageHandlerBuilderFilter implementation of the Configure method is responsible for creating the two loggers and passing them to logging handlers, which themselves are implementations of the DelegatingHandler abstract base class. The code for the LoggingHttpMessageHandlerBuilderFilter.Configure method is as follows:
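
(The snippet below is a close approximation based on the 2.1-era source, simplified slightly.)

public Action<HttpMessageHandlerBuilder> Configure(Action<HttpMessageHandlerBuilder> next)
{
    if (next == null)
    {
        throw new ArgumentNullException(nameof(next));
    }

    return builder =>
    {
        // Run any other configuration first; we want to decorate the whole pipeline.
        next(builder);

        // Both loggers are named using the client name from the builder.
        // _loggerFactory is an ILoggerFactory injected into the filter's constructor.
        var outerLogger = _loggerFactory.CreateLogger($"System.Net.Http.HttpClient.{builder.Name}.LogicalHandler");
        var innerLogger = _loggerFactory.CreateLogger($"System.Net.Http.HttpClient.{builder.Name}.ClientHandler");

        // The scope handler goes first so that it surrounds everything else.
        builder.AdditionalHandlers.Insert(0, new LoggingScopeHttpMessageHandler(outerLogger));

        // This handler goes last so it logs the request just before it goes over the wire.
        builder.AdditionalHandlers.Add(new LoggingHttpMessageHandler(innerLogger));
    };
}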

When the delegate chain is called, the next(builder) call will execute the next delegate (an Action<HttpMessageHandlerBuilder>) to ensure the entire handler pipeline is configured.

Then, the two loggers are created, using the name from the builder. This will be the name given to the named client or the type of the typed client.

The outer handler is added at index 0 to the AdditionalHandlers list so that it surrounds all other handlers and is the first to execute. The inner handler is added to the end of the AdditionalHandlers list, so it will be the last to execute before the internal handlers responsible for making the HTTP request.

Each of these logging handlers is responsible for logging their messages before and after the SendAsync calls to the other handlers. Using this approach a timer can be started before the SendAsync call and used afterwards to record the total HTTP request time as well as the overall handler pipeline execution time. I won’t copy the code for those handlers here as they are quite long. Instead, if you are interested you can view them on GitHub.

The outer “LogicalHandler” source can be viewed here.

This class creates a logging scope as well as recording the log messages. Optionally, if Trace logging is enabled, it will iterate over the request and response headers, recording those to the logger also. The pattern used in this class is an example of the LoggerMessage approach to provide caching of the logging delegates for better performance. You can read more about this approach in the official documentation at docs.microsoft.com. It’s a little outside the scope of this post to go any deeper here.

The inner “ClientHandler” logger uses a very similar approach to record its log messages and the source for that can be viewed here.

Configuring the logging output

As with all logging which uses the Microsoft.Extensions.Logging library, you can control the log messages that are generated using configuration.

In the appsettings.json file, you can control and filter the logging which is recorded. The default production JSON file looks like this:
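
(A close approximation, trimmed to just the logging section.)

{
  "Logging": {
    "LogLevel": {
      "Default": "Warning"
    }
  }
}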

In this configuration, only warning messages and higher will be logged and as a result, no logging from the IHttpClientFactory handlers will be included. To enable logging we can add an extra log level configuration setting. If you read the “How does it work” section above you’ll recall that the loggers which are used to log the HTTP request log messages are defined with the category name beginning with “System.Net.Http”.

An option, therefore, is to enable the Information or Trace log levels for the System namespace:
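
(A sketch; swap Information for Trace if you want the header detail shown earlier.)

{
  "Logging": {
    "LogLevel": {
      "Default": "Warning",
      "System": "Information"
    }
  }
}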

However, this will also enable informational messages from all other components in the System root namespace. Therefore, it may be better to limit the configuration to “System.Net.Http.HttpClient” so that you only add in the messages from HTTP requests made through clients created via the HttpClientFactory:
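
(A sketch of that narrower filter.)

{
  "Logging": {
    "LogLevel": {
      "Default": "Warning",
      "System.Net.Http.HttpClient": "Information"
    }
  }
}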

We can take this filtering a step further and filter down to a specific named or typed client. For a typed client, the name will match the name of the registered type.

Let’s imagine we are only interested in logging the requests via a typed client named MyClient. Also, perhaps we only want the raw timing of the HTTP request itself. In this example we can enable logging just for the ClientHandler of our MyClient:
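
(A sketch, using the full category name for the inner handler of the MyClient client.)

{
  "Logging": {
    "LogLevel": {
      "Default": "Warning",
      "System.Net.Http.HttpClient.MyClient.ClientHandler": "Information"
    }
  }
}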

Customising the log messages

There may be cases where you want to add additional logging around HTTP requests through the clients managed by IHttpClientFactory. An option, in this case, is to introduce an extra handler into the pipeline. In part 3 of this series, I explored adding additional outgoing middleware handlers to your client configuration. Using that handler you can inspect the requests and responses, logging any data as necessary.

If you want to replace the default logging entirely to fully customise the message output, the recommended approach from the team is to replace the default implementation of the IHttpMessageHandlerBuilderFilter interface. In fact, this section of the post was inspired by an issue on the IHttpClientFactory GitHub repository. Let’s take a look at how we replace the logging so that we can record a correlation ID into the console messages. We want to replace the default implementation here since we don’t want the default log messages appearing in addition to our own.

First, we’ll need to create a new implementation of IHttpMessageHandlerBuilderFilter:
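
(A sketch of such a filter, modelled closely on the default implementation shown earlier; only the single outer handler is added.)

using System;
using Microsoft.Extensions.Http;
using Microsoft.Extensions.Logging;

public class CustomLoggingFilter : IHttpMessageHandlerBuilderFilter
{
    private readonly ILoggerFactory _loggerFactory;

    public CustomLoggingFilter(ILoggerFactory loggerFactory)
    {
        _loggerFactory = loggerFactory ?? throw new ArgumentNullException(nameof(loggerFactory));
    }

    public Action<HttpMessageHandlerBuilder> Configure(Action<HttpMessageHandlerBuilder> next)
    {
        if (next == null) throw new ArgumentNullException(nameof(next));

        return builder =>
        {
            // Run any other configuration first so our handler wraps the whole pipeline.
            next(builder);

            // A single outer logger, named to follow the built-in category convention.
            var outerLogger = _loggerFactory.CreateLogger($"System.Net.Http.HttpClient.{builder.Name}.LogicalHandler");

            // Insert at index 0 so the handler surrounds all other handlers in the chain.
            builder.AdditionalHandlers.Insert(0, new CustomLoggingScopeHttpMessageHandler(outerLogger));
        };
    }
}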

I’ve pretty much copied the default implementation for this filter. The main difference is that for simplicity I’m only using one outer logger for this example. We create a logger and then add in a new CustomLoggingScopeHttpMessageHandler to the handler pipeline.

The CustomLoggingScopeHttpMessageHandler class is as follows:
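
(A sketch of the handler; the X-Correlation-ID header name and the exact message templates are assumptions, chosen to match the console output shown later.)

using System;
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Logging;

public class CustomLoggingScopeHttpMessageHandler : DelegatingHandler
{
    private readonly ILogger _logger;

    public CustomLoggingScopeHttpMessageHandler(ILogger logger)
    {
        _logger = logger ?? throw new ArgumentNullException(nameof(logger));
    }

    protected override async Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        // The whole operation is wrapped in a logging scope carrying the correlation ID.
        using (Log.BeginRequestPipelineScope(_logger, request))
        {
            Log.RequestPipelineStart(_logger, request);
            var response = await base.SendAsync(request, cancellationToken);
            Log.RequestPipelineEnd(_logger, response);

            return response;
        }
    }

    private static class Log
    {
        private static readonly Func<ILogger, HttpMethod, Uri, string, IDisposable> _beginRequestPipelineScope =
            LoggerMessage.DefineScope<HttpMethod, Uri, string>("HTTP {HttpMethod} {Uri} [{CorrelationId}]");

        private static readonly Action<ILogger, HttpMethod, Uri, string, Exception> _requestPipelineStart =
            LoggerMessage.Define<HttpMethod, Uri, string>(LogLevel.Information, new EventId(100, "RequestPipelineStart"),
                "Start processing HTTP request {HttpMethod} {Uri} [Correlation: {CorrelationId}]");

        private static readonly Action<ILogger, HttpStatusCode, Exception> _requestPipelineEnd =
            LoggerMessage.Define<HttpStatusCode>(LogLevel.Information, new EventId(101, "RequestPipelineEnd"),
                "End processing HTTP request - {StatusCode}");

        public static IDisposable BeginRequestPipelineScope(ILogger logger, HttpRequestMessage request)
        {
            var correlationId = GetCorrelationIdFromRequest(request);
            return _beginRequestPipelineScope(logger, request.Method, request.RequestUri, correlationId);
        }

        public static void RequestPipelineStart(ILogger logger, HttpRequestMessage request)
        {
            var correlationId = GetCorrelationIdFromRequest(request);
            _requestPipelineStart(logger, request.Method, request.RequestUri, correlationId, null);
        }

        public static void RequestPipelineEnd(ILogger logger, HttpResponseMessage response)
        {
            _requestPipelineEnd(logger, response.StatusCode, null);
        }

        // Parse the request for the expected correlation ID header (header name assumed here).
        private static string GetCorrelationIdFromRequest(HttpRequestMessage request)
        {
            if (request.Headers.TryGetValues("X-Correlation-ID", out var values))
            {
                return values.First();
            }

            return "Not set";
        }
    }
}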

There’s quite a lot to this class, but most of that is the static Log class and its methods. I won’t go into those too deeply here since they follow the LoggerMessage advice for more performant logging that you can read about in the docs. For the most part, I took the existing LoggingScopeHttpMessageHandler and tweaked it for my needs.

The first point to focus on here is that the whole operation is wrapped in a logging scope. Before and after the SendAsync method is called on the base handler, we use the static Log methods to record the log events.

Within the Log class, a few private delegates are defined to format the expected log messages.

Both _beginRequestPipelineScope and _requestPipelineStart accept a string which will be the correlation ID. They use the value to record the correlation ID into the scope properties as well as on the request started message.

A simple helper method has been added which parses a HTTP request for the expected correlation ID header and if present, returns it. The BeginRequestPipelineScope and RequestPipelineStart both use this method to extract the correlation ID.

The final step now that we have our filter implementation is to register it in DI, replacing the existing default filter applied by the HttpClientFactory library.

Inside the Startup class ConfigureServices method, we can call the Replace extension on the ServiceCollection to swap out the default implementation with our own:
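
(A sketch; the client configuration is illustrative, and the Replace extension needs a using for Microsoft.Extensions.DependencyInjection.Extensions.)

public void ConfigureServices(IServiceCollection services)
{
    services.AddHttpClient("MyClient", client =>
    {
        client.BaseAddress = new Uri("https://api.github.com/");
        client.DefaultRequestHeaders.Add("Accept", "application/vnd.github.v3+json");
        client.DefaultRequestHeaders.Add("User-Agent", "HttpClientFactory-Sample");
    });

    // Swap the default logging filter registration for our custom implementation.
    services.Replace(ServiceDescriptor.Singleton<IHttpMessageHandlerBuilderFilter, CustomLoggingFilter>());
}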

The replace method will find the first registered service of IHttpMessageHandlerBuilderFilter and replace that registration with this new one, where our CustomLoggingFilter is the implementation.

Now, when we run the application, the console logs include our correlation ID:

info: System.Net.Http.HttpClient.MyClient.LogicalHandler[100]
      Start processing HTTP request GET https://api.github.com/repos/aspnet/docs/branches [Correlation: 447c8d6b-e280-4538-bd31-56d508266b5b]

info: System.Net.Http.HttpClient.MyClient.LogicalHandler[101]
      End processing HTTP request - OK

As a side note, this filter approach is a great way to add common cross-cutting concerns for your whole application. It’s possible to register additional filters, each of which could add their own common handlers onto all clients created via HttpClientFactory.

Summary

In this post, we looked at the type of information available to us from the built-in logging, included as part of the HttpClientFactory library. We looked at how we can use log configuration to control which log messages we see and also looked at how logging has been implemented within the library. Finally, we explored using the IHttpMessageHandlerBuilderFilter interface to replace the default logging filter. I hope this has been helpful. I’ll be keeping an eye on the progress for the 2.2 release where we may see more logging, including some Polly integration making its way into the product.

Other Posts in this Series

Part 1 – An introduction to HttpClientFactory
Part 2 – Defining Named and Typed Clients
Part 3 – Outgoing request middleware with handlers
Part 4 – Integrating with Polly for transient fault handling
Part 5 – This post

HttpClientFactory in ASP.NET Core 2.1 (Part 4)
Integrating with Polly for transient fault handling

In the previous post in this series, I introduced the concept of outgoing middleware using DelegatingHandlers registered with named and typed clients. While that approach is available, the ASP.NET team hope that for most scenarios, we won’t need to resort to manually building our own handlers. In some cases, the built-in features of the library may provide the functionality we need. For example, it is sometimes useful to wrap requests within timing code to track how long they take to execute. This is now built into IHttpClientFactory as part of its default logging. In other cases, third-party integration may provide the functionality you require. For example, a common cross-cutting concern is handling transient faults during HTTP requests. In this case, rather than crafting our own retry logic, it’s much better to use a library such as Polly.

Polly is a popular transient fault handling library which provides a mechanism to define policies which can be applied when certain failures occur. One of the more commonly used policies is the retry policy. This allows you to wrap some code which, should a failure occur, will be retried; multiple times if necessary. This is very useful in situations where your application needs to communicate with external services. There is the ever-present risk when communicating with services over a transport such as HTTP that a transient fault will occur. A transient fault may prevent your request from being completed but is also likely to be a temporary problem. This makes retrying a sensible option in those cases.

As well as retries, Polly offers a number of other types of policy, many of which you may want to combine with retry to build up sophisticated ways to deal with failures. I will cover a few of the more general examples in this post, but if you want more comprehensive coverage I recommend you check out the Polly wiki.

The ASP.NET team have worked closely with Dylan and Joel, the primary maintainers of Polly, to include an integration pattern to make applying Polly policies to HttpClient instances really straightforward.

Before we can work with the Polly integrations we need to add a package reference to our project. The general IHttpClientFactory functionality lives inside the Microsoft.Extensions.Http package which is included as a dependency in the Microsoft.AspNetCore.App 2.1 meta package. This is a new meta package in ASP.NET Core 2.1 which doesn’t include third-party dependencies. Therefore, in order to use the Polly extensions for IHttpClientFactory we need to add the Microsoft.Extensions.Http.Polly package to our project.

After doing so in a basic project the csproj file will look something like this:
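
(Something like this; the Polly package version shown is indicative of the 2.1 release rather than an exact pin.)

<Project Sdk="Microsoft.NET.Sdk.Web">

  <PropertyGroup>
    <TargetFramework>netcoreapp2.1</TargetFramework>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="Microsoft.AspNetCore.App" />
    <PackageReference Include="Microsoft.Extensions.Http.Polly" Version="2.1.0" />
  </ItemGroup>

</Project>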

Applying a Policy

The Microsoft.Extensions.Http.Polly package includes an extension method called AddPolicyHandler on the IHttpClientBuilder that we can use to add a handler which will wrap all requests made using an instance of that client in a Polly policy. The IHttpClientBuilder is returned when we define a named or typed client.

We can then use the extensions in our ConfigureServices method…
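
(A sketch; the base address is illustrative.)

public void ConfigureServices(IServiceCollection services)
{
    services.AddHttpClient("github", client =>
    {
        client.BaseAddress = new Uri("https://api.github.com/");
    })
    // Wrap all requests from this client in a 10-second timeout policy.
    .AddPolicyHandler(Policy.TimeoutAsync<HttpResponseMessage>(TimeSpan.FromSeconds(10)));
}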

In this example, we’re defining a client named “github” and we’ve used the AddPolicyHandler method to pass in a timeout policy. The policy you provide here must be an IAsyncPolicy<HttpResponseMessage>. This policy will timeout any requests after 10 seconds.

Reusing Policies

When using Polly, where possible, it is a good practice to define policies once and share them in cases where the same policy should be applied. This way, to change the rules for a policy, those changes only need to be made in one place. Also, it ensures that the policy is allocated only once. Certainly, policies such as the circuit breaker need to be shared if multiple callers expect to run through the same circuit breaker instance. 

For this example, we’ll declare the timeout policy from the last example once and share it with two named clients…
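
(A sketch; the second client name is purely illustrative.)

public void ConfigureServices(IServiceCollection services)
{
    // Declare the policy once so that both clients share the same instance.
    var timeoutPolicy = Policy.TimeoutAsync<HttpResponseMessage>(TimeSpan.FromSeconds(10));

    services.AddHttpClient("github")
        .AddPolicyHandler(timeoutPolicy);

    services.AddHttpClient("other")
        .AddPolicyHandler(timeoutPolicy);
}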

We’ll look at another option for policy reuse a little later in this post when we explore using a PolicyRegistry.

Transient Fault Handling

When dealing with HTTP requests, the most common scenarios we want to handle are transient faults. As this is a common requirement, the Microsoft.Extensions.Http.Polly package includes a specific extension that we can use to quickly set up policies which handle transient faults.

For example, to add a basic retry when a transient fault occurs for requests from a named client we can register the retry policy as follows:
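
(A sketch, using an arbitrary count of three retries.)

services.AddHttpClient("github")
    .AddTransientHttpErrorPolicy(builder => builder.RetryAsync(3));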

In this case, all requests made through the client will retry when certain failure conditions are met. The AddTransientHttpErrorPolicy method takes a Func<PolicyBuilder<HttpResponseMessage>, IAsyncPolicy<HttpResponseMessage>>. The PolicyBuilder here will be preconfigured to handle HttpRequestExceptions, any responses returning a 5xx status code and also any responses with a 408 (request timeout) status code. This should be suitable for many situations. If you require the policy to apply under other conditions, you will need to use a different overload to pass in a more specific policy.

Be aware that when performing retries, we need to consider idempotency. Retrying a HTTP GET is a pretty safe operation. If we’ve made the call and not received any response, we can safely retry the call without any danger. However, consider what might happen if we retry a HTTP POST request. In that case, we have to be more careful since it’s possible that the original request was actually received, but the response we received suggested a failure. In that case, retrying could lead to duplication of data, or corruption of the data stored in the downstream system. Here, you need to have more knowledge of what the downstream service will do if it receives the same request more than once. Is retrying a safe operation? When you own the downstream service, it is easier to control this. You might, for example, use some unique identifier to prevent duplicate POSTs.

When you have less control of the downstream system or you know that a duplicate POST might have negative consequences, you will need to control your policy more carefully. An option that might be suitable is to define different named/typed clients. You could create one for those requests that have no side effects and another for those that do. You can then use the correct client for the action being taken. However, this might become a little difficult to manage. A better option is to use an overload of AddPolicyHandler which gives us access to the HttpRequestMessage so that policies can be applied conditionally. That overload looks like this:

AddPolicyHandler(Func<HttpRequestMessage, IAsyncPolicy<HttpResponseMessage>> policySelector)

You’ll note that the policySelector delegate here has access to the HttpRequestMessage and is expected to return an IAsyncPolicy<HttpResponseMessage>. We don’t have access to a PolicyBuilder set up to handle transient faults as we did in our earlier example. If we want to handle the common transient errors, we’ll need to define the expected conditions for our policy. To make this easier, the Polly project includes a helper extension which sets up a PolicyBuilder ready to handle the common transient errors. To use the extension method we need to add the Polly.Extensions.Http package from NuGet.

We can then call HttpPolicyExtensions.HandleTransientHttpError() to get a PolicyBuilder that is configured with the transient fault conditions. We can use that PolicyBuilder to create a suitable retry policy which can then be conditionally applied when the request is a HTTP GET. In this example, any other HTTP methods use the NoOp policy.
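
(A sketch of that conditional registration; three retries is again an arbitrary choice.)

IAsyncPolicy<HttpResponseMessage> retryPolicy = HttpPolicyExtensions
    .HandleTransientHttpError()
    .RetryAsync(3);

IAsyncPolicy<HttpResponseMessage> noOpPolicy = Policy.NoOpAsync<HttpResponseMessage>();

services.AddHttpClient("github")
    // Only apply the retry policy to GET requests; everything else runs through NoOp.
    .AddPolicyHandler(request =>
        request.Method == HttpMethod.Get ? retryPolicy : noOpPolicy);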

Using a PolicyRegistry

The last example I want to cover in this post is a basic demonstration of how policies can be applied from a policy registry. To support policy reuse, Polly provides the concept of a PolicyRegistry which is essentially a container for policies. These can be defined at application startup by adding policies into the registry. The registry can then be passed around and used to access the policies by name.

The extensions available on the IHttpClientBuilder also support adding Polly based handlers to a client using a registry.
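
(A sketch of how the pieces fit together; the policy names and timeout values are illustrative, with “regular” matching the name used below.)

public void ConfigureServices(IServiceCollection services)
{
    // Create a PolicyRegistry and register it with DI.
    var registry = services.AddPolicyRegistry();

    registry.Add("regular", Policy.TimeoutAsync<HttpResponseMessage>(TimeSpan.FromSeconds(10)));
    registry.Add("long", Policy.TimeoutAsync<HttpResponseMessage>(TimeSpan.FromSeconds(30)));

    services.AddHttpClient("github")
        .AddPolicyHandlerFromRegistry("regular");
}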

First, we must register a PolicyRegistry with DI. The Microsoft.Extensions.Http.Polly package includes some extension methods to make this simple. In the above example, I call the AddPolicyRegistry method which is an extension on the IServiceCollection. This will create a new PolicyRegistry and register it in DI as the implementation for IPolicyRegistry<string> and IReadOnlyPolicyRegistry<string>. The method returns the registry so that we have access to add policies to it.

In this example, we’ve added two timeout policies and given them names. Now when registering a client we can call the AddPolicyHandlerFromRegistry method available on the IHttpClientBuilder. This takes the name of the policy we want to use. When the factory creates instances of this named client, it will add the appropriate handler, wrapping calls in the “regular” timeout policy which will be retrieved from the registry.

Summary

As a long time user of Polly, I’m very happy to see the integration being added with IHttpClientFactory. Together these libraries make it really easy to get up and running with HttpClient instances that are able to handle transient faults seamlessly. The examples I’ve shown are quite basic and general, but I hope they give the idea of how policies can be used and registered. For more detailed Polly documentation and examples, I recommend you check out the Polly wiki. It was great being involved in some of the early discussions with both the ASP.NET and Polly teams when this integration was being designed as I was able to suggest the usefulness of the policy registry extensions.

Other Posts in this Series

Part 1 – An introduction to HttpClientFactory
Part 2 – Defining Named and Typed Clients
Part 3 – Outgoing request middleware with handlers
Part 4 – This post

Contributing to the Microsoft ASP.NET Documentation
My experience of writing for docs.microsoft.com

Back in February I spotted an issue on the ASP.NET Core Docs repository. The issue was a requirement for new documentation about the IHttpClientFactory feature being added in ASP.NET Core 2.1.

I’d been following the work on IHttpClientFactory for a while and had written a couple of posts about the functionality based on the nightly builds. I’d been keen to try my hand at helping with the docs.microsoft.com site for a while and this seemed like a great chance to contribute. In this post I’ll share my experience.

What is docs.microsoft.com?

Before I go further I should explain what docs.microsoft.com is. It’s Microsoft’s new (introduced in 2016) documentation portal, built from the ground up as a replacement for a number of prior documentation sites such as TechNet and MSDN. I believe it started with documentation for .NET Core / ASP.NET Core and has quickly grown to include documentation for many other Microsoft products and services.

One of the exciting and interesting changes is that many of the product teams have made their documentation open source. This allows members of the community to submit content and correct any mistakes they find. With more rapidly evolving products, keeping up is a real challenge. Allowing the community to assist is a great move and I believe has really improved the quality and usefulness of the content.

Additionally, an amazing team of managers and content writers has been assembled at Microsoft to build and maintain the improved documentation.

Getting Involved

So, back to my experience!

The first step was to show an interest and reach out to offer my contribution. I did this by commenting on the GitHub issue for the feature.

Scott Addie, a senior content developer at Microsoft who works on the ASP.NET docs team, quickly responded to take up my offer. At this point we started to outline the content in collaboration with Ryan and Glenn from the ASP.NET team. You can follow that part of the story in the issue comments.

Once the plan and outline was agreed I was free to start work on the content.

Documentation

To get started, all I needed to do was to fork and clone the repository on GitHub. If those concepts are new to you, check out my YouTube playlist for some introductory videos on how that process works.

With the docs content on my machine I started a branch and began outlining some content. All of the documentation is written in markdown. I prefer to use Visual Studio Code for markdown editing so I opened it up and created a document.

I began in the same way as I do for my blog posts, by outlining some initial sections and jotting down some notes. I like to organise my plan early on as it allows me to figure out a flow for the content and how I will build up information. Once the outline was in place I started to flesh out content. Often this required me to research the feature by trying out samples and reading the source code. Writing documentation (and blog posts) is a great way to learn more about a product or feature as you really look more closely than you might normally.

I made an initial pull request to the docs repository so that I could show the team my work and get feedback as I went. Being new to contributing I wanted to make sure I was on the right track. Scott was quickly on the case and offered some valuable feedback.

One of the big differences between writing for my blog and writing for the documentation is the style. In my blog I’m often referring to my personal experience and sharing with readers in a conversational style. For the docs the style is less personal and more concise. When reading documentation, users are often looking to get going quickly so producing tight, clear content is important. At first I found the switch in styles a little difficult, but after a while I was having to correct myself less often.

From there the work continued whenever I could get time to spend on the documentation. I was making contributions in the evenings, weekends and during my lunch breaks at work.

Building a Sample Application

One of the other requirements for good documentation is a sample application. These are often made available in the docs repository and are intended to provide a quick start for developers. Scott asked if it would be possible for me to produce the sample for the feature. The sample would also be used to support code snippets in the documentation. This is a really clever part of the docfx functionality which allows code snippets to be referenced using a special syntax in the markdown file. The code from the sample is then pulled into the documentation by a build process and stays up to date as the sample changes. The syntax even supports highlighting specific lines in the code for clarity.

Building the sample was a little complicated. We were just into the preview 1 of ASP.NET Core 2.1 at this stage, but some of the features of IHttpClientFactory hadn’t made it into that first preview. I was using nightly builds to experiment with the latest available functionality, but getting this working in a sample application proved complicated. I needed to match up the nightly SDK and ASP.NET Core builds to get things building and at times it would simply not build at all!

In the end we agreed that I would wait for the preview 2 release before putting too much time into the sample. Once preview 2 landed I was able to pick it up and build out the code.

Working out how to demonstrate things simply and accurately, but using code that would be suitable if copied and pasted, was a careful balance. IHttpClientFactory is made up of a lot of extension method calls to configure the client instances. I found that before long I had a long list for the various options in my Startup class. Breaking these into snippet regions meant I could include the relevant samples in the documentation and break up the method a little.

During my trip to Seattle for my first ever MVP summit I was lucky enough to meet Ryan Nowak, Glenn Condron and Damian Edwards from the ASP.NET team to chat about the feature and the documentation. During the MVP hack day I spent a bit of time refining the code sample and even got a live code review on one part of it from Glenn and Damian! Thanks guys!

Review and Refinement

Over a period of about 2 months I worked on the documentation and the sample application. It took longer than I expected mostly due to keeping up with the changes and fitting it into my free time. Finally though, this week I was pretty much done and ready for a review by the team.

A few changes were suggested including fixing up some grammar and spelling as well as some feedback on preferred styles and idioms for the sample. It was very valuable feedback and has turned my content into what I hope is some useful documentation.

You can follow the history and comments on the pull request.

Summary

I want to conclude this post with a huge thank you to the team at Microsoft. I was a little nervous when I first started out, but was quickly put at ease by everyone I worked with. Scott in particular was very patient and helpful. Luke Latham also offered some great feedback to tidy up the content and language of the documentation.

Glenn and Ryan helped with some questions about the IHttpClientFactory feature as well as offering advice for the sample.

Thanks also go out to Dylan from the App vNext team, who maintain Polly, for helping to clarify some of the Polly functionality and offering some great last-minute suggestions to tidy up a few sentences. Having so many people cast an eye over the content really helped tune it and make it clear and concise.

I’ve really enjoyed the collaboration and I have personally learned a lot along the way. It’s a great feeling to have contributed to something which I hope will help many developers start using IHttpClientFactory in their code.

If you’re thinking that this journey sounds fun, I recommend you check out the contributing guide and take a look for an issue you can help with. Even small fixes and corrections are most welcome and the team are a great bunch of people to work with.

HttpClientFactory in ASP.NET Core 2.1 (Part 3)
Outgoing request middleware with handlers

In my previous posts in this series (An Introduction to HttpClientFactory and Defining Named and Typed Clients) I introduced some core concepts and then showed some examples of using the new IHttpClientFactory feature in ASP.NET Core 2.1. It’s been a while since those first two posts but I’d like to continue this series by looking at the concept of outgoing request middleware with handlers.

IMPORTANT NOTE: the features shown here require the current preview build of the SDK and the .NET Core and ASP.NET Core libraries. I won’t cover how to get those in this post. At the time of writing we’re in preview 2 of .NET Core and ASP.NET Core 2.1. This preview should be reasonably feature complete but things may still change. If you want to try this out today you can get the preview 2 installers but I recommend waiting until at least the RC before producing any production code.

DelegatingHandlers

To be clear from the outset; many of the pieces involved in this part of the feature have existed for a long time. HttpClientFactory simply makes the consumption of these building blocks easier, through a more composable and clear API.

When making HTTP requests, there are often cross cutting concerns that you may want to apply to all requests through a given HttpClient. This includes things such as handling errors by retrying failed requests, logging diagnostic information or perhaps implementing a caching layer to reduce the number of HTTP calls on heavily used flows.

For those familiar with ASP.NET Core, you will also likely be familiar with the middleware concept. DelegatingHandlers offer an almost identical concept but in reverse; when making outgoing requests.

You can define a chain of handlers as a pipeline, which will all have the chance to process an outgoing HTTP request before it is sent. These handlers may choose to modify headers programmatically, inspect the body of the request or perhaps log some information about the request.

The HttpRequestMessage flows through each handler in turn until it reaches the final inner handler. This handler is what will actually dispatch the HTTP request across the wire. This inner handler will also be the first to receive the response. At this point the response passes back through the pipeline of handlers in the reverse order. Again, each handler can inspect, modify or use the response as necessary. Perhaps for certain request paths you want to apply caching of the returned data, for example.

 

IHttpClientFactory - DelegatingHandler outgoing middleware pipeline flow

In the diagram above you can see this pipeline visualised.

 

Much like ASP.NET Core middleware, it is also possible for a handler to short-circuit the flow and return a response immediately. One example where this might be useful is to enforce certain rules you may have in place. For example, you could create a handler which checks if an API key header is present on outgoing requests. If this is missing, then it doesn’t pass the request along to the next handler (avoiding an actual HTTP call) and instead generates a failure response which it returns to the caller.

Before IHttpClientFactory and its extensions you would need to manually pass a handler instance (or chain of handlers) into the constructor for your HttpClient instance. That instance will then process any outgoing requests through the handlers it has been supplied.

With IHttpClientFactory we can more quickly apply one or more handlers by defining them when registering our named or typed clients. Now, anytime we get an instance of that named or typed client from the HttpClientFactory, it will be configured with the required handlers. The easiest way to show this is with some code.

Creating a Handler

We’ll start by defining two handlers. In order to keep this code simple these aren’t going to be particularly realistic in terms of function. They will however show the key concepts. As we’ll see in future posts, there are ways to achieve similar results without having to write our own handlers.

To create a handler we can simply create a class which inherits from the DelegatingHandler abstract class. We can then override the SendAsync method to add our own functionality.
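
(A sketch of a timing handler along those lines; the logger usage and message text are illustrative.)

using System.Diagnostics;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Logging;

public class TimingHandler : DelegatingHandler
{
    private readonly ILogger<TimingHandler> _logger;

    public TimingHandler(ILogger<TimingHandler> logger)
    {
        _logger = logger;
    }

    protected override async Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        var stopwatch = Stopwatch.StartNew();

        _logger.LogInformation("Starting request");

        // Pass the request on to the rest of the handler pipeline and await the response.
        var response = await base.SendAsync(request, cancellationToken);

        _logger.LogInformation($"Finished request in {stopwatch.ElapsedMilliseconds}ms");

        return response;
    }
}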

In our example this will be our outer handler. A Stopwatch is started before calling and awaiting the base handler’s SendAsync method which will return a HttpResponseMessage. At this point the external request has completed. We can log out the total time taken for the request to flow through any other handlers, out to the endpoint over HTTP and for the response to be received.

Just to keep things interesting let’s create a second handler. This one will check for the existence of a header and if it is missing, will return an immediate response, short-circuiting the handler pipeline and avoiding an unnecessary HTTP call.
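
(A sketch of such a handler; the X-API-KEY header name and the response content are assumptions.)

using System.Net;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

public class ValidateHeaderHandler : DelegatingHandler
{
    protected override async Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        // Short-circuit: if the expected header is missing, return a response immediately
        // without passing the request along, so no HTTP call is made.
        if (!request.Headers.Contains("X-API-KEY"))
        {
            return new HttpResponseMessage(HttpStatusCode.BadRequest)
            {
                Content = new StringContent("The X-API-KEY header must be set.")
            };
        }

        return await base.SendAsync(request, cancellationToken);
    }
}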

Registering Handlers

Now that we have created the handlers we wish to use, the final step is to register them with the dependency injection container and define a client. We perform this work in the ConfigureServices method of the Startup class.
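
(A sketch of that registration; the base address is illustrative.)

public void ConfigureServices(IServiceCollection services)
{
    services.AddTransient<TimingHandler>();
    services.AddTransient<ValidateHeaderHandler>();

    services.AddHttpClient("github", client =>
    {
        client.BaseAddress = new Uri("https://api.github.com/");
    })
    .AddHttpMessageHandler<TimingHandler>()          // outermost custom handler
    .AddHttpMessageHandler<ValidateHeaderHandler>(); // innermost custom handler
}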

The first two lines register each handler with the service collection which will be used to build the final service provider. These need to be transient so that a new instance is provided each time a new HttpClient is created.

Next, we define a client. In this example I’m using a named client for simplicity. Check out my previous post in this series for more detail about named and typed clients. The AddHttpClient method in this case returns an IHttpClientBuilder. We can call additional extension methods on this builder. Here we are calling the generic AddHttpMessageHandler method. This method takes the type for the handler as its generic parameter.

The order of registration matters here. We start by registering the outermost handler. This handler will be the first to inspect the request and the last to see the response. In this case we want our timing handler to record the complete time taken for the whole request flow, including time spent in any inner handlers, so we have added it first. We can call AddHttpMessageHandler again, this time with our ValidateHeaderHandler. This will be our final custom handler before the request is passed to the inner HttpClientHandler, which sends it over the network.

At this point we now have an outgoing middleware pipeline defined on our named ‘github’ client. When a request comes through this client it will first pass into the TimingHandler, then into the ValidateHeaderHandler. Assuming the header is found, the request will be passed on and sent out to the URI in the request. When the response comes back it first returns through the ValidateHeaderHandler, which does nothing with the response. It then passes on to the TimingHandler where the total elapsed time is logged, and then finally it is returned to the calling code.

Summary

While I have shown how easy it is to create a DelegatingHandler and then add it to your HttpClient outgoing pipeline using the new extensions, the team hope that for most cases you will not find yourself needing to craft your own handlers. Common concerns such as logging are taken care of for you within IHttpClientFactory (we’ll look at logging in a future post). For more complex but common requirements such as retrying failed requests and caching responses a much better option is to use a third party library called Polly. The team at Microsoft have made a great decision to integrate with Polly.

In my next post I’ll investigate the options for adding Polly based handlers with IHttpClientFactory. In the mean time I suggest you check out this post by Scott Hanselman where he covers the Polly extensions. You can also check out the Polly wiki for more information.

Other Posts in this Series

Part 1 – An introduction to HttpClientFactory
Part 2 – Defining Named and Typed Clients
Part 3 – This post
Part 4 – Integrating with Polly for transient fault handling

HttpClientFactory in ASP.NET Core 2.1 (Part 1)
An Introduction to HttpClientFactory

TL;DR;

A new HttpClientFactory feature is coming in ASP.NET Core 2.1 which helps to solve some common problems that developers may run into when using HttpClient instances to make external web requests from their applications.

Introduction

This blog post has been in the works since mid-October 2017, which was when I first noticed the new HttpClientFactory repository appear on GitHub. I was intrigued by its appearance and wondered what the ASP.NET team were up to, so I went diving into the available code that the repo contained at the time. I’ve then kept an eye on it ever since, watching as the team evolved the feature by reading the commits, issues and pull request discussions.

Recently the feature has started to be talked about more openly and was included in a recent talk by Damian Edwards and David Fowler at NDC London. In fact on the day of writing this introduction it’s been shown on both Jeff Fritz’s livestream show and the ASP.NET Community Standup. The opinion of Ryan Nowak, one of the main ASP.NET developers for the feature, is that it’s reasonably stable to begin writing about it now.

NOTE: Please bear in mind that this post is written prior to the official preview release of .NET Core 2.1 by using the nightly builds of ASP.NET Core 2.1 and the .NET Core SDK. Therefore, things may change before and during the public previews (hopefully we’ll get these within the next month) and also before the final release of 2.1 based on feedback received from those previews.

What is HttpClientFactory?

In the words of the ASP.NET Team it is “an opinionated factory for creating HttpClient instances” and is a new feature coming with the release of ASP.NET Core 2.1. Depending on your past experience using HttpClient, you may or may not be aware of some of the pitfalls that can be encountered, sometimes without even being aware that you have a problem.

The first issue is when you create too many HttpClients within your code which can in turn create two problems…

  1. It’s inefficient as each one will have its own connection pool for the remote server. This means you pay the cost of reconnecting to that remote server for every client you create.
  2. The bigger problem you can have if you create a lot of them is that you can run into socket exhaustion where you have basically used up too many sockets too fast. There is a limit on how many sockets you can have open at one time. When you dispose of the HttpClient, the connection it had open remains open for up to 240 seconds in a TIME_WAIT state (in case any packets from the remote server still come through).

HttpClient implements IDisposable and this often leads developers to follow the normal pattern when using an IDisposable object, creating it within a using block. This ensures that the object is properly disposed of once you’re done with it and it has gone out of scope. If you want to read more about this, it is well documented by the ASP.NET Monsters in their post “You’re using HttpClient wrong and it’s destabilizing your software”.

A preferred approach therefore is to reuse HttpClient instances so that connections can also be reused. HttpClient is a mutable object but as long as you are not mutating it, it is actually thread safe and can be shared. A common approach is therefore to register it as a singleton with a DI framework or to create a wrapper around it which holds a static instance.

However, this creates a new problem. Using a single HttpClient in this way will keep connections open and not respect the DNS Time To Live (TTL) setting. Now the connections will never get DNS updates so the server you are talking to will never have its address updated. This is entirely possible in some situations where you are balancing over many hosts that may go away over time or perhaps rolling out new services using blue/green deployments. If the server is gone, the IP your connection is using may no longer respond to requests that you make through the single HttpClient. You can read more about this issue at “Singleton HttpClient? Beware of this serious behaviour and how to fix it” and “Singleton HttpClient doesn’t respect DNS changes”.

HttpClientFactory is designed to help start solving these problems and provides a new mechanism to create HttpClient instances that are properly managed for us behind the scenes. It will “do the right thing” for us and we can focus on other things! While the above problems are mentioned in reference to HttpClient, in fact the source of the issues actually occurs on the HttpClientHandler, which is used by HttpClient. The HttpClientFactory manages the lifetime of the handlers so that we have a pool of them which can be reused, while also rotating them so that DNS doesn’t get stale.

The expensive part of using HttpClient is actually creating the HttpClientHandler and the connection. Having these pooled in this manner means we can get more efficient use of the connections on our system. When you use the HttpClientFactory to request a HttpClient, you do in fact get a new instance each time, which means we don’t have to worry about mutating its state. This HttpClient may (or may not) use an existing HttpClientHandler from the pool and therefore use an existing open connection.

By default, each new HttpClientHandler (which derives from HttpMessageHandler) will be created with an active lifetime of 2 minutes. This can be controlled on a per-named-client basis when creating its handler chain. Once the lifetime is reached, the handler will not immediately be disposed of and will instead be placed into the expired pool. Any clients depending on the original handler chain can continue using it without any issues. There is a background job checking the expired pool to see if all references for the handler have gone out of scope, at which point it can then be disposed of. Any new requests for a new client once the handler chain has expired will get a new handler chain.
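
In the released bits this is exposed via the SetHandlerLifetime extension on IHttpClientBuilder; as a sketch (the client name and the five-minute value are illustrative, and named clients are covered in a later post):

services.AddHttpClient("github")
    .SetHandlerLifetime(TimeSpan.FromMinutes(5)); // the default is 2 minutes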

This works reasonably well, but there are other things underway on the .NET Core side which might improve the situation further. The .NET Core team are working on a new ManagedHandler which should manage DNS more correctly and in principle can be kept around for longer, meaning connections can be shared even more efficiently. This new handler is also being designed to function more consistently across the different operating systems. Until that work is completed (which might be in the 2.1 time frame) the pooling of handlers above is a reasonable workaround.

How to use HttpClientFactory

IMPORTANT NOTE: The features and code samples shown here require the current nightly builds of the SDK and the .NET Core and ASP.NET Core libraries. I won’t cover how to get setup to use those in this post. Treat this as an early preview of how the feature will work so that you can begin planning why, where and how you will use it once 2.1 is publicly available. Unless you have an urgent need to try this out today, I’d recommend waiting until the 2.1 previews are released, hopefully within the next month or so.

In this post I’ll concentrate on one of the most basic ways to get started with the HttpClientFactory. For this example, we’ll start by creating a simple WebAPI project and then edit the csproj file to upgrade it to use the new .NET Core and ASP.NET Core 2.1 bits. First we need to set it to be based on netcoreapp2.1 (not yet in official preview) and then include two packages which we’ll need. For this post I’m pinning those to specific preview nightly build versions available on the ‘dev’ MyGet feeds. After doing this our project file looks like this:

Next we need to head over to our Startup.cs file and register a service. The HttpClientFactory includes various ServiceCollection extensions. The one we’ll use for this example is:

services.AddHttpClient();

Behind the scenes this will register a few required services, one of which will be an implementation of IHttpClientFactory. Next we’ll update the default ValuesController to make use of this feature:

Here we are first adding a dependency on IHttpClientFactory which will be injected into our controller by the DI system. The IHttpClientFactory allows us to ask for and receive a HttpClient instance.

Within our Get action we are then using the HttpClientFactory to create a client. Behind the scenes the HttpClientFactory will create a new HttpClient for us. But wait, didn’t I say earlier that using a new HttpClient for each request is bad? Indeed I did; but in fact that was a little bit of misdirection. The HttpClient itself is not really the problem, it’s the HttpClientHandler which it uses to make the HTTP calls that is the actual issue. It’s this which opens the connections to the external services that will then remain open and block sockets, even if the main HttpClient is disposed of.

HttpClientFactory pools these HttpClientHandler instances and manages their lifetime in order to solve some of the issues I mentioned earlier. Each time we ask for a HttpClient, we get a new instance, which may (or may not) use an existing HttpClientHandler. The HttpClient itself is not too heavy to construct so this is okay.

Once created, the HttpClientHandlers are pooled and kept around for around 2 minutes by default. This means that any new requests for CreateClient may share a handler and therefore the connections also. While a HttpClient lives, its handler will remain available and again this will share the connection.

After the two minutes, each HttpClientHandler is marked as expired. The expired state simply marks them so that they are no longer used when creating any new HttpClient instances. They are not immediately disposed however, as other HttpClient instances may be using them. The HttpClientFactory uses a background service which monitors the expired handlers and once they are no longer referenced, can then dispose of them properly, allowing their connections to be closed also.

This pooling feature helps reduce the risk of socket exhaustion and the refreshing process helps solve the DNS update problem by ensuring we don’t have long lived instances of HttpClientHandlers and connections hanging around. It’s a reasonable compromise which is managed for us by making use of the HttpClientFactory feature.

Summary

I’ll leave it there for this introductory post. In future posts I’ll dive into some of the more advanced ways we can use HttpClientFactory as there are some nice features to show off. We’ll look at how we can create named HttpClient instances with configuration and also creating our own typed clients. This is where the feature will really begin to shine. Hopefully you’ll have seen, even in this basic example, how it improves use cases where you have a requirement to make HTTP calls in the most correct and efficient way. We don’t need to think about how we manage the lifetime of the clients or worry about running into DNS issues. I’m looking forward to using this in production once ASP.NET Core 2.1 is released.

Other Posts in this Series

Part 1 – This post
Part 2 – Defining Named and Typed Clients
Part 3 – The Outgoing Request Middleware Pipeline with Handlers
Part 4 – Integrating with Polly for transient fault handling