Contributing to the Microsoft ASP.NET Documentation: My experience of writing for docs.microsoft.com

Back in February I spotted an issue on the ASP.NET Core Docs repository. The issue was a requirement for new documentation about the IHttpClientFactory feature being added in ASP.NET Core 2.1.

I’d been following the work on IHttpClientFactory for a while and had written a couple of posts about the functionality based on the nightly builds. I’d been keen to try my hand at helping with the docs.microsoft.com site for a while and this seemed like a great chance to contribute. In this post I’ll share my experience.

What is docs.microsoft.com?

Before I go further I should explain what docs.microsoft.com is. It’s Microsoft’s new (introduced in 2016) documentation portal, built from the ground up as a replacement for a number of prior documentation sites such as TechNet and MSDN. I believe it started with documentation for .NET Core / ASP.NET Core and has quickly grown to include documentation for many other Microsoft products and services.

One of the exciting and interesting changes is that many of the product teams have made their documentation open source. This allows members of the community to submit content and correct any mistakes they find. With more rapidly evolving products, keeping up is a real challenge. Allowing the community to assist is a great move and I believe has really improved the quality and usefulness of the content.

Additionally, an amazing team of managers and content writers has been assembled at Microsoft to build and maintain the improved documentation.

Getting Involved

So, back to my experience!

The first step was to show an interest and reach out to offer my contribution. I did this by commenting on the GitHub issue for the feature.

Scott Addie, a senior content developer at Microsoft who works on the ASP.NET docs team, quickly responded to take up my offer. At this point we started to outline the content in collaboration with Ryan and Glenn from the ASP.NET team. You can follow that part of the story in the issue comments.

Once the plan and outline were agreed, I was free to start work on the content.

Documentation

To get started, all I needed to do was to fork and clone the repository on GitHub. If those concepts are new to you, check out my YouTube playlist for some introductory videos on how that process works.

With the docs content on my machine I started a branch and began outlining some content. All of the documentation is written in markdown. I prefer to use Visual Studio Code for markdown editing so I opened it up and created a document.

I began in the same way as I do for my blog posts, by outlining some initial sections and jotting down some notes. I like to organise my plan early on as it allows me to figure out a flow for the content and how I will build up information. Once the outline was in place I started to flesh out content. Often this required me to research the feature by trying out samples and reading the source code. Writing documentation (and blog posts) is a great way to learn more about a product or feature as you really look more closely than you might normally.

I made an initial pull request to the docs repository so that I could show the team my work and get feedback as I went. Being new to contributing I wanted to make sure I was on the right track. Scott was quickly on the case and offered some valuable feedback.

One of the big differences between writing for my blog and writing for the documentation is the style. In my blog I’m often referring to my personal experience and sharing with readers in a conversational style. For the docs the style is less personal and more concise. When reading documentation, users are often looking to get going quickly so producing tight, clear content is important. At first I found the switch in styles a little difficult, but after a while I was having to correct myself less often.

From there the work continued whenever I could get time to spend on the documentation. I was making contributions in the evenings, weekends and during my lunch breaks at work.

Building a Sample Application

One of the other requirements for good documentation is a sample application. These are often made available in the docs repository and are intended to provide a quick start for developers. Scott asked if it would be possible for me to produce the sample for the feature. The sample would also be used to support code snippets in the documentation. This is a really clever part of the docfx functionality which allows code snippets to be referenced using a special syntax in the markdown file. The code from the sample is then pulled into the documentation by a build process and stays up to date as the sample changes. The syntax even supports highlighting specific lines in the code for clarity.
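As an illustration, a snippet reference in the markdown looks something like this (the path, snippet name and highlighted line are invented for this example):

```md
[!code-csharp[](samples/HttpClientFactorySample/Startup.cs?name=snippet1&highlight=4)]
```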

Building the sample was a little complicated. We were just into preview 1 of ASP.NET Core 2.1 at this stage, but some of the features of IHttpClientFactory hadn’t made it into that first preview. I was using nightly builds to experiment with the latest available functionality, but getting this working in a sample application proved complicated. I needed to match up the nightly SDK and ASP.NET Core builds to get things building and at times it would simply not build at all!

In the end we agreed that I would wait for the preview 2 release before putting too much time into the sample. Once preview 2 landed I was able to pick it up and build out the code.

Working out how to demonstrate things simply and accurately, but using code that would be suitable if copied and pasted, was a careful balance. IHttpClientFactory is made up of a lot of extension method calls to configure the client instances. I found that before long I had a long list for the various options in my Startup class. Breaking these into snippet regions meant I could include the relevant samples in the documentation and break up the method a little.
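For example, wrapping part of the ConfigureServices method in a named region allows just that block to be referenced from the markdown (the region name here is illustrative):

```csharp
public void ConfigureServices(IServiceCollection services)
{
    #region snippet_AddHttpClient
    services.AddHttpClient();
    #endregion
}
```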

During my trip to Seattle for my first ever MVP summit I was lucky enough to meet Ryan Nowak, Glenn Condron and Damian Edwards from the ASP.NET team to chat about the feature and the documentation. During the MVP hack day I spent a bit of time refining the code sample and even got a live code review on one part of it from Glenn and Damian! Thanks guys!

Review and Refinement

Over a period of about 2 months I worked on the documentation and the sample application. It took longer than I expected mostly due to keeping up with the changes and fitting it into my free time. Finally though, this week I was pretty much done and ready for a review by the team.

A few changes were suggested including fixing up some grammar and spelling as well as some feedback on preferred styles and idioms for the sample. It was very valuable feedback and has turned my content into what I hope is some useful documentation.

You can follow the history and comments on the pull request.

Summary

I want to conclude this post with a huge thank you to the team at Microsoft. I was a little nervous when I first started out, but was quickly put at ease by everyone I worked with. Scott in particular was very patient and helpful. Luke Latham also offered some great feedback to tidy up the content and language of the documentation.

Glenn and Ryan helped with some questions about the IHttpClientFactory feature as well as offering advice for the sample.

Thanks also go out to Dylan from the App vNext team (the maintainers of Polly), who helped clarify some of the Polly functionality and offered some great last minute suggestions to tidy up a few sentences. Having so many people cast an eye over the content really helped tune it and make it clear and concise.

I’ve really enjoyed the collaboration and I have personally learned a lot along the way. It’s a great feeling to have contributed to something which I hope will help many developers start using IHttpClientFactory in their code.

If you’re thinking that this journey sounds fun, I recommend you check out the contributing guide and take a look for an issue you can help with. Even small fixes and corrections are most welcome and the team are a great bunch of people to work with.

ASP.NET Core Dependency Injection – Registering Multiple Implementations of an Interface

In a previous post I covered registering generic types with Dependency Injection. This is one of the less common (and less documented) ways in which services could be registered with the Microsoft DI library. It turns out that you can do more with the DI available in the Microsoft.Extensions.DependencyInjection package than it may first appear.

Another “advanced” pattern that can be achieved is to register multiple concrete implementations for an interface. These can later be injected as an IEnumerable of that interface. In this post we’ll explore a quick example of how we can do that.

Let’s first discuss when and why you might want to do this. The example I have is based on a service I’ve been involved with previously. This service is responsible for reading messages from an Amazon SQS queue, enriching them and then saving them to ElasticSearch. Based on the property values in the message it conditionally enriches the data. Initially we only had a couple of possible enrichers, but over time we’ve added more.

The way we decided to implement this was to define an IEnricher interface. That interface looks a little like this:
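(A minimal sketch; the Message type here stands in for our queue message model.)

```csharp
using System;

public class Message
{
    public string Type { get; set; }
    public string IpAddress { get; set; }
    public string City { get; set; }
    public DateTime? Date { get; set; }
    public string DayOfWeek { get; set; }
}

public interface IEnricher
{
    // Determines whether this enricher can enrich the given message.
    bool CanEnrich(Message message);

    // Performs the actual enrichment, setting additional properties on the message.
    void Enrich(Message message);
}
```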

There are two methods on the interface. The first is called CanEnrich and this will take the message object and determine if this enricher can do enrichment on the message. The second method, Enrich, then performs the actual enrichment of the message.

We can define zero or more implementations of this interface for the different enriching activities we may want to perform.

Here’s an example of an enricher:
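(A sketch; the IGeoLocationService and the message properties are hypothetical.)

```csharp
public interface IGeoLocationService
{
    string LookupCity(string ipAddress);
}

public class CityEnricher : IEnricher
{
    private readonly IGeoLocationService _geoLocationService;

    public CityEnricher(IGeoLocationService geoLocationService)
    {
        _geoLocationService = geoLocationService;
    }

    // Only failed login messages which include an IP address can be enriched.
    public bool CanEnrich(Message message) =>
        message.Type == "FailedLogin" && !string.IsNullOrEmpty(message.IpAddress);

    public void Enrich(Message message) =>
        message.City = _geoLocationService.LookupCity(message.IpAddress);
}
```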

This enricher tries to look up the city for any failed login messages coming through our queue. It can only do this if the IP address is present.

And here’s another:
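(Another sketch, using the nullable Date and string DayOfWeek properties from the message model above.)

```csharp
public class DayOfWeekEnricher : IEnricher
{
    // Only messages which include a date can be enriched.
    public bool CanEnrich(Message message) => message.Date.HasValue;

    public void Enrich(Message message) =>
        message.DayOfWeek = message.Date.Value.DayOfWeek.ToString();
}
```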

This enricher populates a DayOfWeek property, which is then used for aggregating in ElasticSearch. It can only do this if the incoming message contains a Date.

Both of these are quite basic and contrived examples. The functionality isn’t that important here though.

The enrichers can now be registered with the ServiceCollection wherever that happens in your application:
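For example (the singleton lifetime here is illustrative; other lifetimes work too):

```csharp
services.AddSingleton<IEnricher, CityEnricher>();
services.AddSingleton<IEnricher, DayOfWeekEnricher>();
```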

It’s worth making it clear that the implementations will be added in the order they are registered. They will be returned in that same order when injected into calling code. Depending on your requirements, this may be useful and important. For this example we don’t really care what order we get them in.

To make use of these enrichers we can have them injected wherever we require them and DI is available. Since we’ve registered more than one instance, we ask the DI framework for an IEnumerable<IEnricher> which we can then enumerate over to access all implementations.

A simplified example of this in a caller would look like this:
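(A sketch of a hypothetical consumer.)

```csharp
using System.Collections.Generic;
using System.Linq;

public class MessageProcessor
{
    private readonly IEnumerable<IEnricher> _enrichers;

    public MessageProcessor(IEnumerable<IEnricher> enrichers)
    {
        _enrichers = enrichers;
    }

    public void Process(Message message)
    {
        // Run only the enrichers which are able to enrich this message.
        foreach (var enricher in _enrichers.Where(e => e.CanEnrich(message)))
        {
            enricher.Enrich(message);
        }
    }
}
```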

Here I filter to only the enrichers which can enrich the message we are processing. Then I call the Enrich method on each one in turn.

Where this pattern proves particularly useful is if we imagine we now want to add another enricher. All we have to do is create a class which implements the interface and then ensure it is registered with DI. Now when our caller runs, that enricher will be included in the IEnumerable<IEnricher> which is injected. Our consuming code will make use of the new enricher without any further code changes.

The ASP.NET Core framework uses this same pattern in a number of places. One which I’ve covered in the past is the IHostedService interface. This allows you to define one or more “background” services to run whilst your application is alive. As with my enricher example, all you have to do is create a class implementing IHostedService and then register it with DI. When the application starts it will fire up any registered IHostedService instances in order.

Other posts in this series

ASP.NET Core Dependency Injection – How to Register Generic Types
ASP.NET Core Dependency Injection – Registering Implementations Using Delegates

HttpClientFactory in ASP.NET Core 2.1 (Part 3): Outgoing request middleware with handlers.

In my previous posts in this series (An Introduction to HttpClientFactory and Defining Named and Typed Clients) I introduced some core concepts and then showed some examples of using the new IHttpClientFactory feature in ASP.NET Core 2.1. It’s been a while since those first two posts but I’d like to continue this series by looking at the concept of outgoing request middleware with handlers.

IMPORTANT NOTE: the features shown here require the current preview build of the SDK and the .NET Core and ASP.NET Core libraries. I won’t cover how to get those in this post. At the time of writing we’re in preview 2 of .NET Core and ASP.NET Core 2.1. This preview should be reasonably feature complete but things may still change. If you want to try this out today you can get the preview 2 installers but I recommend waiting until at least the RC before producing any production code.

DelegatingHandlers

To be clear from the outset: many of the pieces involved in this part of the feature have existed for a long time. HttpClientFactory simply makes the consumption of these building blocks easier, through a more composable and clear API.

When making HTTP requests, there are often cross cutting concerns that you may want to apply to all requests through a given HttpClient. This includes things such as handling errors by retrying failed requests, logging diagnostic information or perhaps implementing a caching layer to reduce the number of HTTP calls on heavily used flows.

For those familiar with ASP.NET Core, you will also likely be familiar with the middleware concept. DelegatingHandlers offer an almost identical concept, but in reverse, applied when making outgoing requests.

You can define a chain of handlers as a pipeline, which will all have the chance to process an outgoing HTTP request before it is sent. These handlers may choose to modify headers programmatically, inspect the body of the request or perhaps log some information about the request.

The HttpRequestMessage flows through each handler in turn until it reaches the final inner handler. This handler is what will actually dispatch the HTTP request across the wire. This inner handler will also be the first to receive the response. At this point the response passes back through the pipeline of handlers in the reverse order. Again, each handler can inspect, modify or use the response as necessary. Perhaps for certain request paths you want to apply caching of the returned data, for example.

 

[Diagram: IHttpClientFactory - DelegatingHandler outgoing middleware pipeline flow]

In the diagram above you can see this pipeline visualised.

Much like ASP.NET Core middleware, it is also possible for a handler to short-circuit the flow and return a response immediately. One example where this might be useful is to enforce certain rules you may have in place. For example, you could create a handler which checks if an API key header is present on outgoing requests. If this is missing, then it doesn’t pass the request along to the next handler (avoiding an actual HTTP call) and instead generates a failure response which it returns to the caller.

Before IHttpClientFactory and its extensions you would need to manually pass a handler instance (or chain of handlers) into the constructor for your HttpClient instance. That instance would then process any outgoing requests through the handlers it had been supplied.

With IHttpClientFactory we can more quickly apply one or more handlers by defining them when registering our named or typed clients. Now, anytime we get an instance of that named or typed client from the HttpClientFactory, it will be configured with the required handlers. The easiest way to show this is with some code.

Creating a Handler

We’ll start by defining two handlers. In order to keep this code simple these aren’t going to be particularly realistic in terms of function. They will however show the key concepts. As we’ll see in future posts, there are ways to achieve similar results without having to write our own handlers.

To create a handler we can simply create a class which inherits from the DelegatingHandler abstract class. We can then override the SendAsync method to add our own functionality.

In our example this will be our outer handler, which times the request. A Stopwatch will be started before calling and awaiting the base handler’s SendAsync method, which will return a HttpResponseMessage. At this point the external request has completed. We can log out the total time taken for the request to flow through any other handlers, out to the endpoint over HTTP and for the response to be received.
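(A sketch of such a timing handler; logging via an injected ILogger is an assumption.)

```csharp
using System.Diagnostics;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Logging;

public class TimingHandler : DelegatingHandler
{
    private readonly ILogger<TimingHandler> _logger;

    public TimingHandler(ILogger<TimingHandler> logger)
    {
        _logger = logger;
    }

    protected override async Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        var stopwatch = Stopwatch.StartNew();

        // Pass the request on down the handler pipeline and await the response.
        var response = await base.SendAsync(request, cancellationToken);

        _logger.LogInformation("Request took {ElapsedMilliseconds}ms", stopwatch.ElapsedMilliseconds);

        return response;
    }
}
```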

Just to keep things interesting, let’s create a second handler. This one will check for the existence of a header and, if it is missing, will return an immediate response, short-circuiting the handler pipeline and avoiding an unnecessary HTTP call.
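(Another sketch; the X-API-KEY header name is illustrative.)

```csharp
using System.Net;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

public class ValidateHeaderHandler : DelegatingHandler
{
    protected override async Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        // Short-circuit: if the required header is missing, no HTTP call is made.
        if (!request.Headers.Contains("X-API-KEY"))
        {
            return new HttpResponseMessage(HttpStatusCode.BadRequest)
            {
                Content = new StringContent("The X-API-KEY header must be set before sending the request.")
            };
        }

        return await base.SendAsync(request, cancellationToken);
    }
}
```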

Registering Handlers

Now that we have created the handlers we wish to use, the final step is to register them with the dependency injection container and define a client. We perform this work in the ConfigureServices method of the Startup class.
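(A sketch of that configuration; the GitHub base address is illustrative.)

```csharp
public void ConfigureServices(IServiceCollection services)
{
    services.AddTransient<TimingHandler>();
    services.AddTransient<ValidateHeaderHandler>();

    // Register a named client and build up its outgoing handler pipeline.
    services.AddHttpClient("github", c =>
    {
        c.BaseAddress = new Uri("https://api.github.com/");
    })
    .AddHttpMessageHandler<TimingHandler>()          // outermost handler
    .AddHttpMessageHandler<ValidateHeaderHandler>(); // last custom handler before the inner HttpClientHandler
}
```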

The first two lines register each handler with the service collection which will be used to build the final service provider. These need to be transient so that a new instance is provided each time a new HttpClient is created.

Next, we define a client. In this example I’m using a named client for simplicity. Check out my previous post in this series for more detail about named and typed clients. The AddHttpClient method in this case returns an IHttpClientBuilder. We can call additional extension methods on this builder. Here we are calling the generic AddHttpMessageHandler method. This method takes the type for the handler as its generic parameter.

The order of registration matters here. We start by registering the outermost handler. This handler will be the first to inspect the request and the last to see the response. In this case we want our timing handler to record the complete time taken for the whole request flow, including time spent in any inner handlers, so we have added it first. We then call AddHttpMessageHandler again, this time with our ValidateHeaderHandler. This will be our final custom handler before the request is passed to the inner HttpClientHandler, which sends it over the network.

At this point we now have an outgoing middleware pipeline defined on our named ‘github’ client. When a request comes through this client it will first pass into the TimingHandler, then into the ValidateHeaderHandler. Assuming the header is found, the request will be passed on and sent out to the URI in the request. When the response comes back it first returns through the ValidateHeaderHandler, which does nothing with the response. It then passes on to the TimingHandler, where the total elapsed time is logged, before finally being returned to the calling code.

Summary

While I have shown how easy it is to create a DelegatingHandler and then add it to your HttpClient outgoing pipeline using the new extensions, the team hope that in most cases you will not find yourself needing to craft your own handlers. Common concerns such as logging are taken care of for you within IHttpClientFactory (we’ll look at logging in a future post). For more complex but common requirements such as retrying failed requests and caching responses, a much better option is to use a third party library called Polly. The team at Microsoft have made a great decision to integrate with Polly.

In my next post I’ll investigate the options for adding Polly based handlers with IHttpClientFactory. In the meantime I suggest you check out this post by Scott Hanselman where he covers the Polly extensions. You can also check out the Polly wiki for more information.

Other Posts in this Series

Part 1 – An introduction to HttpClientFactory
Part 2 – Defining Named and Typed Clients
Part 3 – This post
Part 4 – Integrating with Polly for transient fault handling

Library Manager (LibMan) in Visual Studio 2017 (15.7): How to restore client side libraries in ASP.NET Core projects.

UPDATE: 24-May-2018 – It looks like LibMan didn’t make it into the final release of 15.7. It’s in the preview for 15.8.x currently so we may see it when that version lands.

I recently started working on an ASP.NET Core 2.1 Preview 2 sample project. Having been mostly API focused recently, it was the first time that I’d done much with a site that actually renders views for a while. I found myself needing to re-learn my options for client side libraries.

My work on Humanitarian Toolbox is view based, however the project has grown up over a number of years and relies on a combination of NPM and gulp to bring in the required client side libraries.

For a brand new project, the templates will include a lib folder in wwwroot containing the necessary files to support the template features. However, if you’re checking into git, it’s quite normal not to include the lib folder, instead requiring the packages to be pulled on the client machine once cloned.

Full disclosure: I don’t consider myself a client side expert. I’m far more comfortable with the server side code! As such, this post is based on my own (potentially naive) approach to working with client side libraries.

Library Manager

Library Manager is a new feature included in Visual Studio 2017 (as of 15.7 preview 3) that provides new support for managing client side libraries in your projects. In this post I’m going to explore the basics of how I used this in a new project.

To add the Library Manager functionality to a project, simply right click on the project and choose the “Manage Client-Side Libraries…” option.

[Screenshot: Manage client side libraries in Visual Studio 2017]

This will add a single file to your project called libman.json. Note in the screenshot that at this point I don’t have a lib folder under wwwroot.

[Screenshot: libman.json in solution]

The libman.json file will be nearly empty when it’s first added. It includes a version and default provider. There is also an empty array of libraries defined. This is where we’ll add the packages we need for our project.

[Screenshot: Default empty libman.json]

In my example, my front end views only need bootstrap and jquery at this stage. Let’s look at how we can add those to our project. Each library is added as an object. There are a few properties we can set. The tooling offers autocomplete which makes populating this file a pretty straightforward experience. Here is an example of my final libman.json file.
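(The jquery version and the exact file lists here are illustrative.)

```json
{
  "version": "1.0",
  "defaultProvider": "cdnjs",
  "libraries": [
    {
      "library": "twitter-bootstrap@3.3.7",
      "destination": "wwwroot/lib/bootstrap",
      "files": [
        "css/bootstrap.min.css",
        "js/bootstrap.min.js"
      ]
    },
    {
      "library": "jquery@3.3.1",
      "destination": "wwwroot/lib/jquery",
      "files": [
        "jquery.min.js"
      ]
    }
  ]
}
```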

Taking the bootstrap entry as an example, the first value I provide is the name and version of the required library – “twitter-bootstrap@3.3.7”.

Next is the destination for the restored files relative to your project. In this case I’m including them under the wwwroot folder, in a directory called lib and then bootstrap.

Finally, in this example, I specify the individual files I want restored. This is optional. If you don’t include the array of files then all files from the library will be included. I preferred to be a little more selective about those that I needed here.

There’s also a value, which I’m not providing here, that overrides the provider from which the library should be restored. The provider options at this stage are cdnjs or filesystem.

Upon saving this file, the required libraries will be restored into your specified directory.

[Screenshot: Restored lib folder in Visual Studio 2017]

If you want to force a restore, perhaps after first cloning a project, you can do so by right clicking the libman.json file and choosing “Restore Client-Side Libraries”.

[Screenshot: Restore client side libraries with LibMan in Visual Studio 2017]

Another option on this context menu is the “Enable Restore on Build…” option. If you choose this option it will add a NuGet package to the project which will trigger the specified libraries to be restored on build. This is useful for CI / build servers for example (although I’ve not tested that at this stage). Choosing this option will present you with a dialog to confirm you wish to include the NuGet package.

[Screenshot: Confirm adding Microsoft Library Manager build package]

Once you do this a PackageReference will be added to your csproj file for “Microsoft.Web.LibraryManager.Build”.
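The added reference looks something like this (the version number is illustrative):

```xml
<ItemGroup>
  <PackageReference Include="Microsoft.Web.LibraryManager.Build" Version="1.0.113" />
</ItemGroup>
```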

[Screenshot: CSPROJ after adding LibraryManager.Build]

You’ll see the output from this when building your project. In this example I deleted the lib folder before triggering my build and you can see the files being restored as necessary.

[Screenshot: Build output from libman.json (Library Manager)]

That’s it for this post! I’ve not gone too deep into the tooling, but so far this feels like a nice integrated way to specify and restore client side libraries. Certainly I was able to get going with it pretty quickly, and if it means I don’t need to learn about other client-side package managers and tooling, I’m pretty happy with that! I’m sure more seasoned client side developers will be better placed to judge this against the various other ways we can manage client side packages.

Updates to my ASP.NET Core Correlation ID Library: Supporting correlation IDs across ASP.NET Core microservices.

Back in May 2017 I blogged about creating a simple library which supports passing correlation IDs between ASP.NET Core micro-services. The library came about because of a basic requirement we had at work to pass an identifier between related services to enable more useful error logging. By passing an identifier from the first service, through to any further services it then calls; if an exception occurs, we can quickly search for the entire history of that request across the distributed environment.

Since I released that first version to NuGet I have been staggered by the download stats. According to NuGet it now has nearly 27,000 downloads at the time of writing this post. I never really expected it to be used that heavily so this is a really pleasant surprise. I’m very pleased that something I’ve created is helping others with a similar requirement. It is a little daunting to think that so many people are dependent on that library in their code!

Three months ago I released version 2.0 of the library which added the concept of a CorrelationContext. This was something I’d been considering almost immediately after completing version 1.0. An issue with v1 was that I’d chosen to set the TraceIdentifier on the HttpContext to match the correlation ID being passed in via the request headers. In controllers, where the HttpContext is accessible, this was not a major issue since the value of TraceIdentifier could then be read and used in logging. However, to use the correlation ID elsewhere, the only way to access it was via the IHttpContextAccessor. This isn’t registered in ASP.NET Core by default, so some users of the library would have to register it to make full use of the correlation ID.

I based my version 2 changes on the HttpContext and HttpContextAccessor in ASP.NET Core and this seems to have worked quite nicely so far. This required a breaking change for the library since it needed to register some services to support the new CorrelationContext.

Today I released version 2.1 of the library. This version adds two new configuration options that can be set when registering the middleware. One of these options came as a result of a GitHub issue requesting that it be possible to disable updating the TraceIdentifier with the correlation ID. This is now possible since the ID is passed around in the CorrelationContext and I needn’t rely on the HttpContext. To avoid breaking changes I added the option with its default setting behaving as it did before. I may look to change this default in the next major release.

I took the opportunity to add another new option that determines whether the correlation ID should match the TraceIdentifier or be a GUID in situations where an ID is not present in the header. For some users I can see this being useful, and at work I’m considering the move to a GUID for our correlation ID.
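(A sketch of setting these options when registering the middleware; the values shown are illustrative.)

```csharp
app.UseCorrelationId(new CorrelationIdOptions
{
    UseGuidForCorrelationId = true, // generate a GUID when no correlation ID header is present
    UpdateTraceIdentifier = false   // leave HttpContext.TraceIdentifier untouched
});
```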

A final change I was able to incorporate came as a result of another feature request via the project’s GitHub issues. In this case it was to include the configured correlation ID header name on the CorrelationContext. This resulted in the first external PR on the project, which I was very happy to receive. Thanks to Julien for his contribution.

I hope that people using the library are happy with these changes. As ever I’m happy to take feedback and ideas via GitHub if there are use cases that it doesn’t currently support.

UPDATE: It seems I wasn’t as careful as I thought about breaking changes. One did slip through in this release as I updated the interface and implementation for the ICorrelationContextFactory to support the new property on the context. If you’re consuming the library this is not something you generally need to access, but if you’re mocking it for unit testing it’s possible this will break there. Apologies! It turns out it’s harder than you think to avoid breaking changes once a library is released publicly!