ASP.NET Core 2.2 First Look – Endpoint Routing (aka Dispatcher)
What is Endpoint Routing in ASP.NET Core 2.2?

The first preview of ASP.NET Core 2.2 is due (very) soon and it will be our first chance to test the changes and new features which are expected to be released by the end of this year.

ASP.NET Core 2.2 marks the next minor release of the ASP.NET Core framework, following on from the ASP.NET Core 2.1 release at the end of May. With these minor releases, the team are able to provide new features to us quickly and iterate on an already great product. ASP.NET Core 2.2 will be a “Current” release for those wanting to take advantage of the latest features more quickly. For some, the “current” releases may not be suitable: while they offer newer features, they do not carry the same long-term support commitment as the long-term support (LTS) releases. If you choose the “current” release train then you need to keep updating as new “current” releases become available (see the ASP.NET Core support policy for more details).

I’ve been watching the features planned for ASP.NET Core 2.2 since version 2.1 shipped. There are quite a few things which look pretty interesting. Today I want to take an early look at a new feature whose final name is Global Endpoint Routing. You may also see this referred to by its former name “Dispatcher”.

Update 18-08-2018: James Newton-King, who works on the ASP.NET team, has confirmed that the name for this feature will be Endpoint Routing.

NOTE: This post is based on reading and watching some limited early coverage of Dispatcher (Global Routing) from the ASP.NET team over the last few months. I’ve also spent a small amount of time looking at the source code and some early samples to form the content below. As a result, there are things I may not have 100% correct! This post is written prior to the first preview of ASP.NET Core 2.2 and as a result, some of this could change before the final release of ASP.NET Core 2.2. If you’re reading this in the future, treat it with some caution and keep an eye out for more recent posts where I will cover using the released versions of the feature. If you’re reading this in the past, I’m pleased to see that time travel is now a thing!!

Update 18-08-2018: After feedback from James Newton-King who works on the ASP.NET team, I understand that in the 2.2 timeframe the only supported way to use Endpoint Routing will be via MVC.

What is Global Endpoint Routing (the feature formerly known as Dispatcher)?

The current routing story with ASP.NET Core 2.1 MVC is that we define routes within MVC, using either convention-based routing (configured via a RouteBuilder when we register MVC in the middleware pipeline) or route attributes on our action methods. Personally, I tend towards the attribute approach for the APIs I build.
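For example, with the attribute approach, an API controller might be mapped like this (the controller shown is illustrative):

[Route("api/[controller]")]
public class ProfilesController : Controller
{
    // Matched for GET requests to /api/profiles
    [HttpGet]
    public IActionResult GetAll() => Ok(new[] { "profile1", "profile2" });
}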

When a request reaches MVC, the route information is used to determine which Controller and Action should be invoked to handle the request, based on the URL path. At this point, MVC can invoke that action which will then return the response. If a suitable route is not found then a 404 is returned instead.

One issue with this approach to routing is that as the request flows through the middleware chain, all other middleware has no idea of where that request is eventually going to be handled.

With the Global Endpoint Routing feature, routing decisions can occur earlier in the pipeline, so that incoming requests can be mapped to their eventual endpoint before MVC is even invoked. This gives other middleware an opportunity to make more reasoned choices and process requests armed with the knowledge of which Endpoint will eventually handle the request.

Implemented as middleware, the URL matching can now be registered to run very early in the application middleware pipeline, after which a set of known Endpoints will be registered. When a request passes through the pipeline, other middleware will be able to inspect the final matched Endpoint and view/update its associated metadata.

In addition, Global Endpoint Routing looks like it should allow developers to build ASP.NET Core applications that don’t use MVC, but which can still rely on routing to handle requests. Developers can provide their own EndpointDataSource, complete with a set of route patterns and the corresponding RequestDelegates to be invoked for matches. There are opportunities here for some very lightweight APIs to be built without the sometimes unnecessary additional overhead of MVC.

Note: Based on the feedback from James Newton-King, this feature is expected to work with MVC as the supported way to use it in 2.2. For the 3.0 timeframe, a public API for non-MVC use is expected to be ready.

The original ASP.NET Core 2.2 features announcement (where the feature is known as “Dispatcher”) included a good example of where this Global Endpoint Routing concept can be useful. CORS is currently implemented as both a feature of MVC and as middleware. This is necessary as some requests will be handled outside of MVC, such as serving static files. However, to apply CORS for requests which do reach MVC, the policy can’t be applied until the MVC routing selects the controller and action for the request. Global Routing will enable CORS to be implemented more consistently since the routing and Endpoints will be known up front when a request is matched through the middleware.

Global Endpoint Routing is also integrated with the latest ASP.NET Core 2.2 MVC functionality, allowing MVC to work on top of this new Global Endpoint Routing feature. MVC in ASP.NET Core 2.2 includes code changes to support building up a list of available Endpoints for Global Endpoint Routing, using the existing MVC route provider.

I plan to dive into all of this more deeply in future posts. I’m currently spending a fair bit of time in the Microsoft.AspNetCore.Routing source code understanding the building blocks and Types involved.

A Small Sample

I’ll end with a basic example of what the services and middleware pipeline for an ASP.NET Core 2.2 application using Global Endpoint Routing could look like. In this case, there’s no MVC involved (this is based on the current RoutingSample from the Routing repository).

Update 18-08-2018: After feedback from James Newton-King who works on the ASP.NET team, I understand that in the 2.2 timeframe the only supported way to use Endpoint Routing will be via MVC.

Inside the Startup class, we must register the routing services in the ConfigureServices method as follows…
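The following sketch is based on the RoutingSample at the time of writing; this API surface is pre-release and still changing, so the exact type names and constructor signatures are illustrative:

public void ConfigureServices(IServiceCollection services)
{
    // Register the routing services with the DI container.
    services.AddRouting();

    // Manually register an EndpointDataSource containing a single known
    // Endpoint, which matches the "/" route pattern.
    services.AddSingleton<EndpointDataSource>(new DefaultEndpointDataSource(new[]
    {
        new MatcherEndpoint(
            next => httpContext => httpContext.Response.WriteAsync("Hello, world!"),
            RoutePatternFactory.Parse("/"),
            0,
            EndpointMetadataCollection.Empty,
            "Hello world endpoint")
    }));
}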

This sample manually adds a DefaultEndpointDataSource with a single Endpoint, which is not necessary if you wish to use the MVC routes; MVC will populate the Endpoints for you.

Once registered we can add Global Endpoint Routing to the pipeline as middleware in the Configure method…
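A minimal sketch of that pipeline, using the two extension methods described below, looks like this:

public void Configure(IApplicationBuilder app)
{
    // Match the incoming request to an Endpoint and store the result
    // in the HttpContext.Features collection.
    app.UseEndpointRouting();

    // Any middleware registered here can inspect the matched Endpoint
    // and its metadata.

    // Invoke the RequestDelegate for the matched Endpoint.
    app.UseEndpoint();
}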

The first middleware, UseEndpointRouting, is responsible for inspecting the incoming request and matching it to an Endpoint. The matched Endpoint is set in the HttpContext.Features collection. At this point, all other middleware can retrieve the Endpoint from the Features collection as necessary.

Since we’re not using MVC in this sample we need something to invoke the matched RequestDelegate for the Endpoint. UseEndpoint registers the middleware which does exactly that.

Summary

Hopefully, this has been a useful primer of what to expect with the Global Endpoint Routing feature in ASP.NET Core 2.2. I expect this to be available in the first preview of ASP.NET Core 2.2 (perhaps due by the end of the month). I have started building up some fairly extensive notes as I explore the code and I hope to put together some follow-up posts demonstrating more concrete samples once the previews are formally available (at which point the API surface is less likely to change). I may also produce a deeper dive into how this all functions internally as it’s quite interesting code to explore.

How to record metrics to DataDog from ASP.NET Core
Recording metrics and events to DataDog in development and production in AWS ECS.

I’m currently architecting and building a new microservices-based system at work. A priority of mine has been to learn from the experience of building our first microservices project by putting a greater focus on logging and monitoring. We freely acknowledge that we didn’t get this as good as we would have liked in our first implementation. Logging is hugely important in a microservices design. Tracking down failures and bugs can be very difficult without good logging and, even more importantly, a way to search and view those logs.

But logging isn’t what I want to cover in this post. Alongside logging, something we felt we were missing in our previous project was detailed performance and usage metrics. We can gain some insight into the state of the system using built-in AWS metrics for our ECS services and also by monitoring, again via CloudWatch, things such as SQS queue metrics. This is all reasonably useful, but at times it would be great to know a bit more about the services themselves.

With this new project, I’ve started instrumenting it with DataDog so that the application can record useful metrics that allow us to track its overall health, as well as understand how things are performing and what’s actually being used. The goal is to build up a much better profile of the application so that we can more easily see whether changes that we have deployed have improved the performance or whether they have degraded it. Having proper baselines of the application performance is key to being able to make such determinations.

The choice of DataDog was made simply because our systems team already use it for monitoring the overall AWS environment. After a few quick enquiries, I found that recording metrics from the application code would be possible thanks to the availability of a C# library from DataDog called “dogstatsd-csharp-client”. My only concern is that the library seems to be a little stalled, with no commits since Nov 30th 2017. There is also a third-party library called DatadogSharp which I may investigate in the future. Their readme suggests it is more performant than DataDog’s own library. Putting that aside for now, I decided I’d go ahead and prototype based on the DataDog library and revisit these concerns, if they proved founded, once I was underway. I have built a small wrapper library over the static DogStatsd methods so I have a layer of abstraction in case I decide to change the underlying client. It also allows me to centralise the use of some common tags which I wanted to include on all of my metrics.

In this post, I want to focus on the slightly higher level concepts by discussing how I got this working during development and more recently, deploying to a prototype in production running in AWS ECS.

Development

The DogStatsd client sends messages over UDP to an agent server which will collect these and eventually send them up to the DataDog service. You can read more about that flow on the DataDog DogStatsD page. Therefore, to test my code during development, I needed to have an agent server running somewhere.

In my case, I opted for a Docker image which I could run locally. After a bit of Googling, I ended up at the DogStatsD6 Docker Image GitHub repository. This image is hosted on the Docker Hub in the DataDog DogStatsD repository.

To begin working with this locally, I created a simple docker-compose file. I’ve not included some other elements I was running here for logging via the ELK stack as those are not necessary for this example. My docker-compose file looked like this:
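# A minimal sketch – the ELK logging services mentioned above are omitted
# and the API key value is a placeholder.
version: '3'
services:
  dogstatsd:
    image: datadog/dogstatsd:latest
    ports:
      - "8125:8125/udp"
    environment:
      - DD_API_KEY=<your-datadog-api-key>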

The above example specifies a dogstatsd service which will be started using the DogStatsD6 image. By default, this DogStatsD agent server will be listening on UDP port 8125 for data. I expose that port to port 8125 on my host (development) machine. For the DogStatsD server to be able to send its data to DataDog we need to provide an API key. This can be done by passing the key into the DD_API_KEY environment variable.

At this point, I could run a docker-compose up -d command to start an instance of the DogStatsD container.

Sending Metrics and Events from an ASP.NET Core based API

As I mentioned earlier, I’ve created a wrapper library for sending DataDog metrics and events via the methods from the dogstatsd-csharp-client library. I won’t be covering that here as I want to keep this post focused on the core principles. Instead, let’s look at how we can begin with recording metrics and events to DataDog.

The first step, before we can send data to DataDog, is to add the NuGet package for the dogstatsd-csharp-client library.

You can do this by searching for it via the NuGet package manager, or as I did, by editing the csproj file directly and adding the package reference.

<PackageReference Include="DogStatsD-CSharp-Client" Version="3.1.0" />

Once the library is added we must configure the client. All of the functionality for this client library is implemented as static methods. The first method we will call is the DogStatsd.Configure method. This takes a StatsdConfig object as a parameter. We’ll add a small static method in our Startup.cs class as follows:
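// Requires: using StatsdClient;
// The prefix value here is illustrative.
private static void ConfigureDataDog(string serverName)
{
    var dogstatsdConfig = new StatsdConfig
    {
        StatsdServerName = serverName, // hostname or IP of the DogStatsd server
        StatsdPort = 8125,             // the default DogStatsd UDP port
        Prefix = "myapp"               // prepended to all metric and event names
    };

    DogStatsd.Configure(dogstatsdConfig);
}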

We build up the configuration object using a server name string which we will pass into this helper method. This will be the hostname or IP of the server running the DogStatsd server. We also set the port it will be listening on. Finally, we can add a prefix which will be added to the front of any metric and event names. This is useful to make it easier to search for them in the DataDog application.

We then call the Configure method, passing in the config object.

Next, we need to call this method. We can do that from our Configure method in the Startup class. This is called once at application startup so is an appropriate place to do this. First, we’ll load the DogStatsd server name from ASP.NET Core configuration as this allows us to easily set it per environment. We’ll then call our ConfigureDataDog method, passing that server hostname through:
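// The "DogStatsd:Server" configuration key is illustrative.
public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    var dogStatsdServer = Configuration["DogStatsd:Server"];
    ConfigureDataDog(dogStatsdServer);

    // ... remaining middleware registration ...
}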

We’ll need to include a section for this configuration in our appsettings.json file:
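The section and key names just need to match whatever is used when reading the configuration (illustrative here):

{
  "DogStatsd": {
    "Server": "127.0.0.1"
  }
}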

During development, we can set this to the localhost IP address (127.0.0.1). Since we are running our DogStatsd server within a container and exposing its port through to the host, we can access it this way. In production, we can then set an appropriate production hostname for the DogStatsd server.

At this point, everything is ready to go. We can now use the static DogStatsd methods to send metrics and events from our code.

The client library readme includes examples of the methods we can call but as a basic example, here’s some code we can call from our application to record a metric whenever a profiles endpoint is called.

DogStatsd.Increment("all_profiles_endpoint_call");

At this stage, we have added basic metrics to our application and have a DogStatsd server running locally in a container to collect and forward those metrics to DataDog.

Production

NOTE: Since trialling this approach, we have learned that because each sidecar is considered a new host, this may have cost implications with the DataDog pricing model. We are moving away from this approach to daemon mode containers (1 per ECS host). I will update accordingly once we are happier with that approach. For now, this still works, but do consider how it affects your billing.

The final stage is to get this working in production. I felt there might be a few options here but the one I wanted to try was to run the DogStatsd server as a sidecar container. We use AWS Elastic Container Service (ECS) to run our production services as Docker containers.

To achieve this we will add a second container to our task definition for our ECS service. ECS services are the logical structure that represents a unit of scale and deployment for containers in ECS. A service can run multiple containers, although generally that is not something you need or want to do since it limits your scaling options. However, for this requirement, running a second supporting container is a reasonable use case. This container is directly used only by our main microservice and should scale with that service.

The final ECS task definition looks like this:
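A condensed sketch is shown below; the image names and environment variable names are illustrative, and most of the real settings (CPU, memory, ports, logging and so on) are trimmed:

{
  "family": "my-api",
  "containerDefinitions": [
    {
      "name": "api",
      "image": "<account-id>.dkr.ecr.<region>.amazonaws.com/my-api:latest",
      "essential": true,
      "links": [ "dogstatsd" ],
      "environment": [
        { "name": "DogStatsd__Server", "value": "dogstatsd" }
      ]
    },
    {
      "name": "dogstatsd",
      "image": "datadog/dogstatsd:latest",
      "essential": false,
      "environment": [
        { "name": "DD_API_KEY", "value": "<your-datadog-api-key>" }
      ]
    }
  ]
}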

The new section in my case is the dogstatsd container definition (the second entry in containerDefinitions above). The task definition takes one or more container definitions which define the containers that will be started as part of the tasks for this service.

The dogstatsd one is quite simple. We tell it to use the same image as we used in development, “datadog/dogstatsd”, which will be pulled from the Docker Hub. We can pass in the API key via an environment variable. We can mark this container as non-essential since we don’t want its failure to cascade and cause our service to restart unexpectedly.

The next change is to add the links to the main API server container definition. This ensures that it will have a network link with the dogstatsd container. Finally, we can add an environment variable for the API container which sets the hostname for the DogStatsd server. This will override the 127.0.0.1 setting from our appsettings.json file. Since we have linked the containers, we can reference the DogStatsd server by its container name.

With these changes made it was then a case of registering this new task definition and updating our ECS service. When the service starts tasks, it will start an instance of the DogStatsd container which our application containers can use to record their metrics and events.

Summary

It’s still early days, but this is working quite nicely in the production environment. I will be reviewing it over time as I learn more about the various elements at play. This is likely not the end of the story or our journey with DataDog. Hopefully, this is enough information to get you started if you are following a similar path. 

Contributing to the Microsoft ASP.NET Documentation
My experience of writing for docs.microsoft.com

Back in February I spotted an issue on the ASP.NET Core Docs repository. The issue was a requirement for new documentation about the IHttpClientFactory feature being added in ASP.NET Core 2.1.

I’d been following the work on IHttpClientFactory for a while and had written a couple of posts about the functionality based on the nightly builds. I’d been keen to try my hand at helping with the docs.microsoft.com site for a while and this seemed like a great chance to contribute. In this post I want to share my experience.

What is docs.microsoft.com?

Before I go further I should explain what docs.microsoft.com is. It’s Microsoft’s new (introduced in 2016) documentation portal, built from the ground up as a replacement for a number of prior documentation sites such as TechNet and MSDN. I believe it started with documentation for .NET Core / ASP.NET Core and has quickly grown to include documentation for many other Microsoft products and services.

One of the exciting and interesting changes is that many of the product teams have made their documentation open source. This allows members of the community to submit content and correct any mistakes they find. With more rapidly evolving products, keeping up is a real challenge. Allowing the community to assist is a great move and I believe has really improved the quality and usefulness of the content.

Additionally, an amazing team of managers and content writers has been assembled at Microsoft to build and maintain the improved documentation.

Getting Involved

So, back to my experience!

The first step was to show an interest and reach out to offer my contribution. I did this by commenting on the GitHub issue for the feature.

Scott Addie, a senior content developer at Microsoft who works on the ASP.NET docs team, quickly responded to take up my offer. At this point we started to outline the content in collaboration with Ryan and Glenn from the ASP.NET team. You can follow that part of the story in the issue comments.

Once the plan and outline was agreed I was free to start work on the content.

Documentation

To get started, all I needed to do was to fork and clone the repository on GitHub. If those concepts are new to you, check out my YouTube playlist for some introductory videos on how that process works.

With the docs content on my machine I started a branch and began outlining some content. All of the documentation is written in markdown. I prefer to use Visual Studio Code for markdown editing, so I opened it up and created a document.

I began in the same way as I do for my blog posts, by outlining some initial sections and jotting down some notes. I like to organise my plan early on as it allows me to figure out a flow for the content and how I will build up information. Once the outline was in place I started to flesh out content. Often this required me to research the feature by trying out samples and reading the source code. Writing documentation (and blog posts) is a great way to learn more about a product or feature as you really look more closely than you might normally.

I made an initial pull request to the docs repository so that I could show the team my work and get feedback as I went. Being new to contributing I wanted to make sure I was on the right track. Scott was quickly on the case and offered some valuable feedback.

One of the big differences between writing for my blog and writing for the documentation is the style. In my blog I’m often referring to my personal experience and sharing with readers in a conversational style. For the docs the style is less personal and more concise. When reading documentation, users are often looking to get going quickly so producing tight, clear content is important. At first I found the switch in styles a little difficult, but after a while I was having to correct myself less often.

From there the work continued whenever I could get time to spend on the documentation. I was making contributions in the evenings, weekends and during my lunch breaks at work.

Building a Sample Application

One of the other requirements for good documentation is a sample application. These are often made available in the docs repository and are intended to provide a quick start for developers. Scott asked if it would be possible for me to produce the sample for the feature. The sample would also be used to support code snippets in the documentation. This is a really clever part of the docfx functionality which allows code snippets to be referenced using a special syntax in the markdown file. The code from the sample is then pulled into the documentation by a build process and stays up to date as the sample changes. The syntax even supports highlighting specific lines in the code for clarity.
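As an example, a snippet reference in the markdown looks something like this (the path and snippet name here are illustrative):

[!code-csharp[](httpclientfactory/samples/Startup.cs?name=snippet1&highlight=2)]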

Building the sample was a little complicated. We were just into the preview 1 of ASP.NET Core 2.1 at this stage, but some of the features of IHttpClientFactory hadn’t made it into that first preview. I was using nightly builds to experiment with the latest available functionality, but getting this working in a sample application proved tricky. I needed to match up the nightly SDK and ASP.NET Core builds to get things building, and at times it would simply not build at all!

In the end we agreed that I would wait for the preview 2 release before putting too much time into the sample. Once preview 2 landed I was able to pick it up and build out the code.

Working out how to demonstrate things simply and accurately, but using code that would be suitable if copied and pasted, was a careful balance. IHttpClientFactory is made up of a lot of extension method calls to configure the client instances. I found that before long I had a long list for the various options in my Startup class. Breaking these into snippet regions meant I could include the relevant samples in the documentation and break up the method a little.

During my trip to Seattle for my first ever MVP summit I was lucky enough to meet Ryan Nowak, Glenn Condron and Damian Edwards from the ASP.NET team to chat about the feature and the documentation. During the MVP hack day I spent a bit of time refining the code sample and even got a live code review on one part of it from Glenn and Damian! Thanks guys!

Review and Refinement

Over a period of about 2 months I worked on the documentation and the sample application. It took longer than I expected mostly due to keeping up with the changes and fitting it into my free time. Finally though, this week I was pretty much done and ready for a review by the team.

A few changes were suggested including fixing up some grammar and spelling as well as some feedback on preferred styles and idioms for the sample. It was very valuable feedback and has turned my content into what I hope is some useful documentation.

You can follow the history and comments on the pull request.

Summary

I want to conclude this post with a huge thank you to the team at Microsoft. I was a little nervous when I first started out, but was quickly put at ease by everyone I worked with. Scott in particular was very patient and helpful. Luke Latham also offered some great feedback to tidy up the content and language of the documentation.

Glenn and Ryan helped with some questions about the IHttpClientFactory feature as well as offering advice for the sample.

Thanks also goes out to Dylan from the App vNext team (who maintain Polly), who helped clarify some of the Polly functionality and offered some great last-minute suggestions to tidy up a few sentences. Having so many people cast an eye over the content really helped tune it and make it clear and concise.

I’ve really enjoyed the collaboration and I have personally learned a lot along the way. It’s a great feeling to have contributed to something which I hope will help many developers start using IHttpClientFactory in their code.

If you’re thinking that this journey sounds fun, I recommend you check out the contributing guide and take a look for an issue you can help with. Even small fixes and corrections are most welcome and the team are a great bunch of people to work with.

Library Manager (LibMan) in Visual Studio 2017 (15.7)
How to restore client side libraries in ASP.NET Core projects.

UPDATE: 24-May-2018 – It looks like LibMan didn’t make it into the final release of 15.7. It’s in the preview for 15.8.x currently so we may see it when that version lands.

I recently started working on an ASP.NET Core 2.1 Preview 2 sample project. Having been mostly API focused recently, it was the first time that I’d done much with a site that actually renders views for a while. I found myself needing to re-learn my options for client side libraries.

My work on Humanitarian Toolbox is view based, however the project has grown up over a number of years and relies on a combination of NPM and gulp to bring in the required client side libraries.

For a brand new project, the templates will include a lib folder in wwwroot which has the necessary files which support the template features. However, if you’re checking into git, it’s quite normal to not include the lib folder, instead requiring the packages to be pulled on the client machine once cloned.

Full disclosure: I don’t consider myself a client side expert. I’m far more comfortable with the server side code! As such, this post is based on my own (potentially naive) approach to working with client side libraries.

Library Manager

Library Manager is a new feature included in Visual Studio 2017 (as of 15.7 preview 3) that provides new support for managing client side libraries in your projects. In this post I’m going to explore the basics of how I used this in a new project.

To add the Library Manager functionality to a project, simply right click on the project and choose the “Manage Client-Side Libraries…” option.

Manage client side libraries in Visual Studio 2017

This will add a single file to your project called libman.json. Note in the screenshot that at this point I don’t have a lib folder under wwwroot.

libman.json in solution

The libman.json file will be nearly empty when it’s first added. It includes a version and default provider. There is also an empty array of libraries defined. This is where we’ll add the packages we need for our project.

Default empty libman.json
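In other words, the starting point is simply:

{
  "version": "1.0",
  "defaultProvider": "cdnjs",
  "libraries": []
}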

In my example, my front end views only need bootstrap and jquery at this stage. Let’s look at how we can add those to our project. Each library is added as an object. There are a few properties we can set. The tooling offers autocomplete which makes populating this file a pretty straightforward experience. Here is an example of my final libman.json file.
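The jQuery version and the exact file selections below are illustrative:

{
  "version": "1.0",
  "defaultProvider": "cdnjs",
  "libraries": [
    {
      "library": "twitter-bootstrap@3.3.7",
      "destination": "wwwroot/lib/bootstrap",
      "files": [
        "css/bootstrap.min.css",
        "js/bootstrap.min.js"
      ]
    },
    {
      "library": "jquery@3.3.1",
      "destination": "wwwroot/lib/jquery",
      "files": [
        "jquery.min.js"
      ]
    }
  ]
}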

Taking the bootstrap entry as an example, the first value I provide is the name and version of the required library – “twitter-bootstrap@3.3.7”.

Next is the destination for the restored files relative to your project. In this case I’m including them under the wwwroot folder, in a directory called lib and then bootstrap.

Finally, in this example, I specify the individual files I want restored. This is optional. If you don’t include the array of files then all files from the library will be included. I preferred to be a little more selective about those that I needed here.

There’s also a value I’m not providing here which overrides the provider from which the library should be restored. The provider options at this stage are cdnjs or filesystem.

Upon saving this file, the required libraries will be restored into your specified directory.

Restored lib folder in Visual Studio 2017

If you want to force a restore, perhaps after first cloning a project, you can do so by right clicking the libman.json file and choosing “Restore Client-Side Libraries”.

Restore client side libraries with lib man in Visual Studio 2017

Another option on this context menu is the “Enable Restore on Build…” option. If you choose this option it will add a NuGet package to the project which will trigger the specified libraries to be restored on build. This is useful for CI / build servers for example (although I’ve not tested that at this stage). Choosing this option will present you with a dialog to confirm you wish to include the NuGet package.

Confirm adding Microsoft Library Manager build package

Once you do this, a PackageReference will be added to your csproj file for “Microsoft.Web.LibraryManager.Build”.

CSPROJ after adding LibraryManager.Build
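The added reference looks something like this (the version shown is a placeholder):

<ItemGroup>
  <PackageReference Include="Microsoft.Web.LibraryManager.Build" Version="1.0.*" />
</ItemGroup>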

You’ll see the output from this when building your project. In this example I deleted the lib folder before triggering my build and you can see the files being restored as necessary.

Build output from libman.json (Library Manager)

That’s it for this post! I’ve not gone too deep into the tooling, but so far this feels like a nice, integrated way to specify and restore client-side libraries. Certainly I was able to get going with it pretty quickly and if it means I don’t need to learn about other client-side package managers and tooling, I’m pretty happy with that! I’m sure more seasoned client-side developers will be better placed to judge this against the various other ways we can manage client-side packages.

ASP.NET Core Anatomy – How does UseStartup work?
Exploring how UseStartup results in your Startup methods being registered and executed.

I was recently explaining to someone the basics of the program flow for an ASP.NET Core application. One of the things included in the templates for ASP.NET Core and used very often is the UseStartup<T> extension method on the IWebHostBuilder. This gets called from our Program.cs when initialising the application. UseStartup allows us to set the Startup class which defines the services and middleware pipeline for an ASP.NET Core application.
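For reference, Program.cs in the default 2.0 template wires this up like so:

public class Program
{
    public static void Main(string[] args)
    {
        BuildWebHost(args).Run();
    }

    public static IWebHost BuildWebHost(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .UseStartup<Startup>()
            .Build();
}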

During my explanation I realised that while I know the result of calling this method, I didn’t know how things are wired up under the hood; so I decided to investigate!

NOTE 1: This content is valid for the 2.0.0 release codebase. I don’t expect that the fundamentals will change dramatically in the future, but I have seen some commits which tweak the code a little for 2.1!

NOTE 2: This is a deep dive blog post looking at internal ASP.NET Core code. You don’t need to know this to use ASP.NET Core to build applications – please don’t let this scare you off! This is intended for those of you, who like me, have a curious mind about the internals of ASP.NET Core. I’m conscious that this may get quite hard to follow as we get deep into the guts of the code as there’s a lot of use of delegates that makes explaining the flow quite challenging. I’ll try my best to make it clear!

How are Startup methods registered and executed?

The generic UseStartup<TStartup> method calls down to the main IWebHostBuilder UseStartup(this IWebHostBuilder hostBuilder, Type startupType) extension, passing in the Type for the Startup class. That method looks like this (full source on GitHub):
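// Paraphrased from the 2.0.0 source – see the full source on GitHub for the exact code.
public static IWebHostBuilder UseStartup(this IWebHostBuilder hostBuilder, Type startupType)
{
    var startupAssemblyName = startupType.GetTypeInfo().Assembly.GetName().Name;

    return hostBuilder
        .UseSetting(WebHostDefaults.ApplicationKey, startupAssemblyName)
        .ConfigureServices(services =>
        {
            if (typeof(IStartup).GetTypeInfo().IsAssignableFrom(startupType.GetTypeInfo()))
            {
                services.AddSingleton(typeof(IStartup), startupType);
            }
            else
            {
                services.AddSingleton(typeof(IStartup), sp =>
                {
                    var hostingEnvironment = sp.GetRequiredService<IHostingEnvironment>();
                    return new ConventionBasedStartup(StartupLoader.LoadMethods(
                        sp, startupType, hostingEnvironment.EnvironmentName));
                });
            }
        });
}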

The ConfigureServices method on the IWebHostBuilder is called, which expects an Action<IServiceCollection> parameter. In this case it’s defined as a lambda expression which acts on the IServiceCollection. The WebHostBuilder class has a private List<Action<WebHostBuilderContext, IServiceCollection>> field named _configureServicesDelegates. The call to ConfigureServices in the code above will add the lambda as a new item to this list, which will be used later to construct an instance of IStartup.

At this stage a list of delegates will have been registered within the WebHostBuilder. The main process begins when Build() is called on the WebHostBuilder. I won’t cover everything that Build does, since it’s not all relevant to the scope of this post. Here’s the code of that method (full source on GitHub):
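// Condensed from the 2.0.0 source; configuration, logging and error handling are elided.
public IWebHost Build()
{
    var hostingServices = BuildCommonServices(out var hostingStartupErrors);
    var applicationServices = hostingServices.Clone();
    var hostingServiceProvider = hostingServices.BuildServiceProvider();

    // ... several steps elided ...

    var host = new WebHost(
        applicationServices,
        hostingServiceProvider,
        _options,
        _config,
        hostingStartupErrors);

    host.Initialize();

    return host;
}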

The Build method calls a private method, BuildCommonServices (full source on GitHub), which as the name suggests will add some common framework services into the current ServiceCollection. I’ll skip over most of the code in this method. Unless we’ve changed the WebHostOptions to define different assemblies to load Startup from, the execution flow will eventually hit code which loops over the _configureServicesDelegates list, calling each delegate in turn. That piece of code looks like this:
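foreach (var configureServices in _configureServicesDelegates)
{
    configureServices(_context, services);
}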

In the sample application which I used in order to debug through the Hosting codebase, I have two delegates registered, one from a FakeServer (needed so that the WebHostBuilder doesn’t throw an exception) and one from the UseStartup call. The first delegate simply registers the FakeServer as the implementation for IServer inside the ServiceCollection. The second delegate will now execute the lambda expression which was registered in the UseStartup method. Let’s remind ourselves what that looked like:
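services.AddSingleton(typeof(IStartup), sp =>
{
    var hostingEnvironment = sp.GetRequiredService<IHostingEnvironment>();
    return new ConventionBasedStartup(StartupLoader.LoadMethods(
        sp, startupType, hostingEnvironment.EnvironmentName));
});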

If our Startup class implements IStartup directly, it can and will be registered as the implementation type for IStartup directly. In my sample (which is based on the default ASP.NET Core templates) our Startup class does not implement IStartup and will rely on conventions instead. In this case an AddSingleton overload is used which takes a Func<IServiceProvider, object> as its implementation factory. This Func delegate will be called when the first concrete IStartup implementation is requested from the DI container.

At this point the registration of IStartup is included in the ServiceCollection and ready to be called by the framework. The WebHostBuilder.Build method continues to execute and constructs a new WebHost instance, which includes passing in an IServiceCollection (a clone of the current hostingServices variable). It also passes in a ServiceProvider, built using the current state of the hostingServices ServiceCollection. This represents the application services which have been registered so far by the framework.

Once we have a WebHost instance, its Initialize method is called. This calls down to a private BuildApplication method (full source on GitHub). Hold tight, lots of stuff starts to happen at this stage. I’ll try to pick out the parts I think are relevant to the use of our Startup class.

The BuildApplication method does some basic checks to make sure the relevant services are available for the application to start. One of the checks is that the ServiceProvider which was passed in includes an implementation for IStartup. This particular check happens inside a helper method called EnsureStartup():
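private void EnsureStartup()
{
    if (_startup != null)
    {
        return;
    }

    _startup = _hostingServiceProvider.GetRequiredService<IStartup>();
}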

The call to _hostingServiceProvider.GetRequiredService<IStartup>(); will trigger the DI framework to construct an instance of IStartup as per its registration. Due to the use of delegates, we need to head back to the UseStartup method and the lambda we passed in as the Func<IServiceProvider, object> implementation factory – the AddSingleton registration we saw a moment ago.

We can see that we will get back a newly constructed ConventionBasedStartup instance as the implementation for IStartup. The ConventionBasedStartup constructor accepts a StartupMethods object (full source on GitHub) as its parameter. A static StartupLoader.LoadMethods method (full source on GitHub) is used to generate a StartupMethods instance. This object acts as a holder for three properties.
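Paraphrased, it looks like this:

// Simplified view of StartupMethods – see the full source on GitHub.
public class StartupMethods
{
    public object StartupInstance { get; }
    public Func<IServiceCollection, IServiceProvider> ConfigureServicesDelegate { get; }
    public Action<IApplicationBuilder> ConfigureDelegate { get; }
}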

The most important of these properties for this discussion are the delegates for ConfigureServices and Configure. These will be set up with the code which should execute when the framework calls those methods further down in the WebHost initialisation. Ultimately, the code for these delegates is expected to execute the methods on our Startup class.

The first delegate it will try to find is the ConfigureDelegate. This will be used to build the middleware pipeline for the application. Internally the StartupLoader uses a helper method called FindMethod to do most of the work. This is called from a FindConfigureDelegate method (full source on GitHub). FindMethod is as follows (full source on GitHub):
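// Condensed from the 2.0.0 source; exception messages and some checks are trimmed.
private static MethodInfo FindMethod(Type startupType, string methodName,
    string environmentName, Type returnType = null, bool required = true)
{
    // methodName is a format string, e.g. "Configure{0}" or "Configure{0}Services".
    var methodNameWithEnv = string.Format(CultureInfo.InvariantCulture, methodName, environmentName);
    var methodNameWithNoEnv = string.Format(CultureInfo.InvariantCulture, methodName, "");

    var methods = startupType.GetMethods(BindingFlags.Public | BindingFlags.Instance | BindingFlags.Static);

    // Prefer an environment-specific method, e.g. ConfigureProduction...
    var selectedMethods = methods.Where(m => m.Name.Equals(methodNameWithEnv)).ToList();
    if (selectedMethods.Count == 0)
    {
        // ...falling back to the general method, e.g. Configure.
        selectedMethods = methods.Where(m => m.Name.Equals(methodNameWithNoEnv)).ToList();
    }

    if (selectedMethods.Count > 1)
    {
        throw new InvalidOperationException(
            $"Having multiple overloads of method '{methodNameWithNoEnv}' is not supported.");
    }

    var methodInfo = selectedMethods.FirstOrDefault();
    if (methodInfo == null)
    {
        if (required)
        {
            throw new InvalidOperationException(
                $"A public method named '{methodNameWithEnv}' or '{methodNameWithNoEnv}' could not be found.");
        }
        return null;
    }

    // Validate the expected return type (Void for Configure, for example).
    if (returnType != null && methodInfo.ReturnType != returnType)
    {
        if (required)
        {
            throw new InvalidOperationException(
                $"The '{methodInfo.Name}' method must have a return type of '{returnType.Name}'.");
        }
        return null;
    }

    return methodInfo;
}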

This method will first work out the method name(s) it should be looking for on the Startup class, based on the methodName parameter passed to it. The convention is that the method which defines the middleware pipeline should be called Configure. There is a lesser-known convention here too: in addition to the standard Configure method, you can choose to include environment-specific version(s) in your Startup class. By convention, if a Configure{EnvironmentName} method is found (e.g. ConfigureProduction) for the current environment, it will be used in preference to the general Configure method.

In our sample, we only have the standard Configure method defined. Reflection is used to find a method matching the expected name on our Startup type. There are various checks in place to ensure that the expected members on our class are valid (i.e. we don’t have more than one Configure method defined with the same name) and that it has the expected Void return type.

Once we have the MethodInfo for the matching method, it is passed as the parameter into the constructor for a new ConfigureBuilder instance (full source on GitHub). This is stored in a variable in the LoadMethods method to be used a little later on.

A very similar process occurs to find the MethodInfo for ConfigureServices on our Startup class, which is stored in a local variable called servicesMethod. Finally, the same approach is used to look for a ConfigureContainerDelegate. This is an optional method which we can include on our Startup class to interact with third-party dependency injection containers such as AutoFac. We won’t look at this here.

Next, inside LoadMethods, a static ActivatorUtilities.GetServiceOrCreateInstance is called to get or create an instance of our Startup class. Here’s a compressed version of that LoadMethods method for reference (full source on GitHub):
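// Compressed from the 2.0.0 source – see the full source on GitHub.
public static StartupMethods LoadMethods(IServiceProvider hostingServiceProvider,
    Type startupType, string environmentName)
{
    var configureMethod = FindConfigureDelegate(startupType, environmentName);
    var servicesMethod = FindConfigureServicesDelegate(startupType, environmentName);
    var configureContainerMethod = FindConfigureContainerDelegate(startupType, environmentName);

    object instance = null;
    if (!configureMethod.MethodInfo.IsStatic || (servicesMethod != null && !servicesMethod.MethodInfo.IsStatic))
    {
        // Get or create an instance of our Startup class.
        instance = ActivatorUtilities.GetServiceOrCreateInstance(hostingServiceProvider, startupType);
    }

    // Callbacks which will invoke ConfigureServices/ConfigureContainer on the Startup instance.
    var configureServicesCallback = servicesMethod.Build(instance);
    var configureContainerCallback = configureContainerMethod.Build(instance);

    Func<IServiceCollection, IServiceProvider> configureServices = services =>
    {
        // (simplified – we'll look at the full lambda when it executes later)
        return configureServicesCallback.Invoke(services);
    };

    return new StartupMethods(instance, configureMethod.Build(instance), configureServices);
}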

As an implementation instance of IStartup is not currently stored in the DI container, GetServiceOrCreateInstance will create an instance of our Startup class by calling its constructor. In my sample (which matches the default Startup class in a new ASP.NET Core application template) it expects an IConfiguration object to be passed in. The DI framework will have access to an implementation for this and will inject it in for us. Here’s my Startup constructor for reference:
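public Startup(IConfiguration configuration)
{
    Configuration = configuration;
}

public IConfiguration Configuration { get; }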

Next the method on the ConfigureServicesBuilder is set as a callback variable. It gets passed the newly created Startup instance as its parameter. The same occurs to store a callback for ConfigureContainer. Next a Func<IServiceCollection, IServiceProvider> is setup using a lambda expression (which I’ve excluded from the code above for now). We’ll look at this when we see how this gets called a little later.

At this point, the callback delegates are passed into the constructor for a new StartupMethods instance which is then returned as the result of LoadMethods. This is then passed into the constructor for the new ConventionBasedStartup instance. At this point we have a concrete implementation of IStartup registered with the DI framework. Back inside the WebHost.EnsureApplicationServices method, the ConfigureServices method from the IStartup interface is called. Ready to start navigating some delegates!?

The ConfigureServices method on the ConventionBasedStartup instance calls the ConfigureServicesDelegate property on its StartupMethods member.

This executes the lambda-defined code from the StartupLoader.LoadMethods method, which in turn invokes the Func<IServiceCollection, IServiceProvider> delegate returned from the ConfigureServicesBuilder.Build method. That ultimately calls the private ConfigureServicesBuilder.Invoke method (full source on GitHub).

The Invoke method uses reflection to get and inspect the parameters required by the ConfigureServices method defined on our Startup class. By convention this method can be either parameterless or take a single parameter of type IServiceCollection.

If the ConfigureServices method on our Startup class expects the IServiceCollection parameter, this is set using the IServiceCollection which was passed into the Invoke method. Once the parameters are prepared, the method is invoked via reflection, and the returned value will either be void or an IServiceProvider. It’s at this point that we actually execute the code contained in the ConfigureServices method on our Startup class. Our class can use the IServiceCollection extensions to register services and their implementations with the DI container.

At this point the lambda expression (in StartupLoader) wants to return a ServiceProvider. If our Startup.ConfigureServices method returned an IServiceProvider directly, this then gets returned immediately. If not, an IServiceProviderFactory is requested from the hostingServiceProvider and used to construct the ServiceProvider. This is the application level ServiceProvider that will be used to resolve dependencies in our code base.

The final point I’d like to show within WebHost.BuildApplication is how the final RequestDelegate is built. We won’t cover this in depth here, but in short, the RequestDelegate is defined as “A function that can process an HTTP request.” This is what the framework will actually use to process each request through our application. It will be set up to include all of the middleware defined in our application’s pipeline.

The relevant code inside WebHost.BuildApplication is (full source on GitHub):
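// Paraphrased from WebHost.BuildApplication in the 2.0.0 source.
var builderFactory = _applicationServices.GetRequiredService<IApplicationBuilderFactory>();
var builder = builderFactory.CreateBuilder(Server.Features);
builder.ApplicationServices = _applicationServices;

var startupFilters = _applicationServices.GetService<IEnumerable<IStartupFilter>>();

Action<IApplicationBuilder> configure = _startup.Configure;
foreach (var filter in startupFilters.Reverse())
{
    configure = filter.Configure(configure);
}

configure(builder);

return builder.Build();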

An IApplicationBuilderFactory is used to build up and finally surface our RequestDelegate. This is one of the services registered earlier in the WebHostBuilder.BuildCommonServices method.

The ApplicationServices property on the builder is set with the ServiceProvider that was just created. The next detail is something I’ll gloss over slightly as it goes a bit too far off the flow I want to explore. In short an IEnumerable of IStartupFilters may have been registered with the DI framework. In my sample I haven’t registered any so only the default AutoRequestServicesStartupFilter (full source on GitHub) will be returned from the ServiceProvider.

An Action<IApplicationBuilder> delegate variable is created, holding a wrapped set of Configure methods from each IStartupFilter, the final one being the delegate for our Startup.Configure method. At this point, the Configure chain is called, which first hits the AutoRequestServicesStartupFilter.Configure method. This holds our delegate chain as its next action, and so it will call down into the ConventionBasedStartup.Configure method. This will call the ConfigureDelegate on its local StartupMethods object.

Invoking that Action will call the private ConfigureBuilder.Invoke method (full source on GitHub) which looks like this:
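// Paraphrased from the 2.0.0 source.
private void Invoke(object instance, IApplicationBuilder builder)
{
    // Create a scope from which to resolve any services that the
    // parameters of our Configure method require.
    using (var scope = builder.ApplicationServices.CreateScope())
    {
        var serviceProvider = scope.ServiceProvider;
        var parameterInfos = MethodInfo.GetParameters();
        var parameters = new object[parameterInfos.Length];

        for (var index = 0; index < parameterInfos.Length; index++)
        {
            var parameterInfo = parameterInfos[index];
            if (parameterInfo.ParameterType == typeof(IApplicationBuilder))
            {
                parameters[index] = builder;
            }
            else
            {
                parameters[index] = serviceProvider.GetRequiredService(parameterInfo.ParameterType);
            }
        }

        MethodInfo.Invoke(instance, parameters);
    }
}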

This will prepare to call our Startup.Configure method sending in the appropriate parameters which get resolved from the ServiceProvider. Our Configure method can add middleware into the application pipeline using the IApplicationBuilder. The final RequestDelegate is built and returned from the IApplicationBuilder and the WebHost initialisation then completes.

Summary

This has been quite a long and deeply technical post. If you’ve stuck with it, well done! I hope I’ve interpreted everything correctly and lifted the curtain on some of the “magic” behind the scenes that makes ASP.NET Core work. It was quite difficult to explain everything due to the layers of delegates involved here. Hopefully I did a good enough job for you to get the gist of things. I find it really useful to dig into the code like this and gain a better understanding of the internals. If you want to explore the code yourself, check out the ASP.NET Core Hosting repository on GitHub.

Other posts in this series

Visit the ASP.NET Core Anatomy Index post to see the other deep dives covered in this series.