ASP.NET Core Anatomy – How does UseStartup work? Exploring how UseStartup results in your Startup methods being registered and executed.

I was recently explaining to someone the basics of the program flow for an ASP.NET Core application. One of the things included in the templates for ASP.NET Core and used very often is the UseStartup<T> extension method on the IWebHostBuilder. This gets called from our Program.cs when initialising the application. UseStartup allows us to set the Startup class which defines the services and middleware pipeline for an ASP.NET Core application.

During my explanation I realised that while I know the result of calling this method, I didn’t know how things are wired up under the hood; so I decided to investigate!

NOTE 1: This content is valid as of the 2.0.0 release codebase. I don’t expect the fundamentals to change dramatically in the future, but I have seen some commits which tweak the code a little for 2.1!

NOTE 2: This is a deep dive blog post looking at internal ASP.NET Core code. You don’t need to know this to use ASP.NET Core to build applications – please don’t let this scare you off! This is intended for those of you who, like me, have a curious mind about the internals of ASP.NET Core. I’m conscious that this may get quite hard to follow as we get deep into the guts of the code; there’s a lot of use of delegates, which makes explaining the flow quite challenging. I’ll try my best to make it clear!

How are Startup methods registered and executed?

The generic UseStartup<TStartup> method calls down to the main IWebHostBuilder UseStartup(this IWebHostBuilder hostBuilder, Type startupType) extension, passing in the Type for the Startup class. That method looks like this (full source on GitHub):
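Here’s a simplified sketch of that method, based on my reading of the 2.0.0 source (the linked GitHub code is the authoritative version):

public static IWebHostBuilder UseStartup(this IWebHostBuilder hostBuilder, Type startupType)
{
    var startupAssemblyName = startupType.GetTypeInfo().Assembly.GetName().Name;

    return hostBuilder
        .UseSetting(WebHostDefaults.ApplicationKey, startupAssemblyName)
        .ConfigureServices(services =>
        {
            if (typeof(IStartup).GetTypeInfo().IsAssignableFrom(startupType.GetTypeInfo()))
            {
                // Startup implements IStartup directly, so register the type itself
                services.AddSingleton(typeof(IStartup), startupType);
            }
            else
            {
                // Otherwise register a factory which wraps the class in a ConventionBasedStartup
                services.AddSingleton(typeof(IStartup), sp =>
                {
                    var hostingEnvironment = sp.GetRequiredService<IHostingEnvironment>();
                    return new ConventionBasedStartup(
                        StartupLoader.LoadMethods(sp, startupType, hostingEnvironment.EnvironmentName));
                });
            }
        });
}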

The ConfigureServices method on the IWebHostBuilder is called, which expects an Action<IServiceCollection> parameter. In this case it’s defined as a lambda expression which acts on the IServiceCollection. The WebHostBuilder class has a private List<Action<WebHostBuilderContext, IServiceCollection>> field named _configureServicesDelegates. The call to ConfigureServices in the code above adds the lambda as a new item to this list, which will be used later to construct an instance of IStartup.
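For reference, a condensed sketch of how the WebHostBuilder stores those delegates (simplified from the 2.0.0 source) looks something like this:

private readonly List<Action<WebHostBuilderContext, IServiceCollection>> _configureServicesDelegates
    = new List<Action<WebHostBuilderContext, IServiceCollection>>();

public IWebHostBuilder ConfigureServices(Action<IServiceCollection> configureServices)
{
    // The simple overload wraps the delegate to match the context-aware signature of the list
    return ConfigureServices((_, services) => configureServices(services));
}

public IWebHostBuilder ConfigureServices(Action<WebHostBuilderContext, IServiceCollection> configureServices)
{
    _configureServicesDelegates.Add(configureServices);
    return this;
}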

At this stage a list of delegates will have been registered within the WebHostBuilder. The main process begins when Build() is called on the WebHostBuilder. I won’t cover everything that Build does, since it’s not all relevant to the scope of this post. Here’s the code of that method (full source on GitHub):
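A heavily condensed sketch of Build, with logging, configuration and error handling trimmed (see the linked source for the real thing):

public IWebHost Build()
{
    // Registers the common framework services and runs the _configureServicesDelegates list
    var hostingServices = BuildCommonServices(out var hostingStartupErrors);
    var applicationServices = hostingServices.Clone();
    var hostingServiceProvider = hostingServices.BuildServiceProvider();

    // The WebHost receives both the cloned service collection and a provider built from it
    var host = new WebHost(
        applicationServices,
        hostingServiceProvider,
        _options,
        _config,
        hostingStartupErrors);

    host.Initialize();

    return host;
}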

The Build method calls a private method BuildCommonServices (full source on GitHub) which, as the name suggests, will add some common framework services into the current ServiceCollection. I’ll skip over most of the code in this method. Unless we’ve changed the WebHostOptions to define different assemblies to load Startup from, the execution flow will eventually hit code which loops over the _configureServicesDelegates list, calling each delegate in turn. That piece of code looks like this:
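The loop itself is short – roughly:

foreach (var configureServices in _configureServicesDelegates)
{
    // Each registered delegate receives the builder context and the service collection being built up
    configureServices(_context, services);
}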

In the sample application which I used to debug through the Hosting codebase, I have two delegates registered: one from a FakeServer (needed so that the WebHostBuilder doesn’t throw an exception) and one from the UseStartup call. The first delegate simply registers the FakeServer as the implementation for IServer inside the ServiceCollection. The second delegate will now execute the lambda expression which was registered in the UseStartup method. Let’s remind ourselves what that looked like:
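Trimmed back down to the part that matters here, the registered lambda was (again, a sketch of the 2.0.0 code):

.ConfigureServices(services =>
{
    if (typeof(IStartup).GetTypeInfo().IsAssignableFrom(startupType.GetTypeInfo()))
    {
        services.AddSingleton(typeof(IStartup), startupType);
    }
    else
    {
        services.AddSingleton(typeof(IStartup), sp =>
        {
            var hostingEnvironment = sp.GetRequiredService<IHostingEnvironment>();
            return new ConventionBasedStartup(
                StartupLoader.LoadMethods(sp, startupType, hostingEnvironment.EnvironmentName));
        });
    }
});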

If our Startup class implements IStartup directly, it can and will be registered as the implementation type for IStartup directly. In my sample (which is based on the default ASP.NET Core templates) our Startup class does not implement IStartup and will rely on conventions instead. In this case an AddSingleton overload is used which takes a Func<IServiceProvider, object> as its implementation factory. This Func delegate will be called when the first concrete IStartup implementation is requested from the DI container.

At this point the registration of IStartup is included in the ServiceCollection and ready to be called by the framework. The WebHostBuilder.Build method continues to execute and constructs a new WebHost instance, which includes passing in an IServiceCollection (a clone of the current hostingServices variable). It also passes in a ServiceProvider, built using the current state of the hostingServices ServiceCollection. This represents the application services which have been registered so far by the framework.

Once we have a WebHost instance, its Initialize method is called. This calls down to a private BuildApplication method (full source on GitHub). Hold tight, lots of stuff starts to happen at this stage. I’ll try to pick out the parts I think are relevant to the use of our Startup class.

The BuildApplication method does some basic checks to make sure the relevant services are available for the application to start. One of those checks is that the ServiceProvider which was passed in includes an implementation for IStartup. This particular check happens inside a helper method called EnsureStartup().
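That helper is short; approximately:

private void EnsureStartup()
{
    if (_startup != null)
    {
        return;
    }

    // Resolving IStartup for the first time triggers the factory registered by UseStartup
    _startup = _hostingServiceProvider.GetRequiredService<IStartup>();
}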

The call to _hostingServiceProvider.GetRequiredService<IStartup>() will trigger the DI framework to construct an instance of IStartup as per its registration. Due to the use of delegates, we need to head back to the UseStartup method to look at the lambda we passed in for the Func<IServiceProvider, object> implementationFactory. As a reminder, here’s the service registration that was used:
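Here it is again, trimmed to just the factory:

services.AddSingleton(typeof(IStartup), sp =>
{
    var hostingEnvironment = sp.GetRequiredService<IHostingEnvironment>();
    return new ConventionBasedStartup(
        StartupLoader.LoadMethods(sp, startupType, hostingEnvironment.EnvironmentName));
});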

We can see that the factory returns a newly constructed ConventionBasedStartup instance as the implementation for IStartup. The ConventionBasedStartup constructor accepts a StartupMethods object (full source on GitHub) as its parameter. A static StartupLoader.LoadMethods method (full source on GitHub) is used to generate a StartupMethods instance. This object acts as a holder for three properties.
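A simplified sketch of StartupMethods (based on the 2.0.0 source) shows those three properties:

public class StartupMethods
{
    public StartupMethods(
        object instance,
        Action<IApplicationBuilder> configure,
        Func<IServiceCollection, IServiceProvider> configureServices)
    {
        StartupInstance = instance;
        ConfigureDelegate = configure;
        ConfigureServicesDelegate = configureServices;
    }

    // The instance of our Startup class (when conventions are used)
    public object StartupInstance { get; }

    // Wraps the call to our Startup.ConfigureServices method
    public Func<IServiceCollection, IServiceProvider> ConfigureServicesDelegate { get; }

    // Wraps the call to our Startup.Configure method
    public Action<IApplicationBuilder> ConfigureDelegate { get; }
}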

The most important of these properties for this discussion are the delegates for ConfigureServices and Configure. These will be set up with the code which should execute when the framework calls those methods further down in the WebHost initialisation. Ultimately the code for these delegates is expected to execute the methods on our Startup class.

The first delegate it will try to find is the ConfigureDelegate. This will be used to build the middleware pipeline for the application. Internally the StartupLoader uses a helper method called FindMethod to do most of the work. This is called from a FindConfigureDelegate method (full source on GitHub). FindMethod looks like this (full source on GitHub):
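Here’s a condensed sketch of FindMethod (error handling and some validation trimmed; names and messages are approximate – see the linked source for the exact code):

private static MethodInfo FindMethod(Type startupType, string methodName, string environmentName,
    Type returnType = null, bool required = true)
{
    // methodName is a format string, e.g. "Configure{0}" or "Configure{0}Services"
    var methodNameWithEnv = string.Format(CultureInfo.InvariantCulture, methodName, environmentName);
    var methodNameWithNoEnv = string.Format(CultureInfo.InvariantCulture, methodName, "");

    var methods = startupType.GetMethods(BindingFlags.Public | BindingFlags.Instance | BindingFlags.Static);

    // Prefer the environment-specific method, e.g. ConfigureProduction, falling back to Configure
    var selectedMethods = methods.Where(method => method.Name.Equals(methodNameWithEnv)).ToList();
    if (selectedMethods.Count == 0)
    {
        selectedMethods = methods.Where(method => method.Name.Equals(methodNameWithNoEnv)).ToList();
    }

    // Ambiguous matches, a missing required method or an unexpected return type all throw here (omitted)
    var methodInfo = selectedMethods.FirstOrDefault();
    if (methodInfo == null && required)
    {
        throw new InvalidOperationException(
            $"A public method named '{methodNameWithEnv}' or '{methodNameWithNoEnv}' could not be found " +
            $"in the '{startupType.FullName}' type.");
    }

    return methodInfo;
}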

This method will first work out the method name(s) it should be looking for on the Startup class based on the methodName parameter passed to it. The convention is that the method which defines the middleware pipeline should be called Configure. There is a lesser-known convention here too: in addition to the standard Configure method, you can include environment-specific version(s) in your Startup class. If a Configure{EnvironmentName} method (e.g. ConfigureProduction) is found for the current environment, that method will be used in preference to the general Configure method.

In our sample, we only have the standard Configure method defined. Reflection is used to find a method matching the expected name on our Startup type. There are various checks in place to ensure that the expected members on our class are valid (e.g. that we don’t have more than one Configure method defined with the same name) and that the method has the expected void return type.

Once we have the MethodInfo for the matching method, it is passed as the parameter into the constructor for a new ConfigureBuilder instance (full source on GitHub). This is stored in a variable in the LoadMethods method to be used a little later on.

A very similar process occurs to find and store the MethodInfo for ConfigureServices from our Startup class, which is stored in a local variable called servicesMethod. Finally, the same approach is used to look for a ConfigureContainerDelegate. This is an optional method which we can include on our Startup class to interact with third-party dependency injection containers such as Autofac. We won’t look at this here.

Next, inside LoadMethods, a static ActivatorUtilities.GetServiceOrCreateInstance is called to get or create an instance of our Startup class. Here’s a compressed version of that LoadMethods method for reference (full source on GitHub):
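Approximately (the body of the lambda expression is trimmed here; we’ll come back to it when it gets called):

public static StartupMethods LoadMethods(IServiceProvider hostingServiceProvider, Type startupType, string environmentName)
{
    var configureMethod = FindConfigureDelegate(startupType, environmentName);
    var servicesMethod = FindConfigureServicesDelegate(startupType, environmentName);
    var configureContainerMethod = FindConfigureContainerDelegate(startupType, environmentName);

    object instance = null;
    if (!configureMethod.MethodInfo.IsStatic || (servicesMethod != null && !servicesMethod.MethodInfo.IsStatic))
    {
        // Get or create an instance of our Startup class, injecting any constructor dependencies
        instance = ActivatorUtilities.GetServiceOrCreateInstance(hostingServiceProvider, startupType);
    }

    var configureServicesCallback = servicesMethod.Build(instance);
    var configureContainerCallback = configureContainerMethod.Build(instance);

    Func<IServiceCollection, IServiceProvider> configureServices = services =>
    {
        // Simplified: invokes our Startup.ConfigureServices and produces the application
        // IServiceProvider (the real lambda also handles IServiceProviderFactory / ConfigureContainer)
        return configureServicesCallback.Invoke(services) ?? services.BuildServiceProvider();
    };

    return new StartupMethods(instance, configureMethod.Build(instance), configureServices);
}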

As an implementation instance of IStartup is not currently stored in the DI container, GetServiceOrCreateInstance will create an instance of our Startup class by calling its constructor. In my sample (which matches the default Startup class in a new ASP.NET Core application template) it expects an IConfiguration object to be passed in. The DI framework will have access to an implementation for this and will inject it in for us. Here’s my Startup constructor for reference:
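It’s the standard template-generated constructor:

public Startup(IConfiguration configuration)
{
    Configuration = configuration;
}

public IConfiguration Configuration { get; }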

Next, a callback is created from the ConfigureServicesBuilder (via its Build method), passing in the newly created Startup instance as its parameter. The same occurs to store a callback for ConfigureContainer. Then a Func<IServiceCollection, IServiceProvider> is set up using a lambda expression (which I’ve excluded from the code above for now). We’ll look at this when we see how it gets called a little later.

The callback delegates are then passed into the constructor for a new StartupMethods instance, which is returned as the result of LoadMethods. This in turn is passed into the constructor for the new ConventionBasedStartup instance. At this point we have a concrete implementation of IStartup registered with the DI framework. Back inside the WebHost.EnsureApplicationServices method, the ConfigureServices method from the IStartup interface is called. Ready to start navigating some delegates!?

The ConfigureServices method on the ConventionBasedStartup instance calls the ConfigureServicesDelegate property on its StartupMethods member.
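In sketch form that’s little more than a pass-through (exception unwrapping omitted):

public IServiceProvider ConfigureServices(IServiceCollection services)
{
    // Forwards straight to the delegate built by StartupLoader.LoadMethods
    return _methods.ConfigureServicesDelegate(services);
}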

This executes the lambda defined in StartupLoader.LoadMethods, which in turn invokes the Func<IServiceCollection, IServiceProvider> delegate returned from the ConfigureServicesBuilder.Build method, which ultimately calls the private ConfigureServicesBuilder.Invoke method (full source on GitHub).

The Invoke method uses reflection to get and inspect the parameters required by the ConfigureServices method defined on our Startup class. By convention this method can be either parameterless or take a single parameter of type IServiceCollection.

If the ConfigureServices method on our Startup class expects the IServiceCollection parameter, this is set using the IServiceCollection which was passed into the Invoke method. Once the arguments are prepared via reflection, the method is invoked and the return value will either be void or an IServiceProvider. It’s at this point that the code contained in the ConfigureServices method of our Startup class actually executes. Our class can use the IServiceCollection extensions to register services and their implementations with the DI container.
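A condensed sketch of that Invoke method (parameter validation trimmed):

private IServiceProvider Invoke(object instance, IServiceCollection services)
{
    // ConfigureServices may be parameterless or take a single IServiceCollection parameter
    var parameters = MethodInfo.GetParameters();
    var arguments = new object[parameters.Length];

    if (parameters.Length > 0)
    {
        // (the real code validates that the single parameter really is an IServiceCollection)
        arguments[0] = services;
    }

    // Execute our Startup.ConfigureServices; it may return void or an IServiceProvider
    return MethodInfo.Invoke(instance, arguments) as IServiceProvider;
}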

At this point the lambda expression (in StartupLoader) wants to return a ServiceProvider. If our Startup.ConfigureServices method returned an IServiceProvider directly, this then gets returned immediately. If not, an IServiceProviderFactory is requested from the hostingServiceProvider and used to construct the ServiceProvider. This is the application level ServiceProvider that will be used to resolve dependencies in our code base.

The final point I’d like to show within WebHost.BuildApplication is how the final RequestDelegate is built. We won’t cover this in depth here, but in short, the RequestDelegate is defined as “A function that can process an HTTP request.” This is what the framework will actually use to process each request through our application. It will be set up to include all of the middleware defined in our application’s pipeline.

The relevant code inside WebHost.BuildApplication is (full source on GitHub):
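Approximately (simplified from the 2.0.0 source):

var builderFactory = _applicationServices.GetRequiredService<IApplicationBuilderFactory>();
var builder = builderFactory.CreateBuilder(Server.Features);
builder.ApplicationServices = _applicationServices;

var startupFilters = _applicationServices.GetService<IEnumerable<IStartupFilter>>();

// Start with our Startup.Configure (via ConventionBasedStartup) and wrap it
// with each registered IStartupFilter, in reverse order
Action<IApplicationBuilder> configure = _startup.Configure;
foreach (var filter in startupFilters.Reverse())
{
    configure = filter.Configure(configure);
}

// Run the wrapped chain of Configure methods against the builder, then
// produce the final RequestDelegate from the registered middleware
configure(builder);

return builder.Build();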

An IApplicationBuilderFactory is used to build up and finally surface our RequestDelegate. This is one of the services registered earlier in the WebHostBuilder.BuildCommonServices method.

The ApplicationServices property on the builder is set with the ServiceProvider that was just created. The next detail is something I’ll gloss over slightly as it goes a bit too far off the flow I want to explore. In short, an IEnumerable of IStartupFilter instances may have been registered with the DI framework. In my sample I haven’t registered any, so only the default AutoRequestServicesStartupFilter (full source on GitHub) will be returned from the ServiceProvider.

An Action<IApplicationBuilder> delegate variable is created holding a wrapped set of Configure methods from each IStartupFilter, the final one being the delegate for our Startup.Configure method. At this point, the Configure chain is called, which first hits the AutoRequestServicesStartupFilter.Configure method. This holds our delegate chain as its next action, and so it will call down into the ConventionBasedStartup.Configure method. That in turn calls the ConfigureDelegate on its local StartupMethods object.

Invoking that Action will call the private ConfigureBuilder.Invoke method (full source on GitHub) which looks like this:
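Again, a simplified sketch (error handling trimmed):

private void Invoke(object instance, IApplicationBuilder builder)
{
    // Configure must accept an IApplicationBuilder; any other parameters are
    // resolved from the application services within a scope
    using (var scope = builder.ApplicationServices.CreateScope())
    {
        var serviceProvider = scope.ServiceProvider;
        var parameterInfos = MethodInfo.GetParameters();
        var parameters = new object[parameterInfos.Length];

        for (var index = 0; index < parameterInfos.Length; index++)
        {
            var parameterInfo = parameterInfos[index];
            if (parameterInfo.ParameterType == typeof(IApplicationBuilder))
            {
                parameters[index] = builder;
            }
            else
            {
                parameters[index] = serviceProvider.GetRequiredService(parameterInfo.ParameterType);
            }
        }

        // This is where our Startup.Configure method actually runs
        MethodInfo.Invoke(instance, parameters);
    }
}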

This prepares the call to our Startup.Configure method, sending in the appropriate parameters which get resolved from the ServiceProvider. Our Configure method can add middleware into the application pipeline using the IApplicationBuilder. The final RequestDelegate is built and returned from the IApplicationBuilder, and the WebHost initialisation then completes.

Summary

This has been quite a long and deeply technical post. If you’ve stuck with it, well done! I hope I’ve interpreted everything correctly and lifted the curtain on some of the “magic” behind the scenes that makes ASP.NET Core work. It was quite difficult to explain everything due to the layers of delegates involved, but hopefully I did a good enough job for you to get the gist of things. I find it really useful to dig into the code like this and gain a better understanding of the internals. If you want to explore the code yourself, check out the ASP.NET Core Hosting repository on GitHub.

Other posts in this series

Visit the ASP.NET Core Anatomy Index post to see the other deep dives covered in this series.


Docker for .NET Developers (Part 1) – An introduction to Docker for .NET developers

Two words you are very likely used to hearing quite often within our community at the moment are “microservices” and “Docker”. Both are topics of great interest and are generating excitement for developers and architects. In this new series of blog posts I want to cover Docker: what it is, why it might be of interest and, specifically, what it means for .NET developers. As a little background, my experience with Docker started last year when we began building a new big data analytics system. The requirement was to gather many millions of events from hundreds of individual source systems into Elasticsearch, then provide a way for users to flexibly report over that data in real time.

Without going deep into specifics, we developed a system of queues, input processors and multiple back end services which analyse the source data. These services work with the data, aggregating it and shaping it for custom reporting, and we provide various API services used by a front end SPA written in Vue.js. In essence these are microservices, since each one is small and self-contained, providing a specific set of related functionality.

We knew we wanted to use ASP.NET Core for the API elements and very soon after that decision, we realised we could also take advantage of the cross platform nature of .NET Core to support easier development for our front end team on their Mac devices.

Historically the front end developers have had to use a Windows VM to work with our projects written in .NET. They pull the latest platform code to their devices, build it and run it so that they can work on the UI elements. This process has some overhead and has been something that we have wanted to streamline for some time. With this fresh project we were able to think about and implement improvements in the process.

The solution that we came up with was to provide Docker images of the back end services that the front end developers could quickly spin up on their development environments. They can do this without the need to run a Windows VM and the result has been a great productivity gain.

As containers and Docker were working so well in development, we also decided to use them for our build and deployment process onto the live environment. We are using AWS ECS (EC2 Container Service) to run the containers in the cloud, providing a scalable solution to host the various components.

This is our first time working with Docker and it has been an interesting learning experience for everyone involved. We still have more to learn, and I’m sure there are ways we can improve the process even further, but what we have now is already providing great gains for us.

What is Docker?

There are a lot of articles and videos available via a quick Google search that discuss what Docker is. To distil the essence, Docker is a containerisation technology and application platform that lets us package and deploy an application or service as an isolated unit containing all of its dependencies. In oversimplified terms, it can be thought of as a very lightweight, self-contained virtual machine.

Docker containers run on top of a shared OS kernel, but in an isolated way. They are very lightweight, which is where they offer an advantage over traditional VMs. You can often make better use of the host device(s) by running more containers and better sharing the underlying resources. Containers have a lighter footprint, containing only the minimum dependencies they require, and they can share the host resources more effectively.

A Docker image can be as small as a few hundred megabytes, and a container can start in a matter of seconds, or even fractions of a second. This makes containers great for scaling, since extra instances can be started very rapidly in response to scaling triggers such as a traffic increase or a growing queue. With traditional VM scaling you might wait a few minutes before the extra capacity comes online, by which time the load peak could have already caused some issues.

Key Concepts

As this is an introduction post I wanted to summarise some of the core components and terms that you will need to know when beginning to work with Docker.

Image

A Docker image can be considered a unit of deployment. Images are defined by dockerfiles and, once built, are immutable. To customise an image further you can use it as the base image within your next dockerfile. Typically you store built images in a container registry, which then makes them available for people to reference and run.

Container

A container is just a running instance of a Docker image. You start a container from an image using the docker run command, and once started your host will have an instance of that image running.

Dockerfile

A dockerfile is how Docker images and the deployment of an application are described. It’s a basic text file and you may only require a few lines to get started with your own image. Docker images are built up in layers: you choose a base image that contains the elements you need, then copy in your own application on top. Microsoft provide a number of images for working with .NET Core applications. I’ll look into the ones we use in future posts.

The nice thing about using a simple text file to describe the images is that it’s easy to include the dockerfile in your repository under source control. We include various dockerfiles in our solutions that enable slightly different requirements.

Docker Compose

A Docker Compose file is a basic way to orchestrate multiple images/containers. It uses the YAML format to specify one or more containers that make up a single system, or part of a system. Within this file you specify the images that need to be started, what they depend on, which ports they should expose on the host, etc. Using a single command you can build all of the images; with a second command you can tell Docker to run all of the containers.

We use a Docker Compose file for our front end developers. We define the suite of back end components that need to be running for them to exercise and develop against the API services. They can quickly spin these up with a single docker-compose command to get started.

Host

The host is the underlying OS on which you run Docker. Docker utilises shared OS kernel resources to run your containers. Until recently the host would always have been a Linux device, but Microsoft have now released Windows containers, so it’s possible to use a Windows device as a host for Windows-based images. In our case we still wanted to support running the containers on Mac devices, so we stayed with Linux-based images. There are a couple of solutions that enable using Linux images on Windows, which I’ll go into in more detail in the future. In both cases, you essentially run Linux as a VM which then acts as your host.

Summary

This was a short post, intended to introduce Docker and some of the reasons that we started to use it for our latest project. ASP.NET Core and .NET Core lend themselves perfectly to cross platform development, and the use of Linux-based Docker containers made sharing the back end components with our front end team a breeze. In the next posts I’ll go deeper into how we’ve structured our solutions and processes to enable the front end developers, as well as showing how to build an example project.

Other Posts In This Series

Part 1 – This Post
Part 2 – Docker for .NET Developers Part 2 – Our First dockerfile
Part 3 – Why we started using Docker with ASP.NET Core
Part 4 – Working with docker-compose and multiple ASP.NET Core microservices
Part 5 – Exploring ASP.NET Runtime Docker Images
Part 6 – Using Docker for Build and Continuous Deployment
Part 7 – Setting up Amazon EC2 Container Registry

Things I’ve Learnt This Week (19th February)

Week 4 of my series, sharing things I’ve learnt, read, watched and listened to, in the pursuit of expanding my knowledge about software development. A slightly shorter set of links this week, things have been fairly busy so I’ve had less time to keep up with all of the fantastic content.

Things I’ve Learned

That I enjoy technical speaking! This week, I delivered a talk to a room of developers about ASP.NET Core. It’s a popular topic, which I think generated the interest and high attendance. I’m quite shy and an introvert by nature, and I generally hate any form of public speaking, so presenting a talk is way out of my comfort zone. However, it’s something I’ve been keen to work on; it’s a great way to share information and helps me learn as well. I had the usual nerves leading up to the talk, but I’d practised and refined it over a number of rehearsals, to the point that I was confident in what I was delivering. The talk also included a 30 minute live demo, which thankfully worked perfectly thanks to many practice runs. As soon as I got going I started to feel better, and by the end I was on a bit of a natural high. I’ve had some very nice feedback from attendees and I’m encouraged to work on future talks, as well as hopefully sharing this one with a wider audience in the future. If other developers were to ask for my advice on preparing a technical talk, it would be: practice, practice, practice! Having rehearsed the full talk at least 10 times, I knew what I was going to say, and that allowed me to present confidently.

Things I’ve Read

Things I’ve Listened To

Things I’ve Watched

A Reminder to Take Care when Registering Dependencies

I looked into a “fun” little problem yesterday where we were seeing occasional errors in some of our ASP.NET Core code which calls down into the ASP.NET Core Identity UserManager. We were getting a range of NullReferenceExceptions and ObjectDisposedExceptions, as well as various exceptions from Npgsql.NpgsqlConnection (we use Postgres rather than SQL Server in this project) stating things such as “Connection already open”.

The issue was presenting itself within some authentication and authorisation code we have which wraps and extends the ASP.NET Core Identity UserManager and SignInManager functionality. We use ASP.NET Core Identity over a Postgres database for our user store, but include some application-specific functionality with our own code. We have our own AuthenticationManager and UserManager classes, both of which take dependencies on the underlying Microsoft.AspNetCore.Identity classes.

These originally got registered in the ConfigureServices method of our Startup class as follows:

services.AddSingleton<IAuthenticationManager, AuthenticationManager>();
services.AddSingleton<IUserManager, UserManager>();

The constructor for our AuthenticationManager looks a bit like this:

public AuthenticationManager(UserManager<ApplicationUser> userManager, SignInManager<ApplicationUser> signInManager)
{
   // setup our authentication manager here
}

Do you see the problem?

The issue here is the singleton registration. While our classes themselves have no state and could be shared between requests, the dependencies on Microsoft.AspNetCore.Identity.UserManager<T> and Microsoft.AspNetCore.Identity.SignInManager<T> have to be considered.

If we take a look at where these are registered within the Microsoft.AspNetCore.Identity source we can see the following:

services.TryAddScoped<UserManager<TUser>, UserManager<TUser>>();
services.TryAddScoped<SignInManager<TUser>, SignInManager<TUser>>();

They are added using the scoped lifetime, which means they expect to be created once per request. They themselves depend on an IUserStore which is registered with the scoped lifetime as well.

As a result, the singleton registration of our AuthenticationManager was trying to hang onto dependencies for the entire application lifetime, where those dependencies only expected to live for the request scope. Sometimes they seemed to get a different database context, hence the “Connection already open” errors we saw. Sometimes the dependencies had been disposed of by the time they got called by our code, and as such we saw the various exceptions being thrown. In some cases we seemed to still have access to the UserManager, but the context underneath was null. I won’t dive into this too deeply, but it was apparent that we should make the registration of our classes scoped as well. This way, they are created once per request, the same as their dependencies. We changed the registration code to the following:

services.AddScoped<IAuthenticationManager, AuthenticationManager>();
services.AddScoped<IUserManager, UserManager>();

This resolved the errors we were seeing immediately. It was an annoying oversight, although fortunately it was fairly easy to guess at the cause: the various errors all suggested that we were having issues with the lifetimes of the dependencies. This case has been a reminder to carefully consider the lifetimes of service registrations, as getting them wrong can produce some unexpected errors and behaviour.

The Microsoft documentation even includes a large warning about scoped services – “The main danger to be wary of is resolving a Scoped service from a singleton. It’s likely in such a case that the service will have incorrect state when processing subsequent requests.” – Oh how true this is!
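One extra safety net worth knowing about (not something we had relied on here, but it would have caught this) is the built-in scope validation, which throws at resolution time if a scoped service is resolved from the root (singleton) scope. It’s enabled by default in the Development environment in ASP.NET Core 2.0; a minimal sketch of forcing it on for all environments looks like this:

public static IWebHost BuildWebHost(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        // Surfaces scoped-from-singleton mistakes as an InvalidOperationException at resolve time
        .UseDefaultServiceProvider((context, options) => options.ValidateScopes = true)
        .UseStartup<Startup>()
        .Build();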

Happy dependency injecting!

Things I’ve Learnt This Week (5th February)

I’m keeping up my plan (week 2 yay!) to record and share things I’ve learned during the last week. Less from me this week as it’s been fairly hectic and I’ve been feeling a bit unwell.

Things I’ve Learned

Identity Server 4

As part of my work for Humanitarian Toolbox I’ve been actively investigating Identity Server 4 as an option to handle our authentication. The allReady application currently uses ASP.NET Core Identity within the application to support login and user management. As the product nears its v1 release, discussions have begun around the use cases for the application, including the potential for multi-tenancy and considerations around storage of user accounts. This led Richard Campbell to suggest we look into Identity Server 4 to help with this identity flow.

I was lucky enough to have an audience with Brock Allen and Dominick Baier this week to chat through some of the basics of Identity Server. It was really useful to chat with them both and I want to thank them for offering their time and support to the project. One of the key takeaways for me from the call was getting a better understanding of where Identity Server fits into the puzzle. It’s about authentication and helping with the OAuth2 / OpenID Connect protocols. It’s not a user management / user store product, although it does sit nicely on top of ASP.NET Core Identity 3 as an option.

As I continue investigating and testing Identity Server 4, I hope to put together some more detailed posts about how we’re using it and what I learn along the way.

Things I’ve Read

In no particular order here’s some of the blogs and posts that I’ve read this week.

Things I’ve Listened To

Things I’ve Watched