Docker for .NET Developers Header

Docker for .NET Developers (Part 2) Taking a look at our first dockerfile and building an image for an ASP.NET Core API service

In the first post in the series I introduced a few concepts and key terms you’ll need to know if you want to work with Docker. In this post I will cover the steps needed to begin working with Docker in your development environment. We’ll look at creating a very basic sample API service and a dockerfile that defines an image which can run the application inside a Linux container.

Getting Docker

Before we can begin using Docker we need to install it. There are a couple of options if, like me, you're developing on Windows.

Docker for Windows

If you’re running Windows 10 Pro, your best option is to download and install Docker for Windows. You can get it from

Docker for Windows supports both Windows and Linux containers. To follow along, you'll need to ensure that Docker is running in Linux mode, which is what I'll be using in my initial examples. To switch between the modes you can right-click the Docker task tray icon…

Switch between Linux and Windows with Docker for Windows

When running with Linux containers, Docker will start a Linux VM for you inside Hyper-V. Once Docker for Windows is running you can use PowerShell to run Docker commands that will be passed through to the Linux host.

Docker Toolbox

If you don’t have Windows 10 Professional, don’t worry, there is another option. Docker also provides Docker Toolbox, which includes VirtualBox, a virtualisation product from Oracle that can be installed freely. You can use this on earlier versions of Windows as well as Windows 10 Home.

You can download and install Docker Toolbox from

Docker Toolbox will create and load a small Linux VM for you inside VirtualBox which will then become the host for Docker. This does add a layer of complexity as you may need to configure port forwarding from the VirtualBox VM host out into your Windows environment. You end up with another layer to manage, but once you’re up and running it’s fairly easy to work with.

Once you have it installed you can run the Docker Quickstart Terminal shortcut to start the Linux VM and attach to it. Once that loads you can run Docker commands on the VM from the bash shell.

Our First Dockerfile

To demo the process of manually creating a dockerfile I’m going to build up a small sample API application. Inside a new empty directory I have created an ASP.NET Core 1.1 API project. This is just a default API project from the Visual Studio templates. I like to structure my solutions in a similar way to the Microsoft repositories so I do move a few things around. In my root folder I have my solution file. I then have a “src” folder, inside which I include any projects that are part of the solution.

With a basic solution in place, I like to create a “docker” solution folder inside Visual Studio and inside that I create a new text file named dockerfile (without an extension). I locate the dockerfile physically in the root of my solution, alongside the sln file.

Folder structure of our sample docker solution

We can now create our first dockerfile by editing the empty file. I work with it via Visual Studio, but it’s just a plain text file so you can use any editor you like. For now we’re going to create a naïve and quite basic dockerfile to demonstrate some of the main concepts. In later posts I’ll be showing a more optimal way to build and structure our dockerfile(s). The initial dockerfile looks like this:

FROM microsoft/aspnetcore-build:1.1

Docker images are like onions, layered up from multiple base images. Each new image builds on top of the previous one until a complete image is built, containing all of the components it needs. As a result, every dockerfile you produce will start with a FROM statement, which defines its base image.

In this example I’m using a Microsoft-maintained image for ASP.NET Core called aspnetcore-build. This particular image includes the .NET Core SDK to enable building/publishing the code. I also specify a tag after the colon; images can be given one or more tags. This tag specifies that I want the image containing the 1.1 SDK for ASP.NET Core. If I did not include a specific tag, I would get the image tagged latest, which at the time of writing is the 1.1.x stream.

These base public images are hosted online at Docker Hub, a Docker registry that is commonly used for hosting public images. A registry like this can be thought of in similar terms to NuGet for .NET packages. Docker knows to search the Docker Hub registry for any images you specify. If you don’t already have a copy of the image cached locally, it will be pulled from the registry.


WORKDIR /app

The next line sets our working directory inside the image we are building. Any actions we perform will affect that working directory. As each line of the dockerfile is executed, it creates a new intermediate image, building up layers until you have your final image. We’ll look at this in more detail in a future post to see how we can optimise the layering to reduce the number of intermediate images and the time it takes to build a new image.

COPY . .

Next we perform a copy command. When copying, you specify the source (on the host) and the destination (in the image) of the files to copy. Here both are periods: the source is the path on the host where the Docker build is being executed, which will be the root of our solution directory, and the destination is the current working directory in the image, which in our case is /app.
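
One thing worth noting: COPY . . sends everything in the build context into the image, including any local build output. Docker supports a .dockerignore file, placed alongside the dockerfile, to exclude files from the context. A minimal sketch, assuming the usual bin and obj output folders, might be:

```
# .dockerignore — keep build output and editor files out of the build context
**/bin/
**/obj/
.vs/
*.user
```

Trimming the context this way also speeds up the "Sending build context to Docker daemon" step you'll see in the build output later.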

RUN dotnet restore
RUN dotnet build

Next we execute two RUN commands. The first runs dotnet restore, which will perform a package restore from NuGet for all dependencies of our solution. The second runs dotnet build to produce the default application build.

WORKDIR /app/src/DockerDotNetDevsSample1

Next we switch the working directory to the folder containing the project file we copied in.

ENTRYPOINT dotnet run

Finally we define the entry point for the image. This is the instruction telling the image how to start the process that it will run for us. In this case we tell it to execute the dotnet run command, which will start up the ASP.NET Core API, hosted on Kestrel, and begin listening. By default the base aspnetcore image sets an environment variable that tells the web host to listen on port 80 within the container.
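
Putting the pieces together, here is the complete dockerfile we've just walked through:

```dockerfile
FROM microsoft/aspnetcore-build:1.1
WORKDIR /app
COPY . .
RUN dotnet restore
RUN dotnet build
WORKDIR /app/src/DockerDotNetDevsSample1
ENTRYPOINT dotnet run
```

Seven instructions in total, which is why the build output below reports seven steps.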

Building the image

Now that we have a dockerfile defining our image we can use the Docker commands to create the image for us. On Windows when running Docker for Windows we can run the Docker commands directly from a PowerShell window. I opened up PowerShell and navigated to the root of our sample solution.

From there I run the build command:

docker build -t sample1 .

The inclusion of the -t option allows me to specify a tag for the image which will make working with it easier later on. The dot (period) at the end of the statement is important: it tells Docker which directory to use as the build context. As I’m in the solution root already, and my dockerfile is located there, I can use a dot to represent the current location.

Docker will now begin to build my image.

Powershell output of docker build

Why is it downloading so much?

As I touched on already, Docker is based on layers of images. Each dockerfile specifies a FROM image, which is its base image. Each layer makes a small immutable change which is then available as the basis of the next layer. The aspnetcore-build image is exactly the same: it is based on a dotnet image from Microsoft, and below that sit a few other layers until we get to the initial Debian Linux image. When we include the aspnetcore-build image in our FROM command it will pull down the required image layers from Docker Hub. Each of these is cached on the host machine so they can be quickly reused where possible. Some of these image layers will be very small as they make incremental changes to their base images. In my example I had explicitly cleared all of the images on my machine so I could show this first-time download of the images.

Docker Build Output

Here’s the full build output from my sample docker build:

PS E:\Software Development\Projects\DockerDotNetDevsSample1> docker build -t sample1 .
Sending build context to Docker daemon 1.993 MB
Step 1/7 : FROM microsoft/aspnetcore-build:1.1
1.1: Pulling from microsoft/aspnetcore-build
10a267c67f42: Pull complete
fb5937da9414: Pull complete
9021b2326a1e: Pull complete
5df21d865eab: Pull complete
e4db626d1d21: Pull complete
87b3f796757a: Pull complete
629d4f39b75b: Pull complete
21c29d072c6e: Pull complete
39d6d7136f1b: Pull complete
74021b8a9867: Pull complete
Digest: sha256:9251d6953ca2fccfee1968e000c78d90e0ce629821246166b2d353fd884d62bf
Status: Downloaded newer image for microsoft/aspnetcore-build:1.1
---> 3350f0076aca
Step 2/7 : WORKDIR /app
---> 93515c761d80
Removing intermediate container c78aa9397ee7
Step 3/7 : COPY . .
---> 8125a8d08325
Removing intermediate container 6d3db0a39d6a
Step 4/7 : RUN dotnet restore
---> Running in d0d8fa97f402
Restoring packages for /app/src/DockerDotNetDevsSample1/DockerDotNetDevsSample1.csproj...
Restoring packages for /app/src/DockerDotNetDevsSample1/DockerDotNetDevsSample1.csproj...
Installing System.IO.Pipes 4.0.0.
Installing System.Xml.XPath.XmlDocument 4.0.1.
Installing System.Resources.Writer 4.0.0.
Installing System.Runtime.Serialization.Xml 4.1.1.
Installing System.Diagnostics.TraceSource 4.0.0.
Installing Microsoft.NETCore.Jit 1.0.2.
Installing Microsoft.Build 15.1.548.
Installing Microsoft.Build.Tasks.Core 15.1.548.
Installing Microsoft.Build.Utilities.Core 15.1.548.
Installing Microsoft.Build.Framework 15.1.548.
Installing Microsoft.NETCore.Runtime.CoreCLR 1.0.2.
Installing Microsoft.NETCore.DotNetHostPolicy 1.0.1.
Installing Microsoft.Build.Runtime 15.1.548.
Installing Microsoft.NETCore.App 1.0.0.
Installing NuGet.Frameworks 3.5.0.
Installing Microsoft.Extensions.CommandLineUtils 1.0.1.
Installing Microsoft.VisualStudio.Web.CodeGeneration.Tools 1.0.0.
Restore completed in 3.87 sec for /app/src/DockerDotNetDevsSample1/DockerDotNetDevsSample1.csproj.
Installing Microsoft.AspNetCore.Cryptography.Internal 1.1.1.
Installing Microsoft.AspNetCore.DataProtection.Abstractions 1.1.1.
Installing Microsoft.DotNet.PlatformAbstractions 1.1.1.
Installing Microsoft.AspNetCore.Razor 1.1.1.
Installing Microsoft.AspNetCore.DataProtection 1.1.1.
Installing Microsoft.Extensions.DependencyModel 1.1.1.
Installing Microsoft.AspNetCore.ResponseCaching.Abstractions 1.1.1.
Installing Microsoft.AspNetCore.Authorization 1.1.1.
Installing Microsoft.AspNetCore.Mvc.Abstractions 1.1.2.
Installing Microsoft.Extensions.Globalization.CultureInfoCache 1.1.1.
Installing Microsoft.Extensions.Localization.Abstractions 1.1.1.
Installing Microsoft.AspNetCore.Razor.Runtime 1.1.1.
Installing Microsoft.AspNetCore.WebUtilities 1.0.0.
Installing Microsoft.Extensions.ObjectPool 1.0.0.
Installing Microsoft.Net.Http.Headers 1.1.1.
Installing Microsoft.AspNetCore.Antiforgery 1.1.1.
Installing Microsoft.Extensions.Logging.Debug 1.1.1.
Installing Microsoft.AspNetCore 1.1.1.
Installing Microsoft.ApplicationInsights.AspNetCore 2.0.0.
Installing Microsoft.AspNetCore.Mvc 1.1.2.
Installing Microsoft.AspNetCore.Server.Kestrel 1.1.1.
Installing Microsoft.Extensions.Logging.Console 1.1.1.
Installing Microsoft.Extensions.Configuration.EnvironmentVariables 1.1.1.
Installing Microsoft.Extensions.Configuration.Json 1.1.1.
Installing Microsoft.Extensions.Configuration.FileExtensions 1.1.1.
Installing Microsoft.AspNetCore.Routing 1.1.1.
Installing Microsoft.Extensions.WebEncoders 1.1.1.
Installing Microsoft.AspNetCore.Server.IISIntegration 1.1.1.
Installing Microsoft.AspNetCore.Html.Abstractions 1.1.1.
Installing Microsoft.AspNetCore.Hosting 1.1.1.
Installing Microsoft.AspNetCore.JsonPatch 1.1.1.
Installing Microsoft.AspNetCore.Cors 1.1.1.
Installing Microsoft.AspNetCore.Mvc.Core 1.1.2.
Installing Microsoft.AspNetCore.Diagnostics 1.1.1.
Installing Microsoft.Extensions.Options.ConfigurationExtensions 1.1.1.
Installing Microsoft.Extensions.Configuration 1.0.0.
Installing Microsoft.Extensions.DiagnosticAdapter 1.0.0.
Installing Microsoft.ApplicationInsights 2.2.0.
Installing Microsoft.Extensions.Configuration.Json 1.0.0.
Installing Microsoft.AspNetCore.Hosting 1.0.0.
Installing Microsoft.AspNetCore.Mvc.TagHelpers 1.1.2.
Installing Microsoft.AspNetCore.Mvc.Razor 1.1.2.
Installing Microsoft.AspNetCore.Mvc.Localization 1.1.2.
Installing Microsoft.AspNetCore.Mvc.DataAnnotations 1.1.2.
Installing Microsoft.AspNetCore.Mvc.Cors 1.1.2.
Installing Microsoft.AspNetCore.Mvc.Formatters.Json 1.1.2.
Installing Microsoft.AspNetCore.Mvc.ApiExplorer 1.1.2.
Installing Microsoft.AspNetCore.Mvc.ViewFeatures 1.1.2.
Installing Microsoft.Extensions.Configuration 1.1.1.
Installing Microsoft.Extensions.FileProviders.Physical 1.1.0.
Installing Microsoft.Extensions.ObjectPool 1.1.0.
Installing Microsoft.AspNetCore.Routing.Abstractions 1.1.1.
Installing Microsoft.AspNetCore.Http.Extensions 1.1.1.
Installing Microsoft.AspNetCore.Localization 1.1.1.
Installing Microsoft.AspNetCore.HttpOverrides 1.1.1.
Installing Microsoft.AspNetCore.Http 1.1.1.
Installing Microsoft.AspNetCore.WebUtilities 1.1.1.
Installing Microsoft.Extensions.Localization 1.1.1.
Installing Microsoft.AspNetCore.Diagnostics.Abstractions 1.1.1.
Installing Microsoft.Extensions.Configuration.Binder 1.1.1.
Installing Microsoft.Extensions.Configuration.FileExtensions 1.0.0.
Installing Microsoft.Extensions.Configuration.EnvironmentVariables 1.0.0.
Installing Microsoft.Extensions.Options 1.0.0.
Installing Microsoft.Extensions.Logging 1.0.0.
Installing Microsoft.Extensions.DependencyInjection 1.0.0.
Installing Microsoft.AspNetCore.Http 1.0.0.
Installing Microsoft.Extensions.FileSystemGlobbing 1.1.0.
Installing Microsoft.Extensions.FileProviders.Composite 1.1.0.
Installing Microsoft.AspNetCore.Mvc.Razor.Host 1.1.2.
Generating MSBuild file /app/src/DockerDotNetDevsSample1/obj/DockerDotNetDevsSample1.csproj.nuget.g.props.
Writing lock file to disk. Path: /app/src/DockerDotNetDevsSample1/obj/project.assets.json
Restore completed in 5.16 sec for /app/src/DockerDotNetDevsSample1/DockerDotNetDevsSample1.csproj.

NuGet Config files used:

Feeds used:

86 package(s) to /app/src/DockerDotNetDevsSample1/DockerDotNetDevsSample1.csproj
---> 3ad561c9b58d
Removing intermediate container d0d8fa97f402
Step 5/7 : RUN dotnet build
---> Running in ae5eb32e269f
Microsoft (R) Build Engine version 15.1.1012.6693
Copyright (C) Microsoft Corporation. All rights reserved.

DockerDotNetDevsSample1 -> /app/src/DockerDotNetDevsSample1/bin/Debug/netcoreapp1.1/DockerDotNetDevsSample1.dll

Build succeeded.
0 Warning(s)
0 Error(s)

Time Elapsed 00:00:02.80
---> 87dfa1483f4e
Removing intermediate container ae5eb32e269f
Step 6/7 : WORKDIR /app/src/DockerDotNetDevsSample1
---> de5e09dfdc89
Removing intermediate container 05bf88ae0454
Step 7/7 : ENTRYPOINT dotnet run
---> Running in 5c580412a46a
---> f04465a14c84
Removing intermediate container 5c580412a46a
Successfully built f04465a14c84
SECURITY WARNING: You are building a Docker image from Windows against a non-Windows Docker host. All files and directories added to build context will have '-rwxr-xr-x' permissions. It is recommended to double check and reset permissions for sensitive files and directories.

As you will see, the build process executes each line from our dockerfile in order. At each step a new intermediate container is created and the change from that command is applied as a new layer. You can see the output of the package restore occurring, for example, and the final build of the dotnet solution.

Running the container

Now that we have an image we can start up one or more containers using that image. A container is a running instance of an image. We can run the image using the following command:

docker run -p 8080:80 sample1

This tells Docker to run the image called sample1. Because we tagged our image with that name it’s easy for us to now start a container instance. Without a tag we would have had to use part of the randomly generated id instead. I also include a -p option which tells Docker that we want to expose a port from the running container through to the host. We define the port on the host that we want to use, and the port on the container we want to pass through. By default no ports are exposed which helps make containers secure.

Here is the output we see when running the above command:

PS E:\Software Development\Projects\DockerDotNetDevsSample1> docker run -p 8080:80 sample1
Hosting environment: Production
Content root path: /app/src/DockerDotNetDevsSample1
Now listening on: http://+:80
Application started. Press Ctrl+C to shut down.

Now that the container is running and a port has been mapped onto the host we can call the API. To test this I used Postman to build a request to send to the API, exposed to us on port 8080.

Postman testing of our Docker API service

The above command starts our container from our sample1 image but, as you will have noticed, we were joined to its terminal, so we saw its console output. That happens by default since docker run attaches to the container we are starting. This is fine for testing, but often we are not concerned with streaming the console output to our host.

A more common option is to start containers in detached mode:

docker run -d -p 8080:80 sample1

The -d option tells Docker that we want to start the container in detached mode. This means we won’t see the console output streamed from the container. It’s running, but in the background.

PS E:\Software Development\Projects\DockerDotNetDevsSample1> docker run -d -p 8080:80 sample1

We are shown the id for the container, which we can then use to run commands against it. For example, should we want to check what’s happening inside the container, we can use the logs command to show the latest console messages.

PS E:\Software Development\Projects\DockerDotNetDevsSample1> docker logs c6
Hosting environment: Production
Content root path: /app/src/DockerDotNetDevsSample1
Now listening on: http://+:80
Application started. Press Ctrl+C to shut down.
PS E:\Software Development\Projects\DockerDotNetDevsSample1>

When sending commands we can use a shortened id to target a container. You only need to supply enough characters to uniquely identify the container. As nothing else is running with an id starting with c6, we can shorten the id significantly.
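
To see why c6 is enough, here is a small shell sketch of the prefix rule (the container ids below are made-up values purely for illustration):

```shell
# Three hypothetical container ids
ids="c6356c24fb7d 9a14fe3d8c21 9a87bb02e1f0"

# "c6" matches exactly one id, so Docker would accept it as a target
printf '%s\n' $ids | grep -c '^c6'    # prints 1 — unambiguous
# "9a" matches two ids, so Docker would reject it as ambiguous
printf '%s\n' $ids | grep -c '^9a'    # prints 2 — ambiguous
```

Docker applies the same rule against the full ids of your containers: any prefix is accepted as long as it matches exactly one of them.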

To stop the container we can use the command “docker stop c6” which will instruct it to gracefully end the process and stop.


In this post we’ve looked at the options to install and run Docker on a Windows device. We then looked at creating a basic solution and including a dockerfile so that we could define a Docker image intended to run our application. Finally we looked at how we can start a container from the image and some common commands to work with it. In future posts I’ll expand on this further and show how we’re using docker-compose to orchestrate multiple containers during development. We’ll then look at how we can create a CI build pipeline using Jenkins and eventually look at using AWS ECS to host live services running in containers.  

If you want to try out the code above for yourself I have uploaded the source from this post to GitHub.

Other Posts In This Series

Part 1 – Docker for .NET Developers Introduction
Part 2 – This Post
Part 3 – Why we started using Docker with ASP.NET Core
Part 4 – Working with docker-compose and multiple ASP.NET Core microservices
Part 5 – Exploring ASP.NET Runtime Docker Images
Part 6 – Using Docker for Build and Continuous Deployment
Part 7 – Setting up Amazon EC2 Container Registry


Docker for .NET Developers (Part 1) An introduction to Docker for .NET developers

Two words you will very likely be used to hearing quite often within our community at the moment are “microservices” and “Docker”. Both are topics of great interest and are generating excitement for developers and architects. In this new series of blog posts I want to cover Docker, what it is, why it might be of interest and specifically look at what it means for .NET developers. As a little background; my experience with Docker started last year when we began building a new big data analytics system. The requirement was to gather many millions of events from hundreds of individual source systems into ElasticSearch, then provide a way for users to flexibly report over that data, in real time.

Without going deep into specifics, we developed a system of queues, input processors and multiple back end services which analyse the source data. These services work with the data, aggregating it, shaping it for custom reporting and we provide various API services used by a front end SPA UI written in Vue.js. In essence these are microservices since each one is small and self-contained, providing a specific set of related functionality.

We knew we wanted to use ASP.NET Core for the API elements and very soon after that decision, we realised we could also take advantage of the cross platform nature of .NET Core to support easier development for our front end team on their Mac devices.

Historically the front end developers have had to use a Windows VM to work with our projects written in .NET. They pull the latest platform code to their devices, build it and run it so that they can work on the UI elements. This process has some overhead and has been something that we have wanted to streamline for some time. With this fresh project we were able to think about and implement improvements in the process.

The solution that we came up with was to provide Docker images of the back end services that the front end developers could quickly spin up on their development environments. They can do this without the need to run a Windows VM and the result has been a great productivity gain.

As containers and Docker were working so well for use in development, we also decided to use them as our build and deploy process onto the live environment. We are using AWS ECS (EC2 Container Service) to run the containers in the cloud, providing a scalable solution to host the various components.

This is our first time working with Docker and it has been an interesting learning experience for everyone involved. We still have more to learn and I’m sure more ways we can improve the process even further but what we have now is already providing great gains for us.

What is Docker?

There are a lot of articles and videos available via a quick Google that discuss what Docker is. To try and distil the essence, Docker is a containerisation technology and application platform that lets us package and deploy an application or service as an isolated unit containing all of its dependencies. In oversimplified terms, it can be thought of as a very lightweight, self-contained virtual machine.

Docker containers run on top of a shared OS kernel, but in an isolated way. They are very lightweight, which is where they offer an advantage over traditional VMs. Because each container has a small footprint, containing only the minimum dependencies it requires, you can often make better use of the host device(s) by running more containers and sharing the underlying resources more effectively.

A Docker image can be as small as a few hundred megabytes and can start in a matter of a few seconds or even fractions of a second. This makes them great for scaling since extra containers can be started very rapidly in response to scaling triggers such as a traffic increase or growing queue. With traditional VM scaling you might have a few minutes wait before the extra capacity comes online, by which time the load peak could have caused some issues already.

Key Concepts

As this is an introduction post I wanted to summarise some of the core components and terms that you will need to know when beginning to work with Docker.


Images

A docker image can be considered a unit of deployment. Images are defined by dockerfiles and, once built, are immutable. To customise an image further you can use it as the base image within your next dockerfile. Typically you store built images in a container registry, which then makes them available for people to reference and run.


Containers

A container is just a running instance of a Docker image. You start an image using the docker run command, and once started your host will have an instance of that image running.


Dockerfiles

A dockerfile is how Docker images, and the deployment of an application, are described. It’s a basic text file and you may only require a few lines to get started with your own image. Docker images are built up in layers: you choose a base image that contains the elements you need, and then copy in your own application on top. Microsoft provide a number of images for working with .NET Core applications. I’ll look into the ones we use in future posts.
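
As a sketch of how few lines you may need, a minimal dockerfile that layers a pre-published application onto one of the Microsoft base images could look like this (the image tag, folder and dll names here are hypothetical):

```dockerfile
# Start from a Microsoft-provided base image (hypothetical tag)
FROM microsoft/aspnetcore:1.1
# Set the working directory inside the image
WORKDIR /app
# Copy the pre-published application output from the host
COPY ./publish .
# Start the application when a container runs
ENTRYPOINT ["dotnet", "MyApp.dll"]
```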

The nice thing about using a simple text file to describe the images is that it’s easy to include the dockerfile in your repository under source control. We include various dockerfiles in our solutions to meet slightly different requirements.

Docker Compose

A Docker Compose file is a basic way to orchestrate multiple images/containers. It uses the YML format to specify one or more containers that make up a single system, or part of a system. Within this file you specify the images that need to be started, what they depend on, which ports they should expose on the host, and so on. Using a single command you can build all of the images; with a second single command you can tell Docker to run all of the containers.

We use a Docker Compose file for our front end developers. We define the suite of back end components that need to be running for them to exercise and develop against the API services. They can quickly spin them up with a docker-compose up command to get started.
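
As a sketch, a compose file describing a pair of services might look something like this (the service and image names are hypothetical, not our actual components):

```yml
version: '2'
services:
  sample-api:
    image: sample1
    ports:
      - "8080:80"
  queue-processor:
    image: sample2
    depends_on:
      - sample-api
```

With a file like this in place, docker-compose build builds the images and docker-compose up starts all of the containers together.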


Hosts

The host is the underlying OS on which you will run Docker. Docker will utilise shared OS kernel resources to run your containers. Until recently the host would always have been a Linux device, but Microsoft have now released Windows containers, so it’s possible to use a Windows device as a host for Windows-based images. In our case we still wanted to support running the containers on Mac devices, so we stayed with Linux-based images. There are a couple of solutions to enable using the Linux images on Windows, which I’ll go into more detail about in the future. In both cases, you essentially run Linux as a VM which is then your host.


This was a short post, intended to introduce Docker and some of the reasons that we started to use it for our latest project. ASP.NET Core and .NET Core lend themselves perfectly to cross platform development, and the use of Linux based Docker containers made sharing the back end components with our front end team a breeze. In the next posts I’ll go deeper into how we’ve structured our solutions and processes to enable the front end developers as well as showing building an example project.

Other Posts In This Series

Part 1 – This Post
Part 2 – Docker for .NET Developers Part 2 – Our First dockerfile
Part 3 – Why we started using Docker with ASP.NET Core
Part 4 – Working with docker-compose and multiple ASP.NET Core microservices
Part 5 – Exploring ASP.NET Runtime Docker Images
Part 6 – Using Docker for Build and Continuous Deployment
Part 7 – Setting up Amazon EC2 Container Registry

ASP.NET Core Correlation IDs Writing a basic middleware library to enable correlation IDs on ASP.NET Core

On a recent project we needed to implement the concept of correlation IDs across some ASP.NET Core API services. In our scenario we have an API service called from a front end JavaScript application which will then call one or more secondary back end API services to gather data. This is becoming a pretty common requirement when we have the concept of “microservices” which communicate over HTTP.

Why do we need correlation IDs?

A problem that arises with having multiple services separated into distinct units is how we track the flow of a single user request through each of the individual services that might be involved in generating a response. This is particularly crucial for logging and diagnosing faults with the system as a whole. Once the request leaves the first API service and passes control onto the backend API service, we start to struggle to interpret the logs. A failure in the backend API will likely impact the front end API service but those might appear to be two unrelated errors. We need a way to view the entire flow from when the request hit the front end API, through to the backend API and back again.


This is where the concept of correlation IDs comes into play. A correlation ID is simply a unique identifier that is passed through the entire request flow and crucially is passed between the services. When each service needs to log something, it can include this correlation ID, thereby ensuring that we can track a full user request through from start to finish. We can parse all of the logs from each distinct part of the system, looking for the correlation ID. We can then form a full picture of the lifetime of that request and hopefully identify root causes of any errors.

In this post I’m going to discuss a very simple library that I wrote to help solve this problem. It’s a pretty simple implementation that fits the specific requirement we needed to achieve. I have some thoughts about extending it to make working with it a bit simpler. I’d like to expose the correlation ID via some kind of context class that allows you to more easily access the ID without the dependency on IHttpContextAccessor in places where the HttpContext is not already exposed. For now, let’s look at how this initial version works.

I hope this will serve as a nice real-world example of how it’s quite simple to build and package a middleware library that you can re-use across your own projects.

Correlation ID Options

As part of the solution I wanted to provide a mechanism to configure the middleware and so I followed the standard pattern used in ASP.NET Core to provide a class representing the configuration properties. My middleware is pretty straightforward so only has two options.

There is an option to control the name of the request header where the correlation ID will be passed between services. Both the sending API and receiving API need to use the same header name to ensure they can locate and pass on the correlation ID. I default this to “X-Correlation-ID” which is a standard name that I found for this type of header.

The second option controls whether the correlation ID is included in the HttpResponse. I included this for our scenario so that the final front end API can pass the ID down to the front end SPA code. This allows us to tie up any client side logging with the back end logs. This feature is enabled by default but can be disabled in the configuration.


The main part of the solution involves a piece of middleware and an extension method to provide an easy way to register the middleware into the pipeline. The structure of this code follows the patterns defined as part of ASP.NET Core. The middleware class looks like this:

The constructor accepts our configuration via the IOptions<T> concept which is part of the ASP.NET Core framework. We will use these options values to control the behaviour of the middleware.

The main Invoke method, which will be called by the framework, is where the “magic” happens.

First we check for an existing correlation ID coming in via the request headers. We check for a header matching the name configured in the options and if we find a matching value we set the TraceIdentifier property on the HttpContext. I take advantage of one of the improvements from C# 7 which means we can declare the out variable inline, rather than having to pre-declare it before the TryGetValue line.

If we do not find a correlation ID on the request header we just utilise the existing TraceIdentifier which is generated automatically.

The next block is optional, controlled by the IncludeInResponse configuration property. If true (the default) we use the OnStarting callback to allow us to safely include the correlation ID header in the response.

Extension Method

The final piece of code needed (although not absolutely required) is to include some extension methods to make adding the middleware to the pipeline easier for consumers. The code in that class looks like this:
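A sketch of those extension methods (reconstructed, so exact signatures may differ from the published package):

```csharp
public static class CorrelationIdExtensions
{
    public static IApplicationBuilder UseCorrelationId(this IApplicationBuilder app)
    {
        return app.UseCorrelationId(new CorrelationIdOptions());
    }

    public static IApplicationBuilder UseCorrelationId(this IApplicationBuilder app, string header)
    {
        return app.UseCorrelationId(new CorrelationIdOptions
        {
            Header = header
        });
    }

    public static IApplicationBuilder UseCorrelationId(this IApplicationBuilder app, CorrelationIdOptions options)
    {
        if (app == null) throw new ArgumentNullException(nameof(app));
        if (options == null) throw new ArgumentNullException(nameof(options));

        return app.UseMiddleware<CorrelationIdMiddleware>(Options.Create(options));
    }
}
```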

It provides a UseCorrelationId method (plus two overloads), which are extension methods on IApplicationBuilder. These register the CorrelationIdMiddleware and, where provided, accept either a custom header name or a complete CorrelationIdOptions object to control how the middleware behaves.

With this in place, all that a consumer of the library needs to do is include it in their project and then call the appropriate UseCorrelationId method from the Configure method in the startup class.
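For example (a minimal sketch; the overloads taking a header name or an options object work the same way):

```csharp
public void Configure(IApplicationBuilder app)
{
    // Register early in the pipeline so downstream middleware and
    // MVC see the correlation ID on the TraceIdentifier
    app.UseCorrelationId();

    app.UseMvc();
}
```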


Middleware presents a very easy way to build the logic required by the concept of correlation IDs into an ASP.NET Core application. With a few small components I was able to put together a basic solution to solve our immediate problem. This was my first time releasing my own library as open source and publishing a public NuGet package. I’ve decided to summarise the steps I took for that in my next blog post.

If you want to review the full source, you can check it out on GitHub.

If you want to pull in the NuGet package to your work, you can find it up on

Getting started with ASP.NET Core 2.0 Preview 1 A quick tour of the main structural changes

At Microsoft Build 2017 yesterday, the formal announcement of ASP.NET Core 2.0 and .NET Core 2.0 was made.

ASP.NET Core 2.0 preview announcement

.NET Core 2.0 preview announcement

For me it was no surprise that we were close with these announcements as I keep an eye on the ASP.NET Core repositories on GitHub where there have been preview 1 branches for a little while. Additionally, the team have shared some information on the weekly ASP.NET Community Standup shows that have helped us understand some of the changes coming our way.

In this post I want to focus in on a few changes to the basic structure of ASP.NET Core 2.0 applications which simplify the code needed to get started. Before looking at those elements, I felt it would be worth sharing the steps I took to get started with ASP.NET Core 2.0 Preview 1 on my development machine.

Getting started with ASP.NET Core 2.0 preview 1

The first step was to get the preview SDK for .NET Core 2.0 which can be safely installed alongside any prior 1.x SDK versions. The announcement post provides a link to download the SDK.

The next step was to install the SDK. Nice and easy!

Installing .NET Core 2.0 SDK

Initially I tried creating a project in the existing Visual Studio 2017 IDE. I did so by starting with an ASP.NET Core 1.1 project and simply updating it to target netcoreapp2.0 and the latest ASP.NET Core 2.0 packages. However, this was problematic since I couldn’t restore or build the solution inside Visual Studio. I did manage to do both of those tasks at the command line using dotnet restore and dotnet build. Finally, with dotnet run, I had a small sample API site up and running.

After querying the lack of Visual Studio support on Twitter, Damian Edwards confirmed that I would need to use the preview version of Visual Studio 2017 in order to work with ASP.NET Core 2.0 preview projects. So I went off to download and then install it. According to their website this is safe to install side-by-side with other versions. If you want to follow along, download Visual Studio 2017 preview from here.

Installing Visual Studio 2017 preview

Now that I had VS 2017 preview installed on my machine I decided to start fresh and try creating a new project directly inside the IDE. The new project dialog for an ASP.NET Core web project has been updated so I could simply choose to target that from the UI.

ASP.NET Core 2.0 project template

After a few moments I was up and running, working on a new ASP.NET Core 2.0 web application. A very painless experience, so well done and thanks to the team for the work they’ve done to get us started on this preview release.

A Quick Tour of ASP.NET Core 2.0 preview 1

Now that I have a default ASP.NET Core 2.0 project on my machine, I want to highlight a few of the initial changes that you’ll see.

Simplified CSPROJ

After the move back to csproj that came with Visual Studio 2017, the new structure for these project files has been a great improvement on the past. Gone are the ugly GUIDs and the requirement to specify all files. With this release, the team have simplified things a step further.

In the mainstream Visual Studio 2017 release, targeting ASP.NET Core 1.1 a default csproj file looks like this:
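Approximately like this (based on the default Web Application template; the exact package versions will vary with the point releases installed):

```xml
<Project Sdk="Microsoft.NET.Sdk.Web">

  <PropertyGroup>
    <TargetFramework>netcoreapp1.1</TargetFramework>
  </PropertyGroup>

  <ItemGroup>
    <!-- Each component is referenced individually under the "pay to play" model -->
    <PackageReference Include="Microsoft.AspNetCore" Version="1.1.1" />
    <PackageReference Include="Microsoft.AspNetCore.Mvc" Version="1.1.2" />
    <PackageReference Include="Microsoft.AspNetCore.StaticFiles" Version="1.1.1" />
    <PackageReference Include="Microsoft.Extensions.Logging.Debug" Version="1.1.1" />
  </ItemGroup>

</Project>
```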

In the latest version, we now have a single meta-package we reference to bring in all of the ASP.NET Core packages. This reduces a default csproj to just these few lines:
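A sketch of the preview 1 project file (the exact preview version suffix may differ):

```xml
<Project Sdk="Microsoft.NET.Sdk.Web">

  <PropertyGroup>
    <TargetFramework>netcoreapp2.0</TargetFramework>
  </PropertyGroup>

  <ItemGroup>
    <!-- A single meta-package replaces the long list of individual references -->
    <PackageReference Include="Microsoft.AspNetCore.All" Version="2.0.0-preview1-final" />
  </ItemGroup>

</Project>
```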

At first this concerned me a little. A key message behind the original ASP.NET Core release was a “pay to play” model. You could specify only the packages you actually needed in your application, no longer taking a dependency on a whole monolithic framework. You could bring in each version of each component that you actually used. This also resulted in a smaller footprint for your published application. My first impression was that we were taking a bit of a step back by instead referencing a single master package.

However after reading the announcement and chatting on Twitter, the consensus seems to be that this is not a problem. The SDK ships with the packages included with it, so you are never pulling these from NuGet directly. That had been a concern for me since at work we build our projects inside Docker containers. If each new build needed to pull all of the packages, even those we didn’t need, across the Internet, it could slow down our builds.

However, since they are local with the SDK, this is not a problem. I then had a concern that publishing an application would produce a large deployment, including binaries the application doesn’t even use. This also seems to be solved, as Microsoft now have a package trimming feature which they explain will exclude the binaries you are not using from your publish output. What remains to be seen is how the base packages can version truly independently, since the meta-package defines the specific versions it depends on. When you depend on a version of AspNetCore.All, you depend on the versions it defines. It seems that this might require a new SDK release each time the underlying packages change, so that you can work with a new version of the meta-package while still having the packages locally.

Generally though, this new simplified file should make it very easy for people working outside of VS 2017 to get up and running with a hand coded project file if they so wish.

Another change you’ll notice is that the TargetFramework is now netcoreapp2.0 which defines that this project will run on .NET Core 2.0.

Simplified Program.cs

Next up, Program.cs has been refined. To compare, in an ASP.NET Core 1.1 application the class looks like this:
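From memory of the default 1.1 template, approximately (using directives omitted):

```csharp
public class Program
{
    public static void Main(string[] args)
    {
        // Each piece of host configuration is chained explicitly
        var host = new WebHostBuilder()
            .UseKestrel()
            .UseContentRoot(Directory.GetCurrentDirectory())
            .UseIISIntegration()
            .UseStartup<Startup>()
            .Build();

        host.Run();
    }
}
```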

Now in 2.0 it looks like this:
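Approximately as follows (based on the preview 1 template):

```csharp
public class Program
{
    public static void Main(string[] args)
    {
        BuildWebHost(args).Run();
    }

    // Expression-bodied method; CreateDefaultBuilder wires up Kestrel,
    // configuration, logging and IIS integration behind the scenes
    public static IWebHost BuildWebHost(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .UseStartup<Startup>()
            .Build();
}
```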

This revised version uses C# 6 expression-bodied member syntax to define a method called BuildWebHost. That method calls the new static WebHost.CreateDefaultBuilder method, which provides a shortcut to get a configured IWebHostBuilder. Behind the scenes it does a few things for us, saving a few lines of code. As per the source documentation, it will:

  1. Use Kestrel as the web server
  2. Set the ContentRootPath
  3. Load configuration from the default locations such as appsettings.json, user secrets and environment variables. Note that configuration used to be defined inside the constructor of the Startup.cs file
  4. Configure the ILoggerFactory to log to the console and debug output. This also used to live in Startup.cs, often defined in the Configure method
  5. Enable IIS integration
  6. Add the developer exception page when running in the development environment

If you want to look at the full source for the WebHost class, which lives inside the MetaPackages repository, you can take a look at the code up on GitHub.

Simplified Startup.cs

Startup.cs has also been simplified, mostly as a result of moving the logging and configuration code into the WebHost builder.

Previously a default ASP.NET Core 1.1 project would look like this:
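Approximately like this (based on the default Web Application template; using directives omitted):

```csharp
public class Startup
{
    public Startup(IHostingEnvironment env)
    {
        // Configuration sources are wired up by hand in the constructor
        var builder = new ConfigurationBuilder()
            .SetBasePath(env.ContentRootPath)
            .AddJsonFile("appsettings.json", optional: false, reloadOnChange: true)
            .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true)
            .AddEnvironmentVariables();
        Configuration = builder.Build();
    }

    public IConfigurationRoot Configuration { get; }

    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMvc();
    }

    public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
    {
        // Logging is also set up manually here in 1.1
        loggerFactory.AddConsole(Configuration.GetSection("Logging"));
        loggerFactory.AddDebug();

        if (env.IsDevelopment())
        {
            app.UseDeveloperExceptionPage();
        }

        app.UseMvc();
    }
}
```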

Now it looks like this:
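A sketch of the 2.0 preview 1 equivalent (using directives omitted):

```csharp
public class Startup
{
    public Startup(IConfiguration configuration)
    {
        // Configuration arrives fully built via dependency injection
        Configuration = configuration;
    }

    public IConfiguration Configuration { get; }

    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMvc();
    }

    public void Configure(IApplicationBuilder app, IHostingEnvironment env)
    {
        // No logging setup required; that moved into WebHost.CreateDefaultBuilder
        if (env.IsDevelopment())
        {
            app.UseDeveloperExceptionPage();
        }

        app.UseMvc();
    }
}
```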

The constructor no longer takes an injected IHostingEnvironment, instead taking an IConfiguration dependency. With the configuration sources already defined, the constructor just sets the Configuration property so we can access it when needed.

ConfigureServices is unchanged, but Configure is slightly shorter since it no longer has to define the logging.


There you have it. A quick tour of the basic structural changes you can expect when you start working with ASP.NET Core 2.0. I’m pretty happy with the changes. My initial concerns around the AspNetCore.All package are hopefully unfounded and the shifting of logging and configuration into extensions on IWebHostBuilder makes sense. The startup file is nice and clean by default and pretty much all code we add will be specifically to include services we require and to setup our pipeline.

ASP.NET Core Anatomy (Part 4) – Invoking the MVC Middleware Dissecting and understanding the internals of ASP.NET Core

In the first three parts of this series I looked at what happens when we call AddMvcCore, AddMvc and UseMvc as part of the application startup. Once the MVC services and middleware have been registered in our ASP.NET Core application, MVC is now available to handle HTTP requests.

In this post I want to cover the initial steps that happen when a request flows into the MVC middleware. This is quite a complex area to split up and write about. I’ve broken it out into what I consider a reasonable flow through the code, ignoring for now certain branches of behaviour and details to constrain this post to something that I hope is digestible in one sitting. Where I gloss over deeper implementation details I will highlight these cases and will write about those elements in future posts.

As with my earlier blog posts, I’ve used the original project.json version (1.1.2) of the MVC source code, since I’ve still not found a reliable way to debug through the source of MVC while including source from the other components such as Routing.

So, let’s begin and look at how MVC matches a request through to an available route and finally to an action which can handle that request. As a quick refresher, the middleware pipeline, configured in the Startup.cs file of an ASP.NET Core application, defines the flow of request handling. Each middleware component will be invoked in order, until one of them determines it can provide a suitable response.

I’m using the MvcSandbox, included with the MVC solution to work with MVC while I write this series. The Configure method looks like this:
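Approximately (reconstructed from the 1.1.x MvcSandbox project; exact details may differ):

```csharp
public void Configure(IApplicationBuilder app)
{
    app.UseDeveloperExceptionPage();

    app.UseMvc(routes =>
    {
        // The named "default" conventional route referenced later in this post
        routes.MapRoute(
            name: "default",
            template: "{controller=Home}/{action=Index}/{id?}");
    });
}
```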

Assuming none of the earlier middleware handles the request, we reach the MVC pipeline and middleware, included in our pipeline with the UseMvc call. Once the request reaches the MVC pipeline, the middleware we hit is the RouterMiddleware. Its Invoke method looks like this:
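The following is an approximation of the 1.1.x RouterMiddleware.Invoke method (reconstructed from the Routing source, so details may differ slightly):

```csharp
public async Task Invoke(HttpContext httpContext)
{
    // Wrap the current HttpContext in a RouteContext for the routers to use
    var context = new RouteContext(httpContext);
    context.RouteData.Routers.Add(_router);

    await _router.RouteAsync(context);

    if (context.Handler == null)
    {
        _logger.RequestDidNotMatchRoutes();
        // No route matched; pass the request on down the pipeline
        await _next.Invoke(httpContext);
    }
    else
    {
        // Expose the route data to later middleware and handlers via a feature
        httpContext.Features[typeof(IRoutingFeature)] = new RoutingFeature
        {
            RouteData = context.RouteData,
        };

        await context.Handler(context.HttpContext);
    }
}
```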

The first thing that Invoke does is construct a new RouteContext passing in the current HttpContext object into the constructor.

The HttpContext is used to set a property on the RouteContext class, and then a new RouteData object is instantiated and set.

Back inside Invoke, the injected IRouter (in this case the RouteCollection that was created during the UseMvc setup) is added to a List of IRouter objects on the RouteContext.RouteData object. Something worth highlighting is that the RouteData object uses a late initialisation pattern for its collections, only allocating them when they are first accessed. This pattern shows the consideration for performance that has to be made inside a large framework such as ASP.NET Core.

For example, here is how the Routers property is defined:
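Approximately (reconstructed from the 1.1.x RouteData source):

```csharp
public IList<IRouter> Routers
{
    get
    {
        // Only allocate the backing list on first access
        if (_routers == null)
        {
            _routers = new List<IRouter>();
        }

        return _routers;
    }
}
```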

The first time this property is accessed, a new List will be allocated and stored in the backing field.

Back in Invoke, RouteAsync is called on the RouteCollection.

The first thing the RouteAsync implementation does on the RouteCollection is to create a RouteDataSnapshot. As the comment indicates, allocating a RouteData object is not something that should happen for every route being processed. To avoid that, the snapshot of the RouteData object is created once and this allows it to be reset for each iteration. This is another example of where the ASP.NET Core team have coded with performance in mind.

The snapshot is achieved by calling PushState on the RouteData class.

The first thing it does is create a List<IRouter>. To be as performant as possible, it only allocates a list if the private field containing the RouteData routers has at least one IRouter in it. If so, a new list is created with the correct size, to avoid internal Array.CopyTo calls occurring to resize the List's underlying array. Essentially, this method now has a copied instance of the RouteData's internal IRouter list.

Next a RouteDataSnapshot object is created. RouteDataSnapshot is defined as a struct. Its constructor signature looks like this:
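Reconstructed from the 1.1.x source, the signature is approximately:

```csharp
public RouteDataSnapshot(
    RouteData state,
    RouteValueDictionary dataTokens,
    IList<IRouter> routers,
    RouteValueDictionary values)
```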

The RouteCollection calls PushState with null values for all of the parameters. In cases where the PushState method is called with a non-null IRouter parameter, it gets added to the Routers list. Values and DataTokens are processed in the same way: if any are included in the PushState parameters, the appropriate items in the Values and DataTokens properties on RouteData are updated.

Finally, the snapshot is returned to RouteAsync inside the RouteCollection.

Next a for loop is started which will loop up to the Count property value. Count simply exposes the Count of the Routes (List<IRouter>) on the RouteCollection.

Inside the loop it first gets a route using the current index value. This calls into an indexer on the RouteCollection.
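Reconstructed from the 1.1.x source, that indexer is approximately:

```csharp
public IRouter this[int index]
{
    get { return _routes[index]; }
}
```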

So this simply returns an IRouter from the List for the required index. In the MvcSandbox example, the first IRouter in the list at index 0 is the AttributeRoute.

Inside a Try/Finally block, the RouteAsync method is called on the IRouter (the AttributeRoute). Ultimately what’s happening here is we’re hoping to find a suitable Handler (a RequestDelegate) that matches the route data.

We’ll look into more detail about what happens inside the AttributeRoute.RouteAsync method in a future post as there’s a lot happening there and at the moment we’ve not defined any AttributeRoutes within the MvcSandbox. In our case, because there are no AttributeRoutes defined, the Handler value remains null.

Inside the finally block, when the Handler is null, the Restore method is called on the RouteDataSnapshot. This method will restore the RouteData object back to its state when the snapshot was created. Because the RouteAsync method may have modified RouteData during processing, this ensures that we are back to an initial state for the object.

Within the MvcSandbox example, the second IRouter in the list is the named “default” route which is an instance of Route. This class does not override the RouteAsync method on the base, so in this case, the code inside the RouteBase abstract class is called.

First it calls a private method, EnsureMatcher which looks like this:
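Approximately (reconstructed from the 1.1.x RouteBase source):

```csharp
private void EnsureMatcher()
{
    // Defer creating the TemplateMatcher until its inputs are available
    if (_matcher == null)
    {
        _matcher = new TemplateMatcher(ParsedTemplate, Defaults);
    }
}
```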

This method will instantiate a new TemplateMatcher, passing in two parameters. Again, this appears to be an allocation optimisation to ensure that this object is only created at the point it is needed, after the properties which are passed to its constructor are available.

In case you’re wondering where those properties are set; that happens inside the constructor of the RouteBase class. This constructor was called when the default Route was first created as a result of calling MapRoute inside the UseMvc extension, called by the Configure method of the MvcSandbox Startup class.

Inside the RouteBase.RouteAsync method, the next call is to EnsureLoggers which looks like this:
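Approximately (reconstructed from the 1.1.x RouteBase source):

```csharp
private void EnsureLoggers(HttpContext context)
{
    if (_logger == null)
    {
        // Resolve the logger factory from the request's service provider
        var loggerFactory = context.RequestServices.GetRequiredService<ILoggerFactory>();
        _logger = loggerFactory.CreateLogger(typeof(RouteBase).FullName);
        _constraintLogger = loggerFactory.CreateLogger(typeof(RouteConstraintMatcher).FullName);
    }
}
```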

This method gets the ILoggerFactory instance from the RequestServices and uses it to setup two ILoggers. The first is for the RouteBase class itself and the second will be used by the RouteConstraintMatcher.

Next it stores a local variable holding the path of the request being made, retrieved from the HttpContext.

Next, TryMatch on the TemplateMatcher is called, passing in the path of the request, along with any route data values. We’ll dive into the internals of TemplateMatcher in another post. For now, assume that TryMatch returns true, which is the case in our example. In cases where a match is not made a TaskCache.CompletedTask is returned. This is just a Task that is already set as completed.

Next, if any DataTokens (RouteValueDictionary objects) are set, the values on context.RouteData.DataTokens are updated as necessary. As the comment suggests, this is only done if there are actually any values to update. The DataTokens property on RouteData is only created when it's first accessed (lazily instantiated), so calling it when there are no values to update would risk allocating a new RouteValueDictionary when it may not be needed.

In our demo using MvcSandbox, there are no DataTokens so MergeValues is not called. However, for completeness, here is its code:
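Approximately (reconstructed from the 1.1.x RouteBase source):

```csharp
private static void MergeValues(
    RouteValueDictionary destination,
    RouteValueDictionary values)
{
    foreach (var kvp in values)
    {
        if (kvp.Value != null)
        {
            // "Merge" is the correct behaviour here: matching keys are overwritten
            destination[kvp.Key] = kvp.Value;
        }
    }
}
```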

When called, it loops over any values in the RouteValueDictionary from the DataTokens property on the RouteBase class and updates the value for the matching key on the context.RouteData.DataTokens property.

Next, back in RouteAsync, RouteConstraintMatcher.Match is called. This static method loops over any IRouteConstraint instances that are passed in and determines whether they have all been met. Route constraints allow routes to be configured with additional matching rules; for example, a route parameter can be constrained to only match integer values. In our case there are no constraints on our demo route, so it returns true. We’ll look at a URL with constraints in another post.

A log entry is made via an extension method on ILogger called MatchedRoute. This is an interesting pattern that makes it simple to reuse more complex formatting of logging messages for specific needs.

This class looks like this:
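A sketch of the class (reconstructed; the exact event ID and message text may differ):

```csharp
internal static class TreeRouterLoggerExtensions
{
    // Cached delegate defining how the matched-route message is formatted
    private static readonly Action<ILogger, string, string, Exception> _matchedRoute;

    static TreeRouterLoggerExtensions()
    {
        _matchedRoute = LoggerMessage.Define<string, string>(
            LogLevel.Debug,
            eventId: 1,
            formatString: "Request successfully matched the route with name '{RouteName}' and template '{RouteTemplate}'.");
    }

    public static void MatchedRoute(this ILogger logger, string routeName, string routeTemplate)
    {
        _matchedRoute(logger, routeName, routeTemplate, null);
    }
}
```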

An Action delegate is defined when the TreeRouterLoggerExtensions class is first constructed, which defines how the logged message will be formatted.

When the MatchedRoute extension method is called, the route name and template string are passed as parameters. They are then passed to the _matchedRoute Action, which creates a debug-level log entry using the provided parameters. When debugging inside Visual Studio you will see this appear in the Output window.

Back inside RouteAsync, the OnRouteMatched method is called. This is defined as an abstract method on RouteBase, so the implementation comes from the inheriting class; in our case that is the Route class. Its OnRouteMatched override looks like this:
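Approximately (reconstructed from the 1.1.x Route source):

```csharp
protected override Task OnRouteMatched(RouteContext context)
{
    // _target is the MvcRouteHandler in the default MVC configuration
    context.RouteData.Routers.Add(_target);
    return _target.RouteAsync(context);
}
```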

Its IRouter field called _target is added to the context.RouteData.Routers list. In this case it’s an MvcRouteHandler which is the default handler for MVC.

The RouteAsync method is then called on the MvcRouteHandler. The details of what this does are quite involved so I’ll save those to be the topic of a future post. In summary, MvcRouteHandler.RouteAsync will try to establish a suitable action method that should handle the request. One thing that is important to know for the context of this blog post is that when an action is found, the Handler property on the RouteContext is defined via a lambda expression. We can dive into that code another time but to summarise, a RequestDelegate is a function that accepts a HttpContext and can process a request.

Back inside the Invoke method on the RouterMiddleware, we might have a handler that MVC has determined can handle the requested route. If not then _logger.RequestDidNotMatchRoutes() is called. This is another example of the logger extension style we explored earlier. It will add a debug message, indicating that the route was not matched. In that case, the next RequestDelegate in the ASP.NET middleware pipeline is invoked because MVC has determined it cannot handle the request.

In common configurations for web/API applications you will not have any middleware defined after UseMvc. In that case, as we have reached the end of the pipeline, a default 404 Not Found HTTP status code response is returned by ASP.NET Core.

In cases where we do have a handler that can handle the route of the request, we flow into the Invoke method's else block.

Here a new RoutingFeature is instantiated and added to the Features collection on the HttpContext. Briefly, features are a concept in ASP.NET Core that allows the server to define the features a request supports, including flowing data through the lifetime of a request. Middleware such as the RouterMiddleware can add to or modify the features collection and can use this as a mechanism to pass data through the request. In our case, the RouteData from the RouteContext is added as part of the IRoutingFeature definition, so that it can be used by other middleware and request handlers.

The method then invokes the Handler RequestDelegate which will ultimately process the request via the appropriate MVC action. At this point I’ll wrap up this post. What happens next, along with some of the other items I skipped over will form the next parts in this series.


We’ve seen how MVC is actually invoked as part of the middleware pipeline. Upon invocation, the MVC RouterMiddleware determines if MVC knows how to handle the incoming request path and values. If MVC has an action available that is intended to handle the route and route data on the request, then this handler is used to process the request and provide the response.

Other Posts In This Series