
Docker for .NET Developers (Part 5) – Exploring ASP.NET Runtime Docker Images

In previous posts we've been looking at basic demo dockerfiles which use the aspnetcore-build base image. This is fine for testing but presents some issues for actual deployment.

One disadvantage of the build image is its size. Since it contains all of the elements needed to build .NET Core applications, it is fairly bloated and not something we would want to use as a unit of deployment. It contains things like the full .NET Core SDK (which itself includes MSBuild), Node.js, Grunt, Gulp and a package cache of pre-restored .NET packages. In all, this accounts for an image of around 1.2GB in size. You also have to consider the network traffic that pushing around such large Docker images will introduce. If you use an external container registry (we'll talk about those in a later post) such as Docker Hub, you will have to ship up the full size of the large SDK-based image each time something changes.

Dissecting the aspnetcore-build Image

While it's not really necessary to know the intricate details of how the aspnetcore-build image is composed, I thought it would be interesting to look a little at how it's put together. As I've described previously, Docker images are layered. Each layer generally adds one thing, or a set of related things, to an image. Layers are immutable, but you can build on top of previous layers, adding your own layer at the top. This is how you get your application into an image.

The ASP.NET Core build image is built up from a number of layers.

Layer 1

Starting from the bottom there is an official Docker image called scratch which is an empty base image.

Layer 2

The next layer is the Debian Linux OS. The .NET Core images are based on Debian 8, which is also known as Jessie. The image is named debian and also tagged with jessie. You can find the source files here.

Its dockerfile is pretty basic.
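In essence it contains just three lines, which at the time of writing look like this:

FROM scratch
ADD rootfs.tar.xz /
CMD ["bash"]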

It starts with the scratch base image and then uses the ADD statement to bring in the tarball containing the Debian root file system. One important thing to highlight here is the use of ADD rather than COPY. Previously in my samples we used COPY in our dockerfiles to copy contents from the source directory into a destination directory inside the image. ADD is similar, but with one important difference: it will decompress known tar archives. Since rootfs.tar.xz is a recognised archive type, its contents are extracted into the specified directory, laying down the core Debian file system. I downloaded this file and it's 117MB in size.

The final CMD ["bash"] line provides a default command that will run when the container first executes. In this case it runs the bash command. CMD is different from RUN in that it does not execute at build time, only at runtime.

Layer 3

The next layer is buildpack-deps:jessie-curl – Source files are here.
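Its dockerfile looks approximately like this:

FROM debian:jessie

RUN apt-get update && apt-get install -y --no-install-recommends \
    ca-certificates \
    curl \
    wget \
  && rm -rf /var/lib/apt/lists/*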

On top of the base image, this RUNs three commands. You'll notice each command is joined with an &&. Each RUN statement in a dockerfile results in a new intermediate image layer during the build. To avoid this in cases where we are doing related work, the commands can be strung together under a single RUN statement. This particular set of commands is a pretty common pattern and uses apt-get, a command line tool for working with application packages in Debian.

The structure it follows is listed in the Docker best practices as a way to ensure the latest packages are retrieved. apt-get update simply updates the package lists for new packages and available upgrades to existing packages. Combining it with the install under a single RUN is known as "cache busting", since it prevents Docker from reusing a cached layer containing stale package lists. It then installs three packages using apt-get install.

I had to Google a bit, but ca-certificates installs common certificate authority certificates, based on those that ship with Mozilla, which allow SSL applications to verify the authenticity of SSL connections. It then installs curl, a command line tool for transferring data using URL syntax. Finally, wget is a network utility used to retrieve files from the web over HTTP(S) and FTP.

The backslashes are another common convention in production dockerfiles. The backslash is a line continuation character that allows a single logical line to be split over multiple physical lines. It improves readability, and the pattern here puts each new package onto its own line so it's easier to see which individual packages will end up being installed. The apt-get command allows multiple packages to be specified, separated by spaces.

The final command removes anything in the /var/lib/apt/lists/ directory. This is where the updated package lists pulled down by apt-get update are stored. This is another good example of best practice: ensuring that no files remain in the image that are not needed at runtime helps keep the image size down.

Layer 4

The next layer is buildpack-deps:jessie-scm – Source files are also found here.
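Again the pattern is the same; approximately:

FROM buildpack-deps:jessie-curl

RUN apt-get update && apt-get install -y --no-install-recommends \
    bzr \
    git \
    mercurial \
    openssh-client \
    subversion \
    procps \
  && rm -rf /var/lib/apt/lists/*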

This layer uses a similar pattern to the layer before it to install some packages via apt-get. Most of these are packages for common version control applications, such as git. openssh-client installs a secure shell (SSH) client for secure access to remote machines, and the procps package provides utilities, such as ps and top, for inspecting the processes running on the system.

Layer 5

The next layer is the microsoft/dotnet layer, which includes the .NET Core SDK bits. The exact image will depend on which tag you choose, since there are many tagged versions for the different SDK versions. They only really differ in that they install the correct version for your requirements. I'll look at the 1.1.2-sdk tagged image. You can find the source here.

This dockerfile has a few comments which explain the high level steps, so I won't dive into everything. First the native dependencies required by the .NET Core CLI are installed via apt-get. Again this uses the pattern we've seen earlier.

Next the .NET Core SDK is downloaded as a tar.gz archive. This is extracted and the archive then removed. Finally it uses the Linux ln command to create a symbolic link at /usr/bin/dotnet pointing to /usr/share/dotnet/dotnet, which puts the dotnet command on the PATH.

The final section populates the local NuGet package cache by creating a new dotnet project and then removing it, along with any scratch files which aren't needed.
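Abridged, the dockerfile is shaped something like this (the dependency list and version numbers here are illustrative; the exact ones live in the source):

FROM buildpack-deps:jessie-scm

# Install the native dependencies required by the .NET Core CLI
RUN apt-get update \
  && apt-get install -y --no-install-recommends \
    libc6 libcurl3 libgcc1 libgssapi-krb5-2 libicu52 \
    liblttng-ust0 libssl1.0.0 libstdc++6 libunwind8 libuuid1 zlib1g \
  && rm -rf /var/lib/apt/lists/*

# Download and extract the .NET Core SDK, then link the dotnet
# binary into /usr/bin so it is on the PATH
ENV DOTNET_SDK_VERSION 1.0.4
RUN curl -SL "https://dotnetcli.blob.core.windows.net/dotnet/Sdk/$DOTNET_SDK_VERSION/dotnet-dev-debian-x64.$DOTNET_SDK_VERSION.tar.gz" --output dotnet.tar.gz \
  && mkdir -p /usr/share/dotnet \
  && tar -zxf dotnet.tar.gz -C /usr/share/dotnet \
  && rm dotnet.tar.gz \
  && ln -s /usr/share/dotnet/dotnet /usr/bin/dotnet

# Populate the local NuGet package cache by creating (and then
# discarding) a throwaway project
RUN mkdir warmup \
  && cd warmup \
  && dotnet new \
  && cd .. \
  && rm -rf warmup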

Layer 6

The final layer before you start adding your own application is the microsoft/aspnetcore-build image layer. Again, there are variations based on the different SDK versions. The latest 1.1.2 image source can be found here.

First it sets some default environment variables. You'll see, for example, that it sets ENV ASPNETCORE_URLS http://+:80, so unless you override the value, for example with the WebHostBuilder.UseUrls extension, your ASP.NET Core application will listen on port 80 inside the container.
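Because this is just an environment variable, you can also override it when starting a container, with no code changes. For example (the image name and port here are illustrative):

docker run -e ASPNETCORE_URLS=http://+:5000 -p 8080:5000 myimage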

The next two steps do some funky Node.js setup which I won't dive into.

Next it warms up the NuGet package cache. This time it uses a packagescache.csproj which, if you take a look, simply includes package references to all of the main ASP.NET Core related packages. It then calls dotnet restore, which downloads the packages into the package cache. It cleans up the warmup folder after this.

Finally it sets the working directory to the root path so that the image is clean to start building on in the next layer which will include your application.
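Pulling those steps together, the aspnetcore-build dockerfile is shaped roughly like this (the Node.js setup and the exact warmup mechanics are omitted or simplified here; see the source for the real thing):

FROM microsoft/dotnet:1.1.2-sdk

# Default ASP.NET Core applications to listen on port 80
ENV ASPNETCORE_URLS http://+:80

# (Node.js, Grunt and Gulp setup omitted)

# Warm up the NuGet package cache using a csproj that references
# the main ASP.NET Core packages
COPY packagescache.csproj /tmp/warmup/
RUN dotnet restore /tmp/warmup/packagescache.csproj \
  && rm -rf /tmp/warmup

WORKDIR /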

Runtime Images

Given the size of the build image, and the fact that there's no need to include the tooling used to build your application when you deploy it, it is much better practice to reduce the contents of your final image to make it as small as possible. It's also important to optimise it for rapid start-up times. That's exactly what the aspnetcore image is for. This image contains only the minimal .NET Core runtime and so results in a much smaller base image size of 316MB, about one quarter of the size of the build image! Because it doesn't include the SDK, it cannot run commands such as dotnet build and dotnet restore; it can only bootstrap compiled .NET Core assemblies.

Dissecting the microsoft/aspnetcore image

As we did with the build image, we’ll take a look at the differences in the runtime image.

Layers 1 and 2

The first two layers that make up the final aspnetcore image are the same as with the build image: scratch and debian:jessie. After the base debian:jessie layer, though, things differ.

Layer 3

This layer is named microsoft/dotnet and is tagged for the different runtime versions. I’ll look at the 1.1-runtime-deps tagged image which can be found here.

The dockerfile for this layer looks like this (the dependency list is reproduced approximately from the source):
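FROM debian:jessie

RUN apt-get update \
  && apt-get install -y --no-install-recommends \
    ca-certificates \
    libc6 \
    libcurl3 \
    libgcc1 \
    libgssapi-krb5-2 \
    libicu52 \
    liblttng-ust0 \
    libssl1.0.0 \
    libstdc++6 \
    libunwind8 \
    libuuid1 \
    zlib1g \
  && rm -rf /var/lib/apt/lists/*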

This installs the certificate authorities, since we no longer get those from the buildpack-deps:jessie-curl image, which is not part of this image's ancestry. It then also installs the native dependencies common to all .NET Core applications.

Layer 4

This layer is named microsoft/dotnet and tagged 1.1.2-runtime which can be found here.

This image installs curl and then uses it to download the dotnet runtime binaries. These are extracted and the archive then removed.
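Roughly, the steps look like this (the download URL is illustrative):

FROM microsoft/dotnet:1.1-runtime-deps

# curl is needed to fetch the runtime archive
RUN apt-get update \
  && apt-get install -y --no-install-recommends curl \
  && rm -rf /var/lib/apt/lists/*

# Download and extract the .NET Core runtime, then remove the archive
ENV DOTNET_VERSION 1.1.2
RUN curl -SL "https://dotnetcli.blob.core.windows.net/dotnet/Runtime/$DOTNET_VERSION/dotnet-debian-x64.$DOTNET_VERSION.tar.gz" --output dotnet.tar.gz \
  && mkdir -p /usr/share/dotnet \
  && tar -zxf dotnet.tar.gz -C /usr/share/dotnet \
  && rm dotnet.tar.gz \
  && ln -s /usr/share/dotnet/dotnet /usr/bin/dotnet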

Layer 5

This is the final layer before your application files. It's named microsoft/aspnetcore and tagged with 1.1.2 for the latest 1.1.x version. It can be found here.

Starting from the dotnet runtime image, this sets the URLs environment variable and populates the NuGet package cache, as we saw with the build image. As explained in the documentation, it also includes a set of native images for all of the ASP.NET Core libraries. These are intended to speed up the start-up of the container, since they are precompiled to native code and don't need to be JITed.

Using the Runtime Image

The intended workflow for .NET Core based ASP.NET Docker images is to create a final image that contains your pre-built files, and specifically only the files explicitly required by the application at runtime. This will generally be the dlls for your application and any dependencies.

There are a couple of strategies to achieve these smaller images. For this blog post I’m going to concentrate on a manual process we can follow locally to create a runtime-only image with our built application. It’s very likely that this is not how you’ll end up producing these images for real projects and real deployment scenarios, but I think it’s useful to see this approach first. In later blog posts we’ll expand on this and explore a couple of strategies to use Docker containers to build our code.

I’ve included a simple demo application that you can use to follow along with this post. It contains a single ASP.NET Core API project and includes a dockerfile which will define an image based on the lightweight aspnetcore image. If you want to follow along you can get the code from GitHub. Let’s look at the contents of the dockerfile.
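It contains just four instructions (shown here with the entrypoint in exec form):

FROM microsoft/aspnetcore:1.1
WORKDIR /app
COPY ./publish .
ENTRYPOINT ["dotnet", "DockerDotNetDevsSample3.dll"]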

Much of this looks very similar to the dockerfiles we've looked at in my previous posts, but with some key differences. The main one is that this dockerfile defines an image based on the aspnetcore image and not the larger aspnetcore-build image.

You’ll then notice that this dockerfile expects to copy in files from a publish folder. In order for this file to work, we will first need to publish our application to that location. To publish the solution I’m going to use the command line to run dotnet restore and then use the following command:

dotnet publish -c Release -o ../../publish

The output from running this command looks like this:

Microsoft (R) Build Engine version 15.3.117.23532
Copyright (C) Microsoft Corporation. All rights reserved.

DockerDotNetDevsSample3 -> E:\Software Development\Projects\DockerDotNetDevsSample3\src\DockerDotNetDevsSample3\bin\Release\netcoreapp1.1\DockerDotNetDevsSample3.dll
DockerDotNetDevsSample3 -> E:\Software Development\Projects\DockerDotNetDevsSample3\publish\

This command uses the .NET SDK to trigger a build of the application and then publishes the required files into the publish folder. In my case it copies the publish output to a folder named publish in the root of my solution, the same location as my dockerfile; I do this by passing the path for the published output using the -o switch. We also publish in release mode, using the -c switch to set the configuration. We'll pretend we're using this image for a deployment to production, so Release makes sense.

Now that we have some files in our publish folder, the main one being the dll for our application, we can use those files inside our image.

Back in the dockerfile, after copying all of the published files into the image, you'll notice that we no longer run the dotnet restore and dotnet build commands. In fact, trying to do so would fail, since the base image does not include the SDK, so these commands would not be recognised. We already have our restored and built files, which we copied into the image.

The final difference you will see is that the entrypoint for this image is a bit different. In our earlier examples we used dotnet run in the working directory containing our csproj file; again, this relied on the SDK, which we don't have. This dockerfile instead runs the dotnet host directly against DockerDotNetDevsSample3.dll. dotnet will bootstrap the runtime and call into the main method of our application.

Let’s build an image from this dockerfile and take a look at what happens.

docker build -t dockerdemo3 .

The output looks like this:

Sending build context to Docker daemon 33.14 MB
Step 1/4 : FROM microsoft/aspnetcore:1.1
---> 3b1cb606ea82
Step 2/4 : WORKDIR /app
---> c20f4b67da95
Removing intermediate container 2a2cf55d8c10
Step 3/4 : COPY ./publish .
---> 23f83ca25308
Removing intermediate container cdf2a0a1c6c6
Step 4/4 : ENTRYPOINT dotnet DockerDotNetDevsSample3.dll
---> Running in 1783718c0ea2
---> 989d5b6eae63
Removing intermediate container 1783718c0ea2
Successfully built 989d5b6eae63
SECURITY WARNING: You are building a Docker image from Windows against a non-Windows Docker host. All files and directories added to build context will have '-rwxr-xr-x' permissions. It is recommended to double check and reset permissions for sensitive files and directories.

As you can see, the docker build is very quick this time since it is not running the restore and .NET build steps. It’s grabbing the pre-built files and setting up the entry point.

I can now run a container from this image using:

docker run -d -p 8080:80 dockerdemo3

Navigating to http://localhost:8080/api/values we should see the data from the API.
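If you prefer the command line, curl works too. Assuming the unmodified API template, the response looks like this:

curl http://localhost:8080/api/values
["value1","value2"]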

Summary

In this post we've looked in some detail at the layers that make up the aspnetcore-build image and compared them to the layers in the aspnetcore image, which includes just the .NET Core runtime. We've then seen how we can generate a small runtime-based image for our own application, which will be much smaller and therefore better to store in a remote registry and quicker to pull down. In future posts we'll look at other methods that allow us to use Docker for the build and publish steps within a build system, as well as some other things we can do to minimise the size of our image layers.

Other Posts In This Series

Part 1 – Docker for .NET Developers Introduction
Part 2 – Working with Docker files
Part 3 – Why we started using Docker with ASP.NET Core
Part 4 – Working with docker-compose and multiple ASP.NET Core Microservices
Part 5 – This post
Part 6 – Using Docker for Build and Continuous Deployment
Part 7 – Setting up Amazon EC2 Container Registry



