Docker for .NET Developers (Part 5) Exploring ASP.NET Runtime Docker Images

So far in previous posts we’ve been looking at basic demo dockerfiles which use the aspnetcore-build base image. This is okay for testing but does present some issues for actual deployment.

One disadvantage of the build image is its size. Since it contains all of the elements needed to build .NET Core applications it is fairly bloated and not something we would want to be using as a unit of deployment. It contains things like the full .NET Core SDK (which itself includes MSBuild), Node.js, Grunt, Gulp and a package cache for the pre-restored .NET packages. In all, this accounts for an image of around 1.2GB in size. You have to consider the network traffic that pushing around such large Docker images will introduce. If you use an external container registry (we’ll talk about those in a later post) such as Docker Hub, you will have to ship up the full size of the large SDK based image each time something changes.

Dissecting the aspnetcore-build Image

While it’s not really necessary to know the intricate details of the composition of the aspnetcore-build image, I thought it would be interesting to look a little at how it’s put together. As I’ve described previously, Docker images are layered. Each layer generally adds one thing or a set of related things into an image. Layers are immutable but you can base off of the previous layers and add in your layer on top. This is how you get your application into an image.

The ASP.NET Core build image is built up from a number of layers.

Layer 1

Starting from the bottom there is an official Docker image called scratch which is an empty base image.

Layer 2

The next layer is the Debian Linux OS. The .NET Core images are based on Debian 8 which is also known as Jessie. The image is named debian and also tagged with jessie. You can find the source files here.

Its dockerfile is pretty basic.
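It boils down to just three instructions, along these lines:

FROM scratch
ADD rootfs.tar.xz /
CMD ["bash"]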

It starts with the scratch base image and then uses the ADD statement to bring in the tarball containing the Debian root file system. One important thing to highlight here is the use of ADD and not COPY. Previously in my samples we used COPY in our dockerfile to copy in contents from the source directory into a destination directory inside the image. ADD is similar, but it does one important extra thing: it will decompress known tar archives. Since the rootfs.tar.xz is a known tar type, its contents are uncompressed into the specified directory, extracting all of the core Debian file system. I downloaded this file and it’s 117MB in size.

The final line, CMD ["bash"], provides a default command that will run when the container starts. In this case it runs the bash shell. CMD is different from RUN in that it does not execute at build time, only at runtime.

Layer 3

The next layer is buildpack-deps:jessie-curl – Source files are here.
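Its dockerfile looks roughly like this:

FROM debian:jessie

RUN apt-get update && apt-get install -y --no-install-recommends \
        ca-certificates \
        curl \
        wget \
    && rm -rf /var/lib/apt/lists/*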

On top of the base image, a single RUN instruction executes three commands. You’ll notice each command is joined with an &&. Each RUN line in a dockerfile will result in a new intermediate image during build. To combat this in cases where we are doing related work, the commands can be strung together under a single RUN statement. This particular set of commands is a pretty common pattern and uses apt-get, a command line tool for working with application packages in Debian.

The structure it follows is listed in the Docker best practices as a way to ensure the latest packages are retrieved. apt-get update simply updates the package lists with new packages and available upgrades to existing packages. Combining the update and install in a single RUN statement is the technique known as “cache busting”. It then installs three packages using apt-get install.

I had to Google a bit, but ca-certificates installs common certificate authorities based on those that ship with Mozilla. These allow SSL applications to verify the authenticity of SSL connections. It then installs curl, a command line tool for transferring data via the URL syntax. Finally, wget is a network utility used to retrieve files from the web using HTTP(S) and FTP.

The backslashes are another common convention in production dockerfiles. The backslash is a line continuation character that allows a single instruction to be split over multiple lines. It’s used to improve readability and the pattern here puts each new package onto its own line so it’s easier to see the individual packages that will end up being installed. The apt-get command allows multiple packages to be specified with a space between packages.

The final command removes anything in the /var/lib/apt/lists/ directory. This is where the updated package lists that were pulled down using apt-get update are stored. This is another good example of best practice: ensuring that no files remain in the image that are not needed at runtime helps keep the image size down.

Layer 4

The next layer is buildpack-deps:jessie-scm – Source files are also found here.
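It follows the same shape as the previous layer; roughly:

FROM buildpack-deps:jessie-curl

RUN apt-get update && apt-get install -y --no-install-recommends \
        bzr \
        git \
        mercurial \
        openssh-client \
        subversion \
        procps \
    && rm -rf /var/lib/apt/lists/*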

This layer uses a similar pattern to the layer before it to install some packages via apt-get. Most of these are packages for the common distributed version control applications such as git. openssh-client installs a secure shell (SSH) client for secure access to remote machines, and the procps package provides utilities for inspecting running processes, such as ps and top.

Layer 5

The next layer is the microsoft/dotnet layer which will include the .NET Core SDK bits. The exact image will depend on which tag you choose since there are many tagged versions for the different SDK versions. They only really differ in that they install the correct version for your requirements. I’ll look at the 1.1.2-sdk tagged image. You can find the source here.

This dockerfile has a few comments and those explain the high level steps. I won’t dive into everything as it’s reasonably clear what’s happening. First the .NET CLI dependencies are installed via apt-get. Again this uses the pattern we’ve seen earlier.

Next the .NET Core SDK is downloaded in a tar.gz format. This is extracted and the tar file then removed. Finally it uses the Linux ln command to create a symbolic link so that /usr/bin/dotnet points to the dotnet executable under /usr/share/dotnet.

The final section populates the local NuGet package cache by creating a new dotnet project and then removing it and any scratch files which aren’t needed.
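To give a flavour of this, here is an illustrative sketch of the interesting part of the SDK dockerfile. The download URL and checksum verification are elided and the version number varies by tag, so treat it as a sketch rather than the file itself:

ENV DOTNET_SDK_VERSION 1.1.2

# download and extract the SDK, then link the dotnet executable onto the PATH
RUN curl -SL --output dotnet.tar.gz <SDK download URL for $DOTNET_SDK_VERSION> \
    && mkdir -p /usr/share/dotnet \
    && tar -zxf dotnet.tar.gz -C /usr/share/dotnet \
    && rm dotnet.tar.gz \
    && ln -s /usr/share/dotnet/dotnet /usr/bin/dotnet

# warm up the local NuGet package cache with a throwaway project
RUN mkdir warmup \
    && cd warmup \
    && dotnet new console \
    && cd .. \
    && rm -rf warmup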

Layer 6

The final layer before you start adding your own application is the microsoft/aspnetcore-build image layer. Again, there are variances based on the different SDK versions. The latest 1.1.2 image source can be found here.

First it sets some default environment variables. You’ll see for example it sets ENV ASPNETCORE_URLS http://+:80, so unless you override the values using the WebHostBuilder.UseUrls extension your ASP.NET Core application will listen on port 80 inside the container.

The next two steps do some funky Node.js setup which I won’t dive into.

Next it warms up the NuGet package cache. This time it uses a packagescache.csproj which, if you take a look, simply includes package references to all of the main ASP.NET related packages. It then calls dotnet restore which will download the packages into the package cache. It cleans up the warmup folder after this.
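A rough sketch of that step (the paths and layout here are illustrative, not copied from the real file) would be:

# restore against a csproj that references the common ASP.NET Core packages,
# leaving the downloaded packages in the image's NuGet cache
COPY packagescache.csproj /tmp/warmup/
RUN dotnet restore /tmp/warmup/packagescache.csproj \
    && rm -rf /tmp/warmup/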

Finally it sets the working directory to the root path so that the image is clean to start building on in the next layer which will include your application.

Runtime Images

Given the size of the build images and the fact that there’s no need to include the files used to build your application when you deploy it, it is much better practice to try and reduce the contents of your final image to make it as small as possible. It’s also important to optimise it for rapid start-up times. That’s exactly what the aspnetcore image is for. This image only contains the minimal .NET Core runtime and so results in a much smaller base image size of 316MB. It’s about one quarter of the size of the build image! This means that it doesn’t include the SDK, so it cannot issue commands such as dotnet build and dotnet restore. It can only bootstrap compiled .NET Core assemblies.

Dissecting the microsoft/aspnetcore image

As we did with the build image, we’ll take a look at the differences in the runtime image.

Layers 1 and 2

The first two layers that make up the final aspnetcore image are the same as with the build image. After the base debian:jessie layer, though, things differ.

Layer 3

This layer is named microsoft/dotnet and is tagged for the different runtime versions. I’ll look at the 1.1-runtime-deps tagged image which can be found here.

The dockerfile for this layer is:
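Reconstructed roughly from the source (the exact package list can differ slightly between versions, so treat this as a sketch):

FROM debian:jessie

RUN apt-get update \
    && apt-get install -y --no-install-recommends \
        ca-certificates \
        libc6 \
        libcurl3 \
        libgcc1 \
        libgssapi-krb5-2 \
        libicu52 \
        liblttng-ust0 \
        libssl1.0.0 \
        libstdc++6 \
        libunwind8 \
        libuuid1 \
        zlib1g \
    && rm -rf /var/lib/apt/lists/*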

This installs just the certificate authorities, since we no longer get those from the buildpack-deps:jessie-curl image, which is not part of this image’s layers. It then also installs the common .NET Core dependencies.

Layer 4

This layer is named microsoft/dotnet and tagged 1.1.2-runtime which can be found here.

This image installs curl and then uses that to download the dotnet runtime binaries. These are extracted and the tar file removed.
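The pattern mirrors the SDK image we saw earlier; a rough sketch (with the download URL and checksum verification elided) looks like this:

RUN apt-get update \
    && apt-get install -y --no-install-recommends curl \
    && rm -rf /var/lib/apt/lists/*

ENV DOTNET_VERSION 1.1.2

# download and extract the runtime, then link the dotnet executable onto the PATH
RUN curl -SL --output dotnet.tar.gz <runtime download URL for $DOTNET_VERSION> \
    && mkdir -p /usr/share/dotnet \
    && tar -zxf dotnet.tar.gz -C /usr/share/dotnet \
    && rm dotnet.tar.gz \
    && ln -s /usr/share/dotnet/dotnet /usr/bin/dotnet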

Layer 5

The final layer before your application files is named microsoft/aspnetcore and tagged with 1.1.2 for the latest 1.1.x version. It can be found here.

Starting with the dotnet runtime image, this sets the URL environment variable and populates the NuGet package cache as we saw with the build image. As explained in the documentation it also includes a set of native images for all of the ASP.NET Core libraries. These are intended to speed up the start-up of the container since they are native images and don’t need to be JITed.

Using the Runtime Image

The intended workflow for .NET Core based ASP.NET Docker images is to create a final image that contains your pre-built files, and specifically only the files explicitly required by the application at runtime. This will generally be the dlls for your application and any dependencies.

There are a couple of strategies to achieve these smaller images. For this blog post I’m going to concentrate on a manual process we can follow locally to create a runtime-only image with our built application. It’s very likely that this is not how you’ll end up producing these images for real projects and real deployment scenarios, but I think it’s useful to see this approach first. In later blog posts we’ll expand on this and explore a couple of strategies to use Docker containers to build our code.

I’ve included a simple demo application that you can use to follow along with this post. It contains a single ASP.NET Core API project and includes a dockerfile which will define an image based on the lightweight aspnetcore image. If you want to follow along you can get the code from GitHub. Let’s look at the contents of the dockerfile.
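In essence the dockerfile boils down to just four instructions (you can see them echoed in the build output later in this post):

FROM microsoft/aspnetcore:1.1
WORKDIR /app
COPY ./publish .
ENTRYPOINT dotnet DockerDotNetDevsSample3.dll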

Much of this looks very similar to the dockerfiles we’ve looked at in my previous posts, but with some key differences. The main one is that this dockerfile defines an image based on the aspnetcore image and not the larger aspnetcore-build image.

You’ll then notice that this dockerfile expects to copy in files from a publish folder. In order for this file to work, we will first need to publish our application to that location. To publish the solution I’m going to use the command line to run dotnet restore and then use the following command:

dotnet publish -c Release -o ../../publish

The output from running this command looks like this:

Microsoft (R) Build Engine version 15.3.117.23532
Copyright (C) Microsoft Corporation. All rights reserved.

DockerDotNetDevsSample3 -> E:\Software Development\Projects\DockerDotNetDevsSample3\src\DockerDotNetDevsSample3\bin\Release\netcoreapp1.1\DockerDotNetDevsSample3.dll
DockerDotNetDevsSample3 -> E:\Software Development\Projects\DockerDotNetDevsSample3\publish\

This command uses the .NET SDK to trigger a build of the application and then publishes the required files into the publish folder. In my case this produces the publish output and copies it to a folder named publish in the root of my solution, the same location as my dockerfile. I do this by passing in the path for the published output using -o. We also set it to publish in release mode using the -c switch to set the configuration. We’ll pretend we’d use this image for a deployment somewhere to production, so release makes sense.

Now that we have some files in our publish folder, the main one being the dll for our assembly, we will be able to use those files inside our image.

Back to the dockerfile, after copying all of the published files into the container you’ll notice that we no longer need to run the dotnet restore and dotnet build commands. In fact, trying to do so would fail since the base image does not include the SDK, so these commands would not be known. We already have our restored and built files which we copied into the image.

The final difference you will see is that the entrypoint for this image is a bit different. In our earlier examples we used dotnet run in the working directory containing our csproj file. Again this relied on the SDK which we don’t have. This dockerfile runs dotnet directly against the DockerDotNetDevsSample3.dll. The dotnet host will bootstrap and fire into the main method of our application.

Let’s build an image from this dockerfile and take a look at what happens.

docker build -t dockerdemo3 .

The output looks like this:

Sending build context to Docker daemon 33.14 MB
Step 1/4 : FROM microsoft/aspnetcore:1.1
---> 3b1cb606ea82
Step 2/4 : WORKDIR /app
---> c20f4b67da95
Removing intermediate container 2a2cf55d8c10
Step 3/4 : COPY ./publish .
---> 23f83ca25308
Removing intermediate container cdf2a0a1c6c6
Step 4/4 : ENTRYPOINT dotnet DockerDotNetDevsSample3.dll
---> Running in 1783718c0ea2
---> 989d5b6eae63
Removing intermediate container 1783718c0ea2
Successfully built 989d5b6eae63
SECURITY WARNING: You are building a Docker image from Windows against a non-Windows Docker host. All files and directories added to build context will have '-rwxr-xr-x' permissions. It is recommended to double check and reset permissions for sensitive files and directories.

As you can see, the docker build is very quick this time since it is not running the restore and .NET build steps. It’s grabbing the pre-built files and setting up the entry point.

I can now run this image using

docker run -d -p 8080:80 dockerdemo3

Navigating to http://localhost:8080/api/values we should see the data from the API.

Summary

In this post we’ve looked in some detail at the layers that make up the aspnetcore-build image and compared them to the layers in the aspnetcore image which just includes the .NET Core runtime. We’ve then seen how we can generate a small runtime based image for our own application which will be much smaller and therefore better to store in a remote registry and quicker to pull down. In future posts we’ll look at other methods that allow us to use Docker for the build and publish steps within a build system, as well as looking at some other things we can do to minimise the size of our image layers.

Other Posts In This Series

Part 1 – Docker for .NET Developers Introduction
Part 2 – Working with Docker files
Part 3 – Why we started using Docker with ASP.NET Core
Part 4 – Working with docker-compose and multiple ASP.NET Core Microservices
Part 5 – This post

Docker for .NET Developers (Part 4) Working with docker-compose and multiple ASP.NET Core microservices

In Part 3 we looked at one of the motivations behind using Docker with ASP.NET Core: enabling simpler developer processes. In this post we’ll look at using docker-compose files to define and run multi-container systems. I will take a look at how we can build a sample application, containing two services that are spun up at the same time with a single command.

Working with docker-compose

This will be a simplified example of what we are doing with our front end teams on our project. To keep it easy to work with, I’ll include everything in the same solution and the focus will be mostly on the docker-compose format and how using docker-compose can simplify starting and managing multi-container systems. If you want to try out the code for yourself I have uploaded the source from this post to GitHub.

This solution contains two ASP.NET Core API projects. One simulating a back end API service and another simulating a front end API service. This is a simplified view on what we have in our real system. Our front end developers need to call to the front end API service in order to gather some data to expose on the UI. In this example we can pretend that the back end API is providing some data to the front end API, which it will enrich and reformat in order to provide final output.

In our real system, we have each service as a separate repository and Visual Studio solution. We have a specialised query API which interacts with ElasticSearch and provides this data for consumption by multiple services. One of those services is our report API which translates the returned data into a model suitable for use by the front end. In our first iteration of the development flow we have our front end developers pulling the latest source for each of the API systems, but currently we are reworking that to produce a more optimal flow. However, it’s still very useful to know how to use docker-compose to start up and maintain multiple, related Docker containers.

If you take a look in each of the projects you’ll see I’ve included a dockerfile in the root. These dockerfiles are very much like the one I demonstrated in detail in part 2 of this series. They use the aspnetcore-build Docker image to build and then run our application code. Again, this is not exactly how you will want to end up building your images for deployment in the real-world, but it works for this example.

The docker-compose file structure

If you take a look in the root of the solution, I’ve included a docker-compose.yml file.

Docker Compose, as described in the Docker documentation, is a command line tool which allows us to define and run multi-container Docker applications. It uses a yaml based file in which you can define one or more services. Each service will be started as a container and using this file you can provide a practical way for someone to start up a set of related containers, with defined dependencies between each component.

If we take a look at the docker-compose file in my sample it looks like this:
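A sketch of it is below. The exact project paths, the published host port and the name of the endpoint setting come from my sample solution, so treat those specifics as illustrative:

version: '3.2'

services:
  backend-service:
    build:
      context: ./src/BackendService
    container_name: backend-service

  frontend-service:
    build:
      context: ./src/FrontendService
    ports:
      - "8080:80"
    environment:
      - BackendServiceEndpoint=http://backend-service:80
    depends_on:
      - backend-service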

First we specify a version for the docker-compose file. I’m using a fairly recent 3.2 version. This simply defines the features available in the docker-compose file.

It then has a services node. It’s in here that we will create a definition for each service in our system. The first one I define is a backend-service. The build element allows me to define the context. This is the path to the directory where the dockerfile resides for that service. In my case I navigate to the path of the back end project, which is where I located the dockerfile for that API.

In this docker-compose file I provide an explicit name for this container that will be used when starting it up. That’s all there is for the back end service.

The next service I define is the front end service. This looks similar to the back end one, but includes some extra properties. The first is the ports option. This allows me to expose ports from the container out to the host which runs it. As this is the front end API and we expect to be able to call it via the host, we must provide a publicly exposed port on the host machine, otherwise we would not be able to access it. We define this with host port (the port to use on the host) and then the container port (the internal port the application runs on inside the container). These are separated with a colon.

You may be wondering why we didn’t do this on the back end service. The front end service needs to talk to that in order to retrieve data, so how is that working? This is where some magic happens. Docker provides a DNS system used to locate containers, and docker-compose sets up a Docker network for your defined services. All containers that are part of the docker-compose services definition are reachable by other containers on that network. This means that we don’t have to explicitly expose the back end port for the containers to communicate. As we’re not expecting to allow external systems to call the back end API it’s best to avoid exposing its port on the host. I can limit my exposure to only the single front end port that I need to expose.

The next configuration option I define is the environment section. This allows us to set environment variables inside the container that will be started using this docker-compose file. On my front end service I used the standard ASP.NET Core configuration code to define a setting for the path to the back end service. This is required as this will be different in development and production. The ASP.NET Core configuration is layered, and although it will have loaded a value for the back end setting from the appsettings.json file, it can be overridden using an environment variable. This powerful configuration structure complements Docker very nicely since we can always specify the required environment variables for each running container.

You’ll note I’ve used http://backend-service:80 for the value of the back end endpoint setting. How does that work? Remember that I mentioned that Docker has its own DNS and that docker-compose creates a network for the set of services. This allows me to address the other containers using the name I gave the service. Note that this isn’t the container name, it’s the service name (the key under which the service is defined). The Docker DNS system will resolve this name between the containers. I use the port that the service runs on inside the container, which by default will be port 80 for the ASP.NET Core API.

The final new element I’ve included is the depends_on section. This allows me to define a child/parent dependency for the containers. Here I state that the front end service depends on the back end service. It will be started after the back end is running. On our project we use this to start up a Postgres database as part of our chain of services. Our front end APIs depend on the database being started first in order to start up and seed their data. That’s all we need to do to define our basic docker-compose file.

docker-compose Commands

Now that we have our docker-compose file we can use it to start up our containers. We can do this by dropping into a command line at the path where our docker-compose file is defined. There are a couple of commands we can now use. I’ll start with:

docker-compose up -d

This will inspect the docker-compose file in the root of the path where the command is run. It will build and start up a container for each of the defined services. The -d argument specifies that the containers start in detached mode and run in the background. If you choose to leave this off you will see a combined streamed output from the console for each of the containers. If there are no images available for those containers, they will be built automatically.

This single command provides a really handy way to start up multi-container systems. While my example is quite simple, having only two services, our real world front end docker-compose file contains 4 API based services, one Postgres database container and sometimes even an ElasticSearch container. Each of those can be built and run manually using the individual Docker commands against their respective dockerfile, but it’s much nicer for our front end developers to use docker-compose and a single command.

I mentioned there were other commands we can use. While docker-compose up will build images for us if they do not already exist, once images exist, it will keep using those images. If we make changes to the source files and expect new images to be built we can use docker-compose build to explicitly build the new images. This will only build them, and will not start any containers. You can also do a build and up in a single command using docker-compose up --build. This will build new images and then start all of the containers.

I also want to draw attention to the --no-cache option as well. This forces Docker not to use the cached image layers and to recreate each layer again. I’ve found this useful when testing a docker-compose file change to ensure all steps are working as expected.

Once containers are running you might wonder how we can stop them. You can use docker-compose stop to stop all of the containers defined in the docker-compose file. To cleanup you can use docker-compose rm to remove the stopped containers. There is also docker-compose down if you would like to stop and then remove the containers using a single command.
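To recap, the day-to-day commands look like this:

docker-compose up -d        # build images if needed and start all services in the background
docker-compose build        # rebuild the images without starting any containers
docker-compose up --build   # rebuild the images and then start all of the containers
docker-compose stop         # stop all of the running containers
docker-compose rm           # remove the stopped containers
docker-compose down         # stop and remove the containers in a single command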

Summary

In this blog post I have described a real-world scenario that we encountered and which I hope demonstrates a benefit of using Docker during development. By including Docker in our developer flow we have removed the need for front end developers to use a Windows VM and to build our .NET code themselves with an IDE such as Visual Studio. Builds are quicker and there are very few dependencies required to get a new front end developer on-boarded to our project. We have no complex “setup your developer machine” documents containing specific versions of applications, registry settings and IIS configuration.

In particular we focused on how this is useful for multiple microservice based architectures where often a single working system may be built from many smaller services. We used docker-compose to define our set of services and using a single command we’re able to start the components on a developer machine. Using this approach we can define and co-ordinate the elements making up the system so that front end developers can run the environment on their devices with ease.

Finally we explored the structure of a simple docker-compose file, looking at how we can define the services, set container environment variables and expose ports.

Other Posts In This Series

Part 1 – Docker for .NET Developers Introduction
Part 2 – Working with Docker files
Part 3 – Why we started using Docker with ASP.NET Core
Part 4 – This post
Part 5 – Exploring ASP.NET Runtime Docker Images

Docker for .NET Developers (Part 3) Why we started using Docker with ASP.NET Core

In the prior two posts in this series we took a look at the main Docker terminology and looked at creating a basic dockerfile to define a Docker image containing an ASP.NET Core API application. In this post I want to switch it up a bit and spend a little time sharing a specific practical reason that led our team to start using Docker in our development flow. This was our first step in the Docker journey that we now find ourselves on and expands on the summary I introduced in part 1.

Why we started using Docker

Warning! This post has no code examples. Instead what I’ll be describing is a real-world situation that led us to start using Docker for our new project. I want to highlight why Docker was a good choice given our requirements and hopefully you’ll find cases in your work where it may be relevant to consider Docker yourself. I think it’s useful to share this story; while there’s a lot of excitement about Docker and some clear benefits, it’s worth reflecting on a practical advantage it brought us with very little effort. This is simply the start of the journey and I will be expanding on how our use evolved as the series continues. I hope you find this part useful, but don’t worry, I won’t be offended if you want to skip onto the docker-compose examples in part 4! See, I even provided a link!

As I explained in the first post, our new greenfield project at work kicked off in 2016. We needed to build a system to provide data analytics over a very large data set of event based data. I won’t cover the architecture of the back end components at this stage. For this post it’s the front end where I want to demonstrate the first benefit we were able to gain from using Docker.

At work, our front end and back end teams are split and work on their areas of expertise accordingly. On our current platforms, we have an in-house MVC framework which uses a custom templating language for the front end pages. When our front end developers need to perform work on the UI, they must pull down and update the main platform code on their devices in order to spin up the site locally. Given that this is a large platform, the size of the changes they need to pull could be fairly large. In order to work with the site, they must compile the code and run it locally, which, as this platform is built on the full .NET framework, requires Windows and IIS. Therefore, each developer, most of whom use a Mac, must have a Windows VM installed, or access to a Remote VM in order to work with the code.

Over time this flow has evolved to make the process as efficient as possible but it’s still not an ideal solution. With this new project we had the opportunity to try a new more modern approach. Given that the UI for this analytics style application was very interactive, we decided that it made sense to develop a SPA style front end. This had the benefit that it enabled the front end developers to choose a technology that suited them. After a bit of research they landed on Vue.js.

In order to support the front end SPA, we agreed that we would provide a number of REST API services, each providing a specific, bounded functionality. We could have chosen to build one larger, all-encompassing API application, but decided that the smaller APIs made scaling each component much easier. Scaling would be dependent on their own unique load and allow us to better use the resources of the server. It also means we can keep the code separated and hopefully easier to maintain. We can change each component independently and work more rapidly.

We ended up with a design that included 3 web facing API services, each backed by its own database. We have a user API which handles authentication and authorisation for the application, as well as other user management services. We have a report API that enables the creation and running of reports and we have a schedule API used for creating scheduled reports and managing downloads. Finally, we have a back end API which is responsible for querying over ElasticSearch, processing the data and returning the resulting data. This query API is used by both the reporting API and our offline back end scheduled report service.

Here is a diagram showing the main components of our front end services architecture:

Docker microservice architecture

At the time we were creating our architecture, ASP.NET Core was in RC and we took a brave but, in the long run, good decision to use the new framework to develop our APIs. Not only were we interested in the performance and framework improvements, but we also knew we could begin to take advantage of the cross platform nature of .NET Core to allow us to look at different hosting options. Having recently started moving our systems into AWS, we were initially considering the possibility of using Linux VMs to host the APIs.

As we pursued this path, another benefit of this decision came to light. Now that we were cross platform and planning to target Linux for hosting, we realised we could start to look at Docker and containerisation as part of our workflow. This opened up some new possibilities and after talking through the concept we realised that we could make development for the front end team much easier by providing Docker images to run the API code. The big change this presented is that they would no longer need to run Windows in order to build the UI for the platform and they would no longer need Visual Studio in order to build the source code.

A downside of splitting things into smaller microservices is that in order for a system to function, you often need multiple things running together at the same time. In our case, each of the 4 main API services needed to be running for the front end to be able to fully function. Each of those also needed a database server, we had chosen Postgres, to support the data storage requirements. Spinning up all of these parts needed a little coordination.

When we started looking at Docker we realised that we could solve this problem using docker-compose files. Docker-compose provides a simple way to start multiple containers with a single command. In our case, each service is a separate Visual Studio solution and repository. This allows us to develop, build and deploy each part of the system individually. As long as the public facing API endpoints do not change then we can update the internals of each API with no impact on the other parts of the system. In each solution we include a dockerfile to describe a Docker image that will run the API service. These dockerfiles initially started very much like the example I showed in part 2 of this series. We have since gone on to optimise the files and I plan to explore that in a future post.

In addition to the dockerfiles we also provide a docker-compose file which, with a single command, can be used to build all of the required images. With a second command we can start all of the containers needed to support the front end. Our first iteration of the front end workflow relied on the front end developers pulling the latest source from each repository. They could then use the docker-compose build command which triggered image creation. Since the build is happening inside the Docker containers, they do not need to run Windows or even have the .NET SDK installed on their Macs. As part of the solution, we also utilised a public Docker image for Postgres as one of the services defined within our docker-compose file. This means that the front end team do not need to install Postgres on their own device either. Other than Docker, there are no dependencies to run the back end services required to develop the front end website. With these steps we had removed a barrier for the front end team and very much simplified our development process.

In the second iteration that we are now working on for the front end workflow we are providing the front end team with a new docker-compose file which pulls images from a private container registry running in AWS ECR. With this change there is no need for the front end developers to ever pull or update the source for the components they need. Instead, the compose file will pull the latest available images from the registry and have them up and running extremely quickly. We’re still investigating and testing how we want to finalise this part of our development workflow so it’s something I’ll share in a future post.

The same process proved really useful for QA and testing as well. Anyone involved with testing the system could pull the components down and run the full system in isolation on their machine. The front end website is also containerised and in this case we even used an ElasticSearch Docker image to allow a full end to end system to be started and tested on a machine, with few external dependencies. The dependencies we could not containerise were things such as access to Amazon AWS SQS and S3 for example.

One by-product of leveraging Docker for the front and back end developer flow is that we can start to eliminate the “it worked on my machine” arguments. By loading everything inside Docker we get to a place where everyone is working on an identical, repeatable environment. Because our image contains all of the dependencies, we can be sure we have matching versions and configuration every time it is run. We don’t have to ask developers to maintain correct versions of dependencies on their devices either.

Other Posts In This Series

Part 1 – Docker for .NET Developers Introduction
Part 2 – Working with Docker files
Part 3 – This post
Part 4 – Working with docker-compose and multiple ASP.NET Core microservices
Part 5 – Exploring ASP.NET Runtime Docker Images

Docker for .NET Developers (Part 2) Taking a look at our first dockerfile and building an image for an ASP.NET Core API service

In the first post in the series I introduced a few concepts and key terms you’ll need to know if you want to work with Docker. In this post I will cover the steps needed to begin working with Docker in your development environment. We’ll look at creating a very basic sample API service and a dockerfile that defines an image which can run the application inside a Linux container.

Getting Docker

Before we can begin using Docker we need to install it. There are a couple of options if, like me, you’re developing on Windows.

Docker for Windows

If you’re running Windows 10 Pro, your best option is to download and install Docker for Windows. You can get it from https://docs.docker.com/docker-for-windows/install/#install-docker-for-windows

Docker for Windows supports both Windows and Linux containers. To follow along, you’ll need to ensure that Docker is running in Linux mode, which is what I’ll be using in my initial examples. To switch between the modes you can right click the Docker task tray icon…

Switch between Linux and Windows with Docker for Windows

When running with Linux containers it will start a Linux VM for you inside Hyper-V. Once Docker for Windows is running you can use PowerShell to run Docker commands that will be passed through to the Linux host.

Docker Toolbox

If you don’t have Windows 10 Professional, don’t worry, there is another option. Docker also provides Docker Toolbox which includes VirtualBox, a virtualisation product from Oracle which can be installed freely. You can use this on earlier versions of Windows as well as Windows 10 Home.

You can download and install Docker Toolbox from https://docs.docker.com/toolbox/toolbox_install_windows/

Docker Toolbox will create and load a small Linux VM for you inside VirtualBox which will then become the host for Docker. This does add a layer of complexity as you may need to configure port forwarding from the VirtualBox VM host out into your Windows environment. You end up with another layer to manage, but once you’re up and running it’s fairly easy to work with.

Once you have it installed you can run the Docker Quickstart Terminal shortcut to start the Linux VM and attach to it. Once that loads you can run Docker commands on the VM from the bash shell.

Our First Dockerfile

To demo the process of manually creating a dockerfile I’m going to build up a small sample API application. Inside a new empty directory I have created an ASP.NET Core 1.1 API project. This is just a default API project from the Visual Studio templates. I like to structure my solutions in a similar way to the Microsoft repositories so I do move a few things around. In my root folder I have my solution file. I then have a “src” folder, inside which I include any projects that are part of the solution.

With a basic solution in place, I like to create a “docker” solution folder inside Visual Studio and inside that I create a new text file named dockerfile (without an extension). I locate the dockerfile physically in the root of my solution, alongside the sln file.

Folder structure of our sample docker solution

We can now create our first dockerfile by editing the empty file. I work with it via Visual Studio, but it’s just a plain text file so you can use any editor you like. For now we’re going to create a naïve and quite basic dockerfile to demonstrate some of the main concepts. In later posts I’ll be showing a more optimal way to build and structure our dockerfile(s). The initial dockerfile looks like this:

FROM microsoft/aspnetcore-build:1.1

Docker images are like onions and are layered up from multiple base images. Each new image builds on top of the previous image until a complete image is built, containing all of the components it needs. As a result, every dockerfile you produce will start with a FROM statement, which defines its base image.

In this example I’m using a Microsoft maintained image for aspnetcore called aspnetcore-build. This particular image includes the .NET Core SDK to enable building/publishing the code. I also specify a tag after the colon. All images can be tagged with zero or more tags. This tag specifies that I want the image containing the 1.1 SDK for ASP.NET Core. If I did not include a specific tag, I would get the image tagged with latest, which at the time of writing this is the 1.1.x stream.

These base public images are hosted online at DockerHub. This is a Docker registry that is commonly used for hosting public images. A registry like this can be thought of in similar terms to NuGet for .NET packages. Docker knows to search the DockerHub registry for any images you specify. If you don’t already have a copy of the image cached locally, it will be pulled from the registry.

WORKDIR /app

The next line sets our working directory inside the container we are building. Any actions we perform will affect that working directory. As each line of the dockerfile is executed, it creates a new intermediate image, building up layers until you have your final image. We’ll look at this in more detail in a future post to see how we can optimise the layering to reduce the number of intermediate images and the time it takes to build a new image.

COPY . .

Next we perform a copy command. When copying you specify the source (on the host) and destination (in the image) of the files to copy. In this case, by using periods we’re copying from the host at the path where the Docker commands are being executed, which will be the root of our solution directory. In the image, since we also used a period, we are copying directly into the working directory, which in our case is /app.

RUN dotnet restore
RUN dotnet build

Next we execute two dotnet commands using RUN. The first runs the dotnet restore command which will perform a package restore from NuGet for all dependencies of our solution. Next I run a dotnet build command to produce the default application build.

WORKDIR /app/src/DockerDotNetDevsSample1

Next we switch the working directory to the directory containing the copied in project file.

ENTRYPOINT dotnet run

Finally we define the entry point to the image. This is the instruction to the image on how to start the process that it will run for us. In this case we tell it to execute the dotnet run command which will start up the ASP.NET Core API, hosted on Kestrel, and begin listening. By default the base aspnetcore image will set an environment variable that will tell the web host to listen on port 80 within the container.
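Putting those pieces together, the complete dockerfile looks like this:

FROM microsoft/aspnetcore-build:1.1
WORKDIR /app
COPY . .
RUN dotnet restore
RUN dotnet build
WORKDIR /app/src/DockerDotNetDevsSample1
ENTRYPOINT dotnet run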

Building the image

Now that we have a dockerfile defining our image we can use the Docker commands to create the image for us. On Windows when running Docker for Windows we can run the Docker commands directly from a PowerShell window. I opened up PowerShell and navigated to the root of our sample solution.

From there I run the build command:

docker build -t sample1 .

The inclusion of the -t option allows me to specify a tag for the image which will make working with it easier later on. The dot (period) at the end of the statement is important and tells Docker where to build from. In this case as I’m in the solution root already and my dockerfile is located there I can use a dot to represent the same location.

Docker will now begin to build my image.

Powershell output of docker build

Why is it downloading so much?

As I touched on already, Docker is based on layers of images. Each dockerfile specifies a FROM image which is its base image. It makes a small immutable change which is then available as the basis of the next layer. The aspnetcore-build image is exactly the same. It is based on a dotnet image from Microsoft and below that a few other layers until we get to the initial Debian Linux image. When we include the aspnetcore-build image in our FROM command it will pull down the required layers of images from the DockerHub. Each of these is cached on the host machine so they can be quickly reused when possible. Some of these image layers will be very small as they make incremental changes to their base images. In my example I had explicitly cleared all of the images on my machine so I could show this first time download of the images.

Docker Build Output

Here’s the full build output from my sample docker build:

PS E:\Software Development\Projects\DockerDotNetDevsSample1> docker build -t sample1 .
Sending build context to Docker daemon 1.993 MB
Step 1/7 : FROM microsoft/aspnetcore-build:1.1
1.1: Pulling from microsoft/aspnetcore-build
10a267c67f42: Pull complete
fb5937da9414: Pull complete
9021b2326a1e: Pull complete
5df21d865eab: Pull complete
e4db626d1d21: Pull complete
87b3f796757a: Pull complete
629d4f39b75b: Pull complete
21c29d072c6e: Pull complete
39d6d7136f1b: Pull complete
74021b8a9867: Pull complete
Digest: sha256:9251d6953ca2fccfee1968e000c78d90e0ce629821246166b2d353fd884d62bf
Status: Downloaded newer image for microsoft/aspnetcore-build:1.1
---> 3350f0076aca
Step 2/7 : WORKDIR /app
---> 93515c761d80
Removing intermediate container c78aa9397ee7
Step 3/7 : COPY . .
---> 8125a8d08325
Removing intermediate container 6d3db0a39d6a
Step 4/7 : RUN dotnet restore
---> Running in d0d8fa97f402
Restoring packages for /app/src/DockerDotNetDevsSample1/DockerDotNetDevsSample1.csproj...
Restoring packages for /app/src/DockerDotNetDevsSample1/DockerDotNetDevsSample1.csproj...
Installing System.IO.Pipes 4.0.0.
Installing System.Xml.XPath.XmlDocument 4.0.1.
Installing System.Resources.Writer 4.0.0.
Installing System.Runtime.Serialization.Xml 4.1.1.
Installing System.Diagnostics.TraceSource 4.0.0.
Installing Microsoft.NETCore.Jit 1.0.2.
Installing Microsoft.Build 15.1.548.
Installing Microsoft.Build.Tasks.Core 15.1.548.
Installing Microsoft.Build.Utilities.Core 15.1.548.
Installing Microsoft.Build.Framework 15.1.548.
Installing Microsoft.NETCore.Runtime.CoreCLR 1.0.2.
Installing Microsoft.NETCore.DotNetHostPolicy 1.0.1.
Installing Microsoft.Build.Runtime 15.1.548.
Installing Microsoft.NETCore.App 1.0.0.
Installing NuGet.Frameworks 3.5.0.
Installing Microsoft.Extensions.CommandLineUtils 1.0.1.
Installing Microsoft.VisualStudio.Web.CodeGeneration.Tools 1.0.0.
Restore completed in 3.87 sec for /app/src/DockerDotNetDevsSample1/DockerDotNetDevsSample1.csproj.
Installing Microsoft.AspNetCore.Cryptography.Internal 1.1.1.
Installing Microsoft.AspNetCore.DataProtection.Abstractions 1.1.1.
Installing Microsoft.DotNet.PlatformAbstractions 1.1.1.
Installing Microsoft.AspNetCore.Razor 1.1.1.
Installing Microsoft.AspNetCore.DataProtection 1.1.1.
Installing Microsoft.Extensions.DependencyModel 1.1.1.
Installing Microsoft.AspNetCore.ResponseCaching.Abstractions 1.1.1.
Installing Microsoft.AspNetCore.Authorization 1.1.1.
Installing Microsoft.AspNetCore.Mvc.Abstractions 1.1.2.
Installing Microsoft.Extensions.Globalization.CultureInfoCache 1.1.1.
Installing Microsoft.Extensions.Localization.Abstractions 1.1.1.
Installing Microsoft.AspNetCore.Razor.Runtime 1.1.1.
Installing Microsoft.AspNetCore.WebUtilities 1.0.0.
Installing Microsoft.Extensions.ObjectPool 1.0.0.
Installing Microsoft.Net.Http.Headers 1.1.1.
Installing Microsoft.AspNetCore.Antiforgery 1.1.1.
Installing Microsoft.Extensions.Logging.Debug 1.1.1.
Installing Microsoft.AspNetCore 1.1.1.
Installing Microsoft.ApplicationInsights.AspNetCore 2.0.0.
Installing Microsoft.AspNetCore.Mvc 1.1.2.
Installing Microsoft.AspNetCore.Server.Kestrel 1.1.1.
Installing Microsoft.Extensions.Logging.Console 1.1.1.
Installing Microsoft.Extensions.Configuration.EnvironmentVariables 1.1.1.
Installing Microsoft.Extensions.Configuration.Json 1.1.1.
Installing Microsoft.Extensions.Configuration.FileExtensions 1.1.1.
Installing Microsoft.AspNetCore.Routing 1.1.1.
Installing Microsoft.Extensions.WebEncoders 1.1.1.
Installing Microsoft.AspNetCore.Server.IISIntegration 1.1.1.
Installing Microsoft.AspNetCore.Html.Abstractions 1.1.1.
Installing Microsoft.AspNetCore.Hosting 1.1.1.
Installing Microsoft.AspNetCore.JsonPatch 1.1.1.
Installing Microsoft.AspNetCore.Cors 1.1.1.
Installing Microsoft.AspNetCore.Mvc.Core 1.1.2.
Installing Microsoft.AspNetCore.Diagnostics 1.1.1.
Installing Microsoft.Extensions.Options.ConfigurationExtensions 1.1.1.
Installing Microsoft.Extensions.Configuration 1.0.0.
Installing Microsoft.Extensions.DiagnosticAdapter 1.0.0.
Installing Microsoft.ApplicationInsights 2.2.0.
Installing Microsoft.Extensions.Configuration.Json 1.0.0.
Installing Microsoft.AspNetCore.Hosting 1.0.0.
Installing Microsoft.AspNetCore.Mvc.TagHelpers 1.1.2.
Installing Microsoft.AspNetCore.Mvc.Razor 1.1.2.
Installing Microsoft.AspNetCore.Mvc.Localization 1.1.2.
Installing Microsoft.AspNetCore.Mvc.DataAnnotations 1.1.2.
Installing Microsoft.AspNetCore.Mvc.Cors 1.1.2.
Installing Microsoft.AspNetCore.Mvc.Formatters.Json 1.1.2.
Installing Microsoft.AspNetCore.Mvc.ApiExplorer 1.1.2.
Installing Microsoft.AspNetCore.Mvc.ViewFeatures 1.1.2.
Installing Microsoft.Extensions.Configuration 1.1.1.
Installing Microsoft.Extensions.FileProviders.Physical 1.1.0.
Installing Microsoft.Extensions.ObjectPool 1.1.0.
Installing Microsoft.AspNetCore.Routing.Abstractions 1.1.1.
Installing Microsoft.AspNetCore.Http.Extensions 1.1.1.
Installing Microsoft.AspNetCore.Localization 1.1.1.
Installing Microsoft.AspNetCore.HttpOverrides 1.1.1.
Installing Microsoft.AspNetCore.Http 1.1.1.
Installing Microsoft.AspNetCore.WebUtilities 1.1.1.
Installing Microsoft.Extensions.Localization 1.1.1.
Installing Microsoft.AspNetCore.Diagnostics.Abstractions 1.1.1.
Installing Microsoft.Extensions.Configuration.Binder 1.1.1.
Installing Microsoft.Extensions.Configuration.FileExtensions 1.0.0.
Installing Microsoft.Extensions.Configuration.EnvironmentVariables 1.0.0.
Installing Microsoft.Extensions.Options 1.0.0.
Installing Microsoft.Extensions.Logging 1.0.0.
Installing Microsoft.Extensions.DependencyInjection 1.0.0.
Installing Microsoft.AspNetCore.Http 1.0.0.
Installing Microsoft.Extensions.FileSystemGlobbing 1.1.0.
Installing Microsoft.Extensions.FileProviders.Composite 1.1.0.
Installing Microsoft.AspNetCore.Mvc.Razor.Host 1.1.2.
Generating MSBuild file /app/src/DockerDotNetDevsSample1/obj/DockerDotNetDevsSample1.csproj.nuget.g.props.
Writing lock file to disk. Path: /app/src/DockerDotNetDevsSample1/obj/project.assets.json
Restore completed in 5.16 sec for /app/src/DockerDotNetDevsSample1/DockerDotNetDevsSample1.csproj.

NuGet Config files used:
/root/.nuget/NuGet/NuGet.Config

Feeds used:
https://api.nuget.org/v3/index.json

Installed:
86 package(s) to /app/src/DockerDotNetDevsSample1/DockerDotNetDevsSample1.csproj
---> 3ad561c9b58d
Removing intermediate container d0d8fa97f402
Step 5/7 : RUN dotnet build
---> Running in ae5eb32e269f
Microsoft (R) Build Engine version 15.1.1012.6693
Copyright (C) Microsoft Corporation. All rights reserved.

DockerDotNetDevsSample1 -> /app/src/DockerDotNetDevsSample1/bin/Debug/netcoreapp1.1/DockerDotNetDevsSample1.dll

Build succeeded.
0 Warning(s)
0 Error(s)

Time Elapsed 00:00:02.80
---> 87dfa1483f4e
Removing intermediate container ae5eb32e269f
Step 6/7 : WORKDIR /app/src/DockerDotNetDevsSample1
---> de5e09dfdc89
Removing intermediate container 05bf88ae0454
Step 7/7 : ENTRYPOINT dotnet run
---> Running in 5c580412a46a
---> f04465a14c84
Removing intermediate container 5c580412a46a
Successfully built f04465a14c84
SECURITY WARNING: You are building a Docker image from Windows against a non-Windows Docker host. All files and directories added to build context will have '-rwxr-xr-x' permissions. It is recommended to double check and reset permissions for sensitive files and directories.

As you will see, the build process executes each line from our dockerfile in order. At each stage a new intermediate container is built and the change from the next command applied. You can see the output of the package restore occurring for example, and the final build of the dotnet solution.

Running the container

Now that we have an image we can start up one or more containers using that image. A container is a running instance of an image. We can run the image using the following command:

docker run -p 8080:80 sample1

This tells Docker to run the image called sample1. Because we tagged our image with that name it’s easy for us to now start a container instance. Without a tag we would have had to use part of the randomly generated id instead. I also include a -p option which tells Docker that we want to expose a port from the running container through to the host. We define the port on the host that we want to use, and the port on the container we want to pass through. By default no ports are exposed which helps make containers secure.

Here is the output we see when running the above command:

PS E:\Software Development\Projects\DockerDotNetDevsSample1> docker run -p 8080:80 sample1
Hosting environment: Production
Content root path: /app/src/DockerDotNetDevsSample1
Now listening on: http://+:80
Application started. Press Ctrl+C to shut down.

Now that the container is running and a port has been mapped onto the host we can call the API. To test this I used Postman to build a request to send to the API, exposed to us on port 8080.

Postman testing of our Docker API service

The above command starts our container from our sample1 image, but as you will have noticed, we were joined to its terminal, so we saw its console output. That happens by default since docker run will attach to the container we are starting. This is fine for testing, but often we are not concerned with spitting out the console to our host.

A more common option is to start containers in detached mode:

docker run -d -p 8080:80 sample1

The -d option tells Docker that we want to start the container in detached mode. This means we won’t see the console output streamed from the container. It’s running, but in the background.

PS E:\Software Development\Projects\DockerDotNetDevsSample1> docker run -d -p 8080:80 sample1
c6e3335de246843b4c77ae0f73e61a2db912fc542669601323db22990b029e7a

We get shown the id for the container which we can then use to perform commands against it. For example, should we want to check what’s happening inside the container we can use the logs command to show the latest console messages.

PS E:\Software Development\Projects\DockerDotNetDevsSample1> docker logs c6
Hosting environment: Production
Content root path: /app/src/DockerDotNetDevsSample1
Now listening on: http://+:80
Application started. Press Ctrl+C to shut down.
PS E:\Software Development\Projects\DockerDotNetDevsSample1>

When sending commands we can use a shortened id to target a container. You only need to send the smallest amount of characters needed to uniquely identify the container. As nothing else is running with an id of c6 we can shorten the id significantly.

To stop the container we can use the command “docker stop c6” which will instruct it to gracefully end the process and stop.

Summary

In this post we’ve looked at the options to install and run Docker on a Windows device. We then looked at creating a basic solution and including a dockerfile so that we could define a Docker image intended to run our application. Finally we looked at how we can start a container from the image and some common commands to work with it. In future posts I’ll expand on this further and show how we’re using docker-compose to orchestrate multiple containers during development. We’ll then look at how we can create a CI build pipeline using Jenkins and eventually look at using AWS ECS to host live services running in containers.  

If you want to try out the code above for yourself I have uploaded the source from this post to GitHub.

Other Posts In This Series

Part 1 – Docker for .NET Developers Introduction
Part 2 – This Post
Part 3 – Why we started using Docker with ASP.NET Core
Part 4 – Working with docker-compose and multiple ASP.NET Core microservices
Part 5 – Exploring ASP.NET Runtime Docker Images

Docker for .NET Developers (Part 1) An introduction to Docker for .NET developers

Two words you will very likely be used to hearing quite often within our community at the moment are “microservices” and “Docker”. Both are topics of great interest and are generating excitement for developers and architects. In this new series of blog posts I want to cover Docker, what it is, why it might be of interest and specifically look at what it means for .NET developers. As a little background; my experience with Docker started last year when we began building a new big data analytics system. The requirement was to gather many millions of events from hundreds of individual source systems into ElasticSearch, then provide a way for users to flexibly report over that data, in real time.

Without going deep into specifics, we developed a system of queues, input processors and multiple back end services which analyse the source data. These services work with the data, aggregating it, shaping it for custom reporting and we provide various API services used by a front end SPA UI written in Vue.js. In essence these are microservices since each one is small and self-contained, providing a specific set of related functionality.

We knew we wanted to use ASP.NET Core for the API elements and very soon after that decision, we realised we could also take advantage of the cross platform nature of .NET Core to support easier development for our front end team on their Mac devices.

Historically the front end developers have had to use a Windows VM to work with our projects written in .NET. They pull the latest platform code to their devices, build it and run it so that they can work on the UI elements. This process has some overhead and has been something that we have wanted to streamline for some time. With this fresh project we were able to think about and implement improvements in the process.

The solution that we came up with was to provide Docker images of the back end services that the front end developers could quickly spin up on their development environments. They can do this without the need to run a Windows VM and the result has been a great productivity gain.

As containers and Docker were working so well for use in development, we also decided to use them as our build and deploy process onto the live environment. We are using AWS ECS (EC2 Container Service) to run the containers in the cloud, providing a scalable solution to host the various components.

This is our first time working with Docker and it has been an interesting learning experience for everyone involved. We still have more to learn and I’m sure more ways we can improve the process even further but what we have now is already providing great gains for us.

What is Docker?

There are a lot of articles and videos available via a quick Google that discuss what Docker is. To try and distil the essence, Docker is a containerisation technology and application platform that lets us package and deploy an application or service as an isolated unit containing all of its dependencies. In over simplified terms it can be thought of as a very lightweight, self contained virtual machine.

Docker containers run on top of a shared OS kernel, but in an isolated way. They are very lightweight which is where they offer an advantage over traditional VMs. You can often make better use of the host device(s) by running more containers and better sharing the underlying resource. They have a lighter footprint, containing only the minimum dependencies that they require and they can share the host resources more effectively.

A Docker image can be as small as a few hundred megabytes and can start in a matter of a few seconds or even fractions of a second. This makes them great for scaling since extra containers can be started very rapidly in response to scaling triggers such as a traffic increase or growing queue. With traditional VM scaling you might have a few minutes wait before the extra capacity comes online, by which time the load peak could have caused some issues already.

Key Concepts

As this is an introduction post I wanted to summarise some of the core components and terms that you will need to know when beginning to work with Docker.

Image

A Docker image can be considered a unit of deployment. Images are defined by dockerfiles and once built are immutable. To customise an image further you can use it as the base image within your next dockerfile. Typically you store built images in a container registry which then makes them available for people to reference and run.

Container

A container is just a running instance of a Docker image. You start an image using the Docker run command and once started your host will have an instance of that image running.

Dockerfile

A dockerfile is how Docker images and the deployment of an application are described. It’s a basic file and you may only require a few lines to get started with your own image. Docker images are built up in layers. You choose a base image that contains the elements you need, and then copy in your own application on top. Microsoft provide a number of images for working with .NET Core applications. I’ll look into the ones we use in future posts.

The nice thing about using a simple text file to describe the images is that it’s easy to include the dockerfile in your repository under source control. We include various dockerfiles in our solutions that enable slightly different requirements.

Docker Compose

A Docker compose file is a basic way to orchestrate multiple images/containers. It uses the YAML format to specify one or more containers that make up a single system, or part of a system. Within this file you specify the images that need to be started, what they depend on, which ports they should expose on the host and so on. Using a single command you can build all of the images. With a second single command you can tell Docker to run all of the containers.

We use a docker compose file for our front end developers. We define the suite of back end components that need to be running for them to exercise and develop against the API services. They can quickly spin them up with a single docker-compose up command to get started.

Host

The host is the underlying OS on which you will run Docker. Docker will utilise shared OS kernel resources to run your containers. Until recently the host would always have been a Linux device, but Microsoft have now released Windows containers, so it’s possible to use a Windows device as a host for Windows based images. In our case we still wanted to support running the containers on Mac devices, so stayed with Linux based images. There are a couple of solutions to enable using the Linux images on Windows which I’ll go into more detail about in the future. In both cases, you essentially run Linux as a VM which is then your host.

Summary

This was a short post, intended to introduce Docker and some of the reasons that we started to use it for our latest project. ASP.NET Core and .NET Core lend themselves perfectly to cross platform development, and the use of Linux based Docker containers made sharing the back end components with our front end team a breeze. In the next posts I’ll go deeper into how we’ve structured our solutions and processes to enable the front end developers as well as showing building an example project.

Other Posts In This Series

Part 1 – This Post
Part 2 – Working with Docker files
Part 3 – Why we started using Docker with ASP.NET Core
Part 4 – Working with docker-compose and multiple ASP.NET Core microservices
Part 5 – Exploring ASP.NET Runtime Docker Images