Speaking at Progressive .NET Tutorials 2017

Back in early May I was listening to a .NET Rocks episode when Carl and Richard mentioned a conference called Progressive .NET Tutorials that they would be attending in London. I’m a big believer in attending conferences, as I find they’re a great way to learn new skills and network with like-minded people. I headed over to the Progressive .NET website to check out the details of the event.

The full line-up was not complete when I made that first visit, but the keynote from Jon Galloway had me sold already. Then a section lower down the page caught my eye, and there began the events leading to me speaking at my first conference! I’d noticed the call-for-papers submission form and, having already been thinking about opportunities to do more technical speaking, I decided to submit a talk; in fact I ended up submitting two potential talks. After submitting, I had to wait until the closing deadline passed and the organisers began selecting speakers. Something that is great about Progressive .NET is that they were welcoming to first-time speakers as well as experienced ones.

About 3 weeks ago I received an email from Nicole at Skills Matter and was delighted to read that one of my talks had been selected for inclusion in the 2017 programme. I’ll be presenting a talk entitled Docker for .NET Developers. In this talk I’ll be sharing with the audience the journey that our team at Madgex has been on, using Docker as we develop a new product. This has been our first experience with Docker and along the way we’ve learned a lot. We are now using Docker not only to simplify the developer workflow, but also for our build and deployment process. We are leveraging Amazon EC2 Container Service (ECS) to host our production system, which is built from .NET Core based microservices.

I’m really excited about the content I’m building for this talk. I’ll be sharing the things we’ve learned about Docker in our experience so far. I’ll take a look at what Docker is, what problems it has helped us solve and how we implemented it in our developer and deployment workflows. Along the way we’ll look at some demos of what this looks like in code, exploring dockerfiles, docker-compose and our build and deployment process. I hope this talk will be useful to developers looking to begin using Docker in their own projects. We’ll start with the basics, but by the end of the session I hope to show an example of how we do a build and live deployment into AWS with zero downtime. It’s an exciting time to be a .NET developer; the release of the cross-platform .NET Core framework has opened the door to this new application platform, and I’m keen to see where it takes us over the next 12 months as we see the release of .NET Core 2.0. Architectures and strategies around microservices and serverless are also creating some interesting new ways for developers to think about building applications.

Over the years I’ve learned a lot from the .NET community, reading blog posts, listening to podcasts and watching videos. I have a passion for learning new things and investigating new techniques. Watching talks from many great speakers has influenced my personal growth, shown me new ideas and helped me develop my technical skills. I’m looking forward to the chance to begin participating more directly by talking about some of the things we’ve learned about Docker and Amazon ECS. I will be giving it my all to present a fast-paced, information-packed talk about Docker and hopefully help other developers begin using it. I’m extremely passionate about what I do and the chance to share that with an audience of peers at this event is very exciting.

I’m extremely honoured and privileged to be joining such an amazing line-up of world-class experts at this event. Many of the speakers are people whom I follow and who share great content in their areas of expertise. It’ll be exciting to attend some of their sessions and learn more about all of the other great topics being discussed at this event.

I look forward to the opportunity to meet some fellow developers at the conference. If you’re free on September 13th – 15th then I do recommend that you take a look at the website and buy yourself a ticket. I hope to see some of you there, so please do come and say hi!


Docker for .NET Developers (Part 6) Using Docker for Build and Continuous Deployment

In part 3 I discussed one of the first motivations which led our team to begin using Docker. That motivation was focused on making the workflow for our front end developers quicker and simpler. In this post I want to explore a second motivation which led us to fully embrace Docker on our project. Just like part 3, this post doesn’t have any code samples; sorry! Instead I want to share the thought process and concepts from the next phase of our journey without getting too technical. I believe this will give better context for the next parts in this series.

Why deploy with Docker?

Once we had Docker in place and we were reaping the benefits locally, we started to think about the options we might have to use Docker further along our development lifecycle, specifically for build and deployment.

Our current job board platform follows continuous delivery. Every time a developer checks in, the new code is picked up by the build system and a build is triggered. After a few minutes the code will have been built and all tests run. This helps to validate that the changes have not broken any existing functionality.

Deployments are then managed via Octopus Deploy, which takes the built code and deploys it onto the various environments we have. Code is deployed onto staging, and within that environment our developers have a chance to do some final checks that the new functionality is working as expected. Our testing team have the opportunity to run regression testing against the site to validate that no existing functionality has been broken. Once the testing is complete, the code is promoted for deployment onto our production environment. This is a manual, gated step which prevents code from being released without one or more developers validating it first.

That flow looks like this:

Existing Continuous Delivery Flow

With our new project we agreed that ideally we wanted to get to a continuous deployment flow, where code is checked in, tested and deployed straight to live. That sounds risky, I know, and it was something we weighed up carefully. A requirement of this approach is that we can fail fast and rapidly deploy a fix, or even switch back to a prior version should the situation require it (we can get a fix to live in around five minutes). By building smaller, discrete microservices we knew we would be reducing the complexity of each part of the system and could more easily test them. We are still working out some additional checks and controls that we expect to implement to further help prevent errors slipping out to live.

At the moment this involves many unit tests and some integration tests within the solutions, using the TestHost and TestServer components which are part of ASP.NET Core. I’m starting to think about how we could leverage Docker in our build pipeline to layer in additional integration testing across a larger part of the system. In principle we could spin up part of the system automatically and then call endpoints to validate that we get the expected responses. This goes a step further than the current testing as it exercises a set of components working together, rather than in isolation.

One of the advantages that Docker provides is simplified and consistent deployments. With Docker, you create your images and these then become your unit of deployment. You can push your images to a container registry and then deploy an image to any Docker host you want. Because your image contains your application and all of its dependencies, you can be confident that once deployed, your application will behave in the same way as it did locally.
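
As a concrete sketch of what that looks like in practice (the image name and registry address here are illustrative, not our real ones), the whole deployment unit moves around with standard Docker commands:

docker build -t report-api:1.0.42 .
docker tag report-api:1.0.42 registry.example.com/report-api:1.0.42
docker push registry.example.com/report-api:1.0.42

# on any Docker host with access to the registry
docker pull registry.example.com/report-api:1.0.42
docker run -d -p 80:80 registry.example.com/report-api:1.0.42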

Also, by using Docker to run your applications and services, you no longer need to maintain dependencies on your production hosts. As long as the hosts are running Docker, there are no other dependencies to install. This also avoids conflicts arising between applications running on the same host system. In a traditional architecture, if a team wants to deploy another application on the same host, one requiring a newer version of a shared dependency, you may not be able to upgrade the host without introducing risk to the existing application.

Using Docker for Builds

In prior posts we’ve seen that we can use the aspnetcore-build image from Microsoft to build our source code into a final DLL. This opens the door to standardising the build process as well. We now use this flow for our builds, with our Jenkins build server being used purely to trigger builds inside Docker. This brings similar benefits to those I described for the production hosts. The build server does not need to have the .NET Core SDK installed and maintained. Instead, we just need Docker and can then use the appropriate build images to run our builds on top of all of the required dependencies. Using this approach we benefit from reliable repeatability. We don’t have to worry about an upgrade on the build server changing how a build behaves, and we can build applications targeting different ASP.NET Core versions by basing them on a build image that contains the correct SDK version.
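
To make that more concrete, here is a minimal sketch of a build that runs entirely inside the aspnetcore-build image (the tag and paths are illustrative, not our exact files):

FROM microsoft/aspnetcore-build:1.1.2
WORKDIR /src

# Copy the source into the image and restore/publish using the SDK baked into the base image
COPY . .
RUN dotnet restore
RUN dotnet publish -c Release -o /publish

The build server only needs to run docker build against a file like this; the SDK, Node.js and the rest of the build tooling all live inside the image.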

Some may ask what the difference is between Docker and Octopus, or Docker and Jenkins. They have overlapping concerns, but Docker allows us to combine the build and deployment processes using a single technology. In our system, Jenkins triggers builds inside Docker images and we then ship the built image up to a private container registry (we use Amazon ECR, which I’ll look at soon).

Octopus is a deployment tool: it expects to take built components and then handles shipping them, and any required configuration, onto deployment targets. With Docker, we ship the complete application, including dependencies and configuration, inside the immutable Docker image. These images can be pulled and re-used on any host as required.

Why Jenkins?

In our case there was no particular driver to use Jenkins. We already had access to Jenkins running on a Linux VM within our internal network and saw no reason to try out a new build server. We asked our systems team to install Docker and we then had everything we needed to use this box to trigger builds. In future posts I’ll demonstrate our build scripts and process. I’m sure that most of the steps will translate to many other common build systems.

Hosting with AWS

A final decision that we had to make was around how we would host Docker in production. At the time our project began we were already completing a migration of all of our services into AWS. As a result, it was clear that our final solution would be AWS based. We had a look at the options and found that AWS offers a managed container service called Amazon ECS.

The options for orchestrating Docker are a little daunting, and at this time I haven’t personally explored alternatives such as DC/OS or Kubernetes. Like Amazon ECS, they are container orchestrators that schedule containers to run and maintain the required state of the system. They include features like service discovery to allow us to address and access the services we need. Amazon ECS is a managed service that abstracts away some of the complexity of setting these systems up and managing them. However, this abstraction comes at the cost of some flexibility.

With AWS ECS we can define tasks to represent the components of our system and then create services which maintain a desired count of containers running those tasks. Our production system is now running on ECS and the various components are able to scale in response to triggers such as queue length, CPU load and request volume. In future posts I’ll dive into the details of how we’ve set up ECS. We have now created a zero-downtime deployment process which takes advantage of ECS features to start new versions of containers and switch the load over once they are ready to handle requests.
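
As a rough illustration of the task concept (the names, memory size and registry URL here are illustrative, not our real configuration), a minimal ECS task definition looks something like this:

{
  "family": "report-api",
  "containerDefinitions": [
    {
      "name": "report-api",
      "image": "123456789012.dkr.ecr.eu-west-1.amazonaws.com/report-api:42",
      "memory": 256,
      "essential": true,
      "portMappings": [
        { "containerPort": 80 }
      ]
    }
  ]
}

An ECS service then keeps a desired number of copies of that task running, typically behind a load balancer.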

Our current Docker based deployment flow looks like this:

Flow of a docker build

Developers commit into git locally and push to our internally hosted git server. Jenkins picks up changes to the repository using the GitHub hook and triggers a build. We use a script to define the steps Jenkins will run, and the resulting output is a Docker image. Jenkins pushes this image up to our private registry, which is running in AWS on the EC2 Container Registry (ECR). Finally, Jenkins triggers an update on the Amazon ECS service to start new containers running the updated image. Once those containers have started successfully and are passing the Application Load Balancer health checks, connections to the prior version of the containers are drained and those containers are stopped and removed. We’ll explore the individual elements of this flow in greater depth in later blog posts.
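
To give a feel for the shape of that flow, here is a hedged sketch of a Jenkins build step (region, repository, cluster and service names are illustrative, and the ECR login command differs between AWS CLI versions):

#!/bin/bash
set -e

# Build the image using the dockerfile in the repository; BUILD_NUMBER is provided by Jenkins
docker build -t report-api:$BUILD_NUMBER .

# Authenticate Docker against ECR (newer AWS CLI versions use 'aws ecr get-login-password' instead)
eval $(aws ecr get-login --region eu-west-1)

# Tag and push the image to the private registry
docker tag report-api:$BUILD_NUMBER 123456789012.dkr.ecr.eu-west-1.amazonaws.com/report-api:$BUILD_NUMBER
docker push 123456789012.dkr.ecr.eu-west-1.amazonaws.com/report-api:$BUILD_NUMBER

# Register a new task definition revision referencing the tag we just pushed
# (taskdef.json would be templated with the new image tag by the build script)
aws ecs register-task-definition --cli-input-json file://taskdef.json

# Point the service at the latest revision; ECS starts new containers and drains the old ones
aws ecs update-service --cluster production --service report-api --task-definition report-api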

Summary

In this post we have looked at a secondary motivation for using Docker in our latest project. We explored at a high level the deployment flow and looked at some of the practical advantages we can realise by using Docker images through the entire development and deployment lifecycle. We are still refining our approach as we learn more, but we found it fairly simple to get up to speed using Jenkins as a build system via Docker. In the next set of posts I’ll dive into how we’ve set up that build process, looking at the scripts we use and the optimised images we generate to help improve container start-up time and reduce the size of the images.

Part 1 – Docker for .NET Developers Introduction
Part 2 – Working with Docker files
Part 3 – Why we started using Docker with ASP.NET Core
Part 4 – Working with docker-compose and multiple ASP.NET Core Microservices
Part 5 – Exploring ASP.NET Runtime Docker Images
Part 6 – This post
Part 7 – Setting up Amazon EC2 Container Registry


Docker for .NET Developers (Part 5) Exploring ASP.NET Runtime Docker Images

So far in previous posts we’ve been looking at basic demo dockerfiles which use the aspnetcore-build base image. This is okay for testing but does present some issues for actual deployment.

One disadvantage of the build image is its size. Since it contains all of the elements needed to build .NET Core applications it is fairly bloated and not something we would want to be using as a unit of deployment. It contains things like the full .NET Core SDK (which itself includes MSBuild), Node.js, Grunt, Gulp and a package cache for the pre-restored .NET packages. In all, this accounts for an image of around 1.2GB in size. You have to consider the network traffic that pushing around such large Docker images will introduce. If you use an external container registry (we’ll talk about those in a later post) such as Docker Hub, you will have to ship up the full size of the large SDK based image each time something changes.

Dissecting the aspnetcore-build Image

While it’s not really necessary to know the intricate details of how the aspnetcore-build image is composed, I thought it would be interesting to look a little at how it’s put together. As I’ve described previously, Docker images are layered. Each layer generally adds one thing, or a set of related things, into an image. Layers are immutable, but you can build on previous layers, adding your own on top. This is how you get your application into an image.

The ASP.NET Core build image is built up from a number of layers.

Layer 1

Starting from the bottom there is an official Docker image called scratch which is an empty base image.

Layer 2

The next layer is the Debian Linux OS. The .NET Core images are based on Debian 8 which is also known as Jessie. The image is named debian and also tagged with jessie. You can find the source files here.

Its dockerfile is pretty basic.
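
Paraphrased from the official source of the time, it is just three instructions (the tarball name is the one described below):

FROM scratch
ADD rootfs.tar.xz /
CMD ["bash"]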

It starts with the scratch base image and then uses the ADD statement to bring in the tarball containing the Debian root file system. One important thing to highlight here is the use of ADD and not COPY. Previously in my samples we used COPY in our dockerfile to copy contents from the source directory into a destination directory inside the image. ADD is similar, but in this case it does one important extra thing: it will decompress known tar archives. Since rootfs.tar.xz is a recognised tar format, its contents are decompressed into the specified directory, extracting the core Debian file system. I downloaded this file and it’s 117MB in size.

The final line, CMD ["bash"], provides a default command that will run when the container first executes. In this case it runs the bash command. CMD is different from RUN in that it does not execute at build time, only at runtime.

Layer 3

The next layer is buildpack-deps:jessie-curl – Source files are here.
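
A paraphrase of that dockerfile, based on the description below (the official file differs only in formatting):

FROM debian:jessie
RUN apt-get update && apt-get install -y --no-install-recommends \
        ca-certificates \
        curl \
        wget \
    && rm -rf /var/lib/apt/lists/*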

On top of the base image this RUNs three commands. You’ll notice each command is joined with an &&. Each RUN statement in a dockerfile results in a new intermediate image during the build. To avoid this in cases where we are doing related work, the commands can be strung together under a single RUN statement. This particular set of commands is a pretty common pattern and uses apt-get, a command-line tool for working with application packages in Debian.

The structure it follows is listed in the Docker best practices as a way to ensure the latest packages are retrieved. apt-get update simply updates the package lists to pick up new packages and available upgrades to existing packages. This technique is known as “cache busting”. It then installs three packages using apt-get install.

I had to Google a bit, but ca-certificates installs common certificate authority certificates, based on those that ship with Mozilla, which allow SSL applications to verify the authenticity of SSL connections. It then installs curl, a command-line tool for transferring data using URL syntax. Finally, wget is a network utility used to retrieve files from the web over HTTP(S) and FTP.

The backslashes are another common convention in production dockerfiles. The backslash is a line-continuation character that allows a single instruction to be split over multiple lines. It’s used to improve readability, and the pattern here puts each new package onto its own line so it’s easier to see the individual packages that will end up being installed. The apt-get command allows multiple packages to be specified, separated by spaces.

The final command removes anything in the /var/lib/apt/lists/ directory. This is where the updated package lists that were pulled down by apt-get update are stored. This is another good example of best practice: ensuring that no files remain in the image that are not needed at runtime helps keep the image size down.

Layer 4

The next layer is buildpack-deps:jessie-scm – Source files are also found here.
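
A paraphrase of that dockerfile (the exact package list is from memory and may not be complete):

FROM buildpack-deps:jessie-curl
RUN apt-get update && apt-get install -y --no-install-recommends \
        bzr \
        git \
        mercurial \
        openssh-client \
        subversion \
        procps \
    && rm -rf /var/lib/apt/lists/*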

This layer uses a similar pattern to the layer before it to install some packages via apt-get. Most of these are packages for common distributed version control applications such as git. openssh-client installs a secure shell (SSH) client for secure access to remote machines, and the procps package provides utilities for inspecting running processes, such as ps and top.

Layer 5

The next layer is the microsoft/dotnet layer which will include the .NET Core SDK bits. The exact image will depend on which tag you choose since there are many tagged versions for the different SDK versions. They only really differ in that they install the correct version for your requirements. I’ll look at the 1.1.2-sdk tagged image. You can find the source here.
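Condensed and paraphrased (the dependency list, version number and download URL here are illustrative rather than exact), it looks something like this:

FROM buildpack-deps:jessie-scm

# Install the native dependencies required by the .NET Core CLI
RUN apt-get update \
    && apt-get install -y --no-install-recommends \
        libc6 libcurl3 libgcc1 libgssapi-krb5-2 libicu52 liblttng-ust0 libssl1.0.0 libstdc++6 libunwind8 libuuid1 zlib1g \
    && rm -rf /var/lib/apt/lists/*

# Download and install the .NET Core SDK, then link the dotnet binary onto the PATH
ENV DOTNET_SDK_VERSION 1.0.4
RUN curl -SL --output dotnet.tar.gz https://dotnetcli.blob.core.windows.net/dotnet/Sdk/$DOTNET_SDK_VERSION/dotnet-dev-debian-x64.$DOTNET_SDK_VERSION.tar.gz \
    && mkdir -p /usr/share/dotnet \
    && tar -zxf dotnet.tar.gz -C /usr/share/dotnet \
    && rm dotnet.tar.gz \
    && ln -s /usr/share/dotnet/dotnet /usr/bin/dotnet

# Populate the local NuGet package cache by creating and discarding a throwaway project
ENV NUGET_XMLDOC_MODE skip
RUN mkdir warmup \
    && cd warmup \
    && dotnet new console \
    && cd .. \
    && rm -rf warmup /tmp/NuGetScratch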

This dockerfile has a few comments and those explain the high level steps. I won’t dive into everything as it’s reasonably clear what’s happening. First the .NET CLI dependencies are installed via apt-get. Again this uses the pattern we’ve seen earlier.

Next the .NET Core SDK is downloaded as a tar.gz archive. This is extracted and the tar file then removed. Finally it uses the Linux ln command to create a symbolic link from /usr/bin/dotnet to the dotnet binary at /usr/share/dotnet/dotnet.

The final section populates the local NuGet package cache by creating a new dotnet project and then removing it along with any scratch files which aren’t needed.

Layer 6

The final layer before you start adding your own application is the microsoft/aspnetcore-build image layer. Again, there are variances based on the different SDK versions. The latest 1.1.2 image source can be found here.
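
A heavily condensed paraphrase of that dockerfile (the Node.js installation and several environment variables are omitted here, and the warmup details are illustrative):

FROM microsoft/dotnet:1.1.2-sdk

# Default the ASP.NET Core server URL so apps listen on port 80 inside the container
ENV ASPNETCORE_URLS http://+:80

# (the real image installs Node.js, npm and related front end tooling here)

# Warm up the NuGet package cache using a project that references the common ASP.NET Core packages
COPY packagescache.csproj /tmp/warmup/
RUN dotnet restore /tmp/warmup/packagescache.csproj \
    && rm -rf /tmp/warmup

WORKDIR /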

First it sets some default environment variables. You’ll see, for example, that it sets ENV ASPNETCORE_URLS http://+:80, so unless you override the value using the WebHostBuilder.UseUrls extension, your ASP.NET Core application will listen on port 80 inside the container.

The next two steps do some funky node setup which I won’t dive into.

Next it warms up the NuGet package cache. This time it uses a packagescache.csproj which, if you take a look, simply includes package references to all of the main ASP.NET Core related packages. It then calls dotnet restore which will download the packages into the package cache. It cleans up the warmup folder after this.

Finally it sets the working directory to the root path so that the image is clean to start building on in the next layer which will include your application.

Runtime Images

Given the size of the build images, and the fact that there’s no need to include the files used to build your application when you deploy it, it is much better practice to reduce the contents of your final image to make it as small as possible. It’s also important to optimise it for rapid start-up times. That’s exactly what the aspnetcore image is for. This image only contains the minimal .NET Core runtime and so results in a much smaller base image, 316MB in size. It’s about one quarter of the size of the build image! Because it doesn’t include the SDK, it cannot run commands such as dotnet build and dotnet restore; it can only bootstrap compiled .NET Core assemblies.

Dissecting the microsoft/aspnetcore image

As we did with the build image, we’ll take a look at the differences in the runtime image.

Layers 1 and 2

The first two layers that make up the final aspnetcore image are the same as with the build image. After the base debian:jessie layer, though, things differ.

Layer 3

This layer is named microsoft/dotnet and is tagged for the different runtime versions. I’ll look at the 1.1-runtime-deps tagged image which can be found here.

The dockerfile for this layer is essentially a single package-install step.
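
Paraphrased (the exact package list varies between runtime versions), it looks like this:

FROM debian:jessie

RUN apt-get update \
    && apt-get install -y --no-install-recommends \
        ca-certificates \
        libc6 \
        libcurl3 \
        libgcc1 \
        libgssapi-krb5-2 \
        libicu52 \
        liblttng-ust0 \
        libssl1.0.0 \
        libstdc++6 \
        libunwind8 \
        libuuid1 \
        zlib1g \
    && rm -rf /var/lib/apt/lists/*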

This installs the certificate authorities, since we no longer get those from the buildpack-deps:jessie-curl image, which is not part of this image’s ancestry. It then also installs the common .NET Core dependencies.

Layer 4

This layer is named microsoft/dotnet and tagged 1.1.2-runtime which can be found here.

This image installs curl and then uses that to download the dotnet runtime binaries. These are extracted and the tar file removed.

Layer 5

The final layer before your application files, this layer is named microsoft/aspnetcore and tagged with 1.1.2 for the latest 1.1.x version. It can be found here.

Starting with the dotnet runtime image, this sets the URL environment variable and populates the NuGet package cache as we saw with the build image. As explained in the documentation, it also includes a set of native images for all of the ASP.NET Core libraries. These are intended to speed up container start-up, since they are precompiled native images and don’t need to be JIT compiled.

Using the Runtime Image

The intended workflow for .NET Core based ASP.NET Docker images is to create a final image that contains your pre-built files, and specifically only the files explicitly required by the application at runtime. This will generally be the DLLs for your application and any dependencies.

There are a couple of strategies to achieve these smaller images. For this blog post I’m going to concentrate on a manual process we can follow locally to create a runtime-only image with our built application. It’s very likely that this is not how you’ll end up producing these images for real projects and real deployment scenarios, but I think it’s useful to see this approach first. In later blog posts we’ll expand on this and explore a couple of strategies to use Docker containers to build our code.

I’ve included a simple demo application that you can use to follow along with this post. It contains a single ASP.NET Core API project and includes a dockerfile which will define an image based on the lightweight aspnetcore image. If you want to follow along you can get the code from GitHub. Let’s look at the contents of the dockerfile.
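
Reconstructed from the build output shown later in this post (the exact ENTRYPOINT form is my best guess), it contains just four instructions:

FROM microsoft/aspnetcore:1.1
WORKDIR /app
COPY ./publish .
ENTRYPOINT ["dotnet", "DockerDotNetDevsSample3.dll"]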

Much of this looks very similar to the dockerfiles we’ve looked at in my previous posts, but with some key differences. The main one is that this dockerfile defines an image based on the aspnetcore image and not the larger aspnetcore-build image.

You’ll then notice that this dockerfile expects to copy in files from a publish folder. In order for this file to work, we will first need to publish our application to that location. To publish the solution I’m going to use the command line to run dotnet restore and then use the following command:

dotnet publish -c Release -o ../../publish

The output from running this command looks like this:

Microsoft (R) Build Engine version 15.3.117.23532
Copyright (C) Microsoft Corporation. All rights reserved.

DockerDotNetDevsSample3 -> E:\Software Development\Projects\DockerDotNetDevsSample3\src\DockerDotNetDevsSample3\bin\Release\netcoreapp1.1\DockerDotNetDevsSample3.dll
DockerDotNetDevsSample3 -> E:\Software Development\Projects\DockerDotNetDevsSample3\publish\

This command uses the .NET SDK to trigger a build of the application and then publishes the required files into the publish folder. In my case this produces the publish output and copies it to a folder named publish in the root of my solution, the same location as my dockerfile. I do this by passing in the path for the published output using -o. We also set it to publish in release mode using the -c switch to set the configuration. We’ll pretend we’d use this image for a deployment somewhere to production, so release makes sense.

Now that we have some files in our publish folder, the main one being the DLL for our application, we will be able to use those files inside our image.

Back to the dockerfile: after copying all of the published files into the container, you’ll notice that we no longer need to run the dotnet restore and dotnet build commands. In fact, trying to do so would fail since the base image does not include the SDK, so these commands would not be recognised. We already have our restored and built files, which we copied into the image.

The final difference you will see is that the entrypoint for this image is a bit different. In our earlier examples we used dotnet run in the working directory containing our csproj file. Again, this relied on the SDK which we don’t have. This dockerfile instead runs the dotnet host directly against DockerDotNetDevsSample3.dll. The dotnet host will bootstrap the runtime and call into the main method of our application.

Let’s build an image from this dockerfile and take a look at what happens.

docker build -t dockerdemo3 .

The output looks like this:

Sending build context to Docker daemon 33.14 MB
Step 1/4 : FROM microsoft/aspnetcore:1.1
---> 3b1cb606ea82
Step 2/4 : WORKDIR /app
---> c20f4b67da95
Removing intermediate container 2a2cf55d8c10
Step 3/4 : COPY ./publish .
---> 23f83ca25308
Removing intermediate container cdf2a0a1c6c6
Step 4/4 : ENTRYPOINT dotnet DockerDotNetDevsSample3.dll
---> Running in 1783718c0ea2
---> 989d5b6eae63
Removing intermediate container 1783718c0ea2
Successfully built 989d5b6eae63
SECURITY WARNING: You are building a Docker image from Windows against a non-Windows Docker host. All files and directories added to build context will have '-rwxr-xr-x' permissions. It is recommended to double check and reset permissions for sensitive files and directories.

As you can see, the docker build is very quick this time since it is not running the restore and .NET build steps. It’s grabbing the pre-built files and setting up the entry point.

I can now run this image using:

docker run -d -p 8080:80 dockerdemo3

Navigating to http://localhost:8080/api/values we should see the data from the API.

Summary

In this post we’ve looked in some detail at the layers that make up the aspnetcore-build image and compared them to the layers in the aspnetcore image, which includes just the .NET Core runtime. We’ve then seen how we can generate a small runtime-based image for our own application, which will be much smaller and therefore better to store in a remote registry and quicker to pull down. In future posts we’ll look at other methods that allow us to use Docker for the build and publish steps within a build system, as well as at some other things we can do to minimise the size of our image layers.

Other Posts In This Series

Part 1 – Docker for .NET Developers Introduction
Part 2 – Working with Docker files
Part 3 – Why we started using Docker with ASP.NET Core
Part 4 – Working with docker-compose and multiple ASP.NET Core Microservices
Part 5 – This post
Part 6 – Using Docker for Build and Continuous Deployment
Part 7 – Setting up Amazon EC2 Container Registry


Docker for .NET Developers (Part 4) Working with docker-compose and multiple ASP.NET Core microservices

In Part 3 we looked at one of the motivations behind using Docker with ASP.NET Core: enabling simpler developer processes. In this post we’ll look at using docker-compose files to define and run multi-container systems. I will take a look at how we can build a sample application containing two services that are spun up at the same time with a single command.

Working with docker-compose

This will be a simplified example of what we are doing with our front end teams on our project. To keep it easy to work with, I’ll include everything in the same solution and the focus will be mostly on the docker-compose format and how using docker-compose can simplify starting and managing multi-container systems. If you want to try out the code for yourself, I have uploaded the source from this post to GitHub.

This solution contains two ASP.NET Core API projects: one simulating a back end API service and another simulating a front end API service. This is a simplified view of what we have in our real system. Our front end developers need to call the front end API service in order to gather some data to expose on the UI. In this example we can pretend that the back end API is providing some data to the front end API, which enriches and reformats it in order to produce the final output.

In our real system, we have each service as a separate repository and Visual Studio solution. We have a specialised query API which interacts with ElasticSearch and provides this data for consumption by multiple services. One of those services is our report API, which translates the returned data into a model suitable for use by the front end. In our first iteration of the development flow we had our front end developers pulling the latest source for each of the API systems, but we are currently reworking that to produce a more optimal flow. However, it’s still very useful to know how to use docker-compose to start up and maintain multiple related Docker containers.

If you take a look in each of the projects you’ll see I’ve included a dockerfile in the root. These dockerfiles are very much like the one I demonstrated in detail in part 2 of this series. They use the aspnetcore-build Docker image to build and then run our application code. Again, this is not exactly how you will want to end up building your images for deployment in the real-world, but it works for this example.

The docker-compose file structure

If you take a look in the root of the solution, I’ve included a docker-compose.yml file.

Docker Compose, as described in the Docker documentation, is a command-line tool which allows us to define and run multi-container Docker applications. It uses a YAML-based file in which you can define one or more services. Each service will be started as a container, and using this file you can provide a practical way for someone to start up a set of related containers, with defined dependencies between each component.

If we take a look at the docker-compose file in my sample it looks like this:
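
(What follows is my reconstruction based on the description below; the project paths, the published port and the environment variable name are illustrative rather than the exact values from the sample.)

version: '3.2'

services:
  backend-service:
    build:
      context: ./src/BackendService
    container_name: backend-service

  frontend-service:
    build:
      context: ./src/FrontendService
    container_name: frontend-service
    ports:
      - "8080:80"
    environment:
      - BackendService__Url=http://backend-service:80
    depends_on:
      - backend-service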

First we specify a version for the docker-compose file. I’m using a fairly recent 3.2 version. This simply defines the features available in the docker-compose file.

It then has a services node. It’s in here that we will create a definition for each service in our system. The first one I define is a backend-service. The build element allows me to define the context. This is the path to the directory where the dockerfile resides for that service. In my case I navigate to the path of the back end project, which is where I located the dockerfile for that API.

In this docker-compose file I provide an explicit name for this container that will be used when starting it up. That’s all there is for the back end service.

The next service I define is the front end service. This looks similar to the back end one, but includes some extra properties. The first is the ports option. This allows me to expose ports from the container out to the host which runs it. As this is the front end API and we expect to be able to call it via the host, we must provide a publicly exposed port on the host machine, otherwise we would not be able to access it. We define this with host port (the port to use on the host) and then the container port (the internal port the application runs on inside the container). These are separated with a colon.

You may be wondering why we didn’t do this on the back end service. The front end service needs to talk to it in order to retrieve data, so how is that working? This is where some magic happens. Docker provides a DNS system used to locate containers, and docker-compose sets up a Docker network for your defined services. All containers that are part of the docker-compose services definition are reachable by other containers on that network. This means that we don’t have to explicitly expose the back end port for the containers to communicate. As we’re not expecting external systems to call the back end API, it’s best to avoid exposing its port on the host. I can limit my exposure to only the single front end port that I need to expose.

The next configuration option I define is the environment section. This allows us to set environment variables inside the container that will be started using this docker-compose file. On my front end service I used the standard ASP.NET Core configuration code to define a setting for the path to the back end service. This is required as it will differ between development and production. ASP.NET Core configuration is layered, and although it will have loaded a value for the back end setting from the appsettings.json file, it can be overridden using an environment variable. This powerful configuration structure complements Docker very nicely, since we can always specify the required environment variables for each running container.

You’ll note I’ve used http://backend-service:80 for the value of the back end endpoint setting. How does that work? Remember that I mentioned that Docker has its own DNS and that docker-compose creates a network for the set of services. This allows me to address the other containers using the name I gave the service. Note that this isn’t the container name; it’s the key used for the service definition itself. The Docker DNS system will resolve this name between the containers. I use the port that the service runs on inside the container, which by default will be port 80 for an ASP.NET Core API.

The final new element I’ve included is the depends_on section. This allows me to define a parent/child dependency between the containers. Here I state that the front end service depends on the back end service, so it will be started after the back end is running. On our project we use this to start up a Postgres database as part of our chain of services. Our front end APIs depend on the database being started first in order to start up and seed their data. That’s all we need to do to define our basic docker-compose file.

docker-compose Commands

Now that we have our docker-compose file we can use it to start up our containers. We can do this by dropping into a command line at the path where our docker-compose file is defined. There are a couple of commands we can now use. I’ll start with:

docker-compose up -d

This will inspect the docker-compose file in the root of the path where the command is run. It will build and start up a container for each of the defined services. The -d argument specifies that the containers start in detached mode and run in the background. If you choose to leave this off you will see a combined streamed output from the console for each of the containers. If there are no images available for those containers, they will be built automatically.

This single command provides a really handy way to start up multi-container systems. While my example is quite simple, having only two services, our real world front end docker-compose file contains 4 API based services, one Postgres database container and sometimes even an ElasticSearch container. Each of those can be built and run manually using the individual Docker commands against their respective dockerfile, but it’s much nicer for our front end developers to use docker-compose and a single command.

I mentioned there were other commands we can use. While docker-compose up will build images for us if they do not already exist, once images exist, it will keep using those images. If we make changes to the source files and expect new images to be built we can use docker-compose build to explicitly build the new images. This will only build them, and will not start any containers. You can also do a build and up in a single command using docker-compose up --build. This will build new images and then start all of the containers.

I also want to draw attention to the --no-cache option as well. This forces Docker to ignore any cached image layers and recreate each layer again. I’ve found this useful when testing a docker-compose file change, to ensure all steps are working as expected.

Once containers are running you might wonder how we can stop them. You can use docker-compose stop to stop all of the containers defined in the docker-compose file. To cleanup you can use docker-compose rm to remove the stopped containers. There is also docker-compose down if you would like to stop and then remove the containers using a single command.

Summary

In this blog post I have described a real-world scenario that we encountered and which I hope demonstrates a benefit of using Docker during development. By including Docker in our developer flow we have removed the need for front end developers to use a Windows VM and to be responsible for building our .NET code manually in an IDE such as Visual Studio. Builds are quicker and there are very few dependencies required to get a new front end developer on-boarded to our project. We have no complex “set up your developer machine” documents containing specific versions of applications, registry settings and IIS configuration.

In particular we focused on how this is useful for microservice-based architectures, where a single working system is often built from many smaller services. We used docker-compose to define our set of services, and with a single command we’re able to start the components on a developer machine. Using this approach we can define and coordinate the elements making up the system so that front end developers can run the environment on their devices with ease.

Finally we explored the structure of a simple docker-compose file, looking at how we can define the services, set container environment variables and expose ports.

Other Posts In This Series

Part 1 – Docker for .NET Developers Introduction
Part 2 – Working with Docker files
Part 3 – Why we started using Docker with ASP.NET Core
Part 4 – This post
Part 5 – Exploring ASP.NET Runtime Docker Images
Part 6 – Using Docker for Build and Continuous Deployment
Part 7 – Setting up Amazon EC2 Container Registry


Docker for .NET Developers (Part 3) Why we started using Docker with ASP.NET Core

In the prior two posts in this series we covered the main Docker terminology and looked at creating a basic dockerfile to define a Docker image containing an ASP.NET Core API application. In this post I want to switch it up a bit and spend a little time sharing a specific practical reason that led our team to start using Docker in our development flow. This was our first step in the Docker journey that we now find ourselves on, and it expands on the summary I introduced in part 1.

Why we started using Docker

Warning! This post has no code examples. Instead what I’ll be describing is a real-world situation that led us to start using Docker for our new project. I want to highlight why Docker was a good choice given our requirements, and hopefully you’ll find cases in your own work where it may be relevant to consider Docker yourself. I think it’s worth sharing this story: while there’s a lot of excitement about Docker and some clear benefits, it’s useful to reflect on a practical advantage it brought us with very little effort. This is simply the start of the journey and I will be expanding on how our use evolved as the series continues. I hope you find this part useful, but don’t worry, I won’t be offended if you want to skip onto the docker-compose examples in part 4! See, I even provided a link!

As I explained in the first post, our new greenfield project at work kicked off in 2016. We needed to build a system to provide data analytics over a very large set of event-based data. I won’t cover the architecture of the back end components at this stage. For this post it’s the front end where I want to demonstrate the first benefit we gained from using Docker.

At work, our front end and back end teams are split and work on their areas of expertise accordingly. On our current platforms, we have an in-house MVC framework which uses a custom templating language for the front end pages. When our front end developers need to work on the UI, they must pull down and update the main platform code on their devices in order to spin up the site locally. Given the size of the platform, the changes they need to pull can be substantial. In order to work with the site, they must compile the code and run it locally, which, as this platform is built on the full .NET Framework, requires Windows and IIS. Therefore each developer, most of whom use a Mac, must have a Windows VM installed, or access to a remote VM, in order to work with the code.

Over time this flow has evolved to make the process as efficient as possible but it’s still not an ideal solution. With this new project we had the opportunity to try a new more modern approach. Given that the UI for this analytics style application was very interactive, we decided that it made sense to develop a SPA style front end. This had the benefit that it enabled the front end developers to choose a technology that suited them. After a bit of research they landed on Vue.js.

In order to support the front end SPA, we agreed that we would provide a number of REST API services, each providing a specific, bounded piece of functionality. We could have chosen to build one larger, all-encompassing API application, but decided that the smaller APIs made scaling each component much easier. Each component can scale based on its own unique load, allowing us to make better use of the server’s resources. It also means we can keep the code separated and hopefully easier to maintain. We can change each component independently and work more rapidly.

We ended up with a design that included 3 web-facing API services, each backed by its own database. We have a user API which handles authentication and authorisation for the application, as well as other user management services. We have a report API that enables the creation and running of reports, and we have a schedule API used for creating scheduled reports and managing downloads. Finally, we have a back end API which is responsible for querying ElasticSearch, processing the data and returning the results. This query API is used by both the reporting API and our offline back end scheduled report service.

Here is a diagram showing the main components of our front end services architecture:

Docker microservice architecture

At the time we were creating our architecture, ASP.NET Core was in RC, and we took a brave decision, which proved a good one in the long run, to use the new framework to develop our APIs. Not only were we interested in the performance and framework improvements, but we also knew we could begin to take advantage of the cross-platform nature of .NET Core to look at different hosting options. Having recently started moving our systems into AWS, we were initially considering the possibility of using Linux VMs to host the APIs.

As we pursued this path, another benefit of this decision came to light. Now that we were cross-platform and planning to target Linux for hosting, we realised we could start to look at Docker and containerisation as part of our workflow. This opened up some new possibilities, and after talking through the concept we realised that we could make development for the front end team much easier by providing Docker images to run the API code. The big change this presented is that they would no longer need to run Windows in order to build the UI for the platform, nor would they need Visual Studio in order to build the source code.

A downside of splitting things into smaller microservices is that in order for a system to function, you often need multiple things running together at the same time. In our case, each of the 4 main API services needed to be running for the front end to be able to fully function. Each of those also needed a database server (we had chosen Postgres) to support its data storage requirements. Spinning up all of these parts needed a little coordination.

When we started looking at Docker we realised that we could solve this problem using docker-compose files. Docker-compose provides a simple way to start multiple containers with a single command. In our case, each service is a separate Visual Studio solution and repository. This allows us to develop, build and deploy each part of the system individually. As long as the public facing API endpoints do not change, we can update the internals of each API with no impact on the other parts of the system. In each solution we include a dockerfile to describe a Docker image that will run the API service. These dockerfiles initially started very much like the example I showed in part 2 of this series. We have since gone on to optimise the files and I plan to explore that in a future post.

In addition to the dockerfiles we also provide a docker-compose file which, with a single command, can be used to build all of the required images. With a second command we can start all of the containers needed to support the front end. Our first iteration of the front end workflow relied on the front end developers pulling the latest source from each repository. They could then use the docker-compose build command which triggered image creation. Since the build is happening inside the Docker containers, they do not need to run Windows or even have the .NET SDK installed on their Macs. As part of the solution, we also utilised a public Docker image for Postgres as one of the services defined within our docker-compose file. This means that the front end team do not need to install Postgres on their own device either. Other than Docker, there are no dependencies to run the back end services required to develop the front end website. With these steps we had removed a barrier for the front end team and very much simplified our development process.

In the second iteration that we are now working on for the front end workflow we are providing the front end team with a new docker-compose file which pulls images from a private container registry running in AWS ECR. With this change there is no need for the front end developers to ever pull or update the source for the components they need. Instead, the compose file will pull the latest available images from the registry and have them up and running extremely quickly. We’re still investigating and testing how we want to finalise this part of our development workflow so it’s something I’ll share in a future post.

The same process proved really useful for QA and testing as well. Anyone involved with testing the system could pull the components down and run the full system in isolation on their machine. The front end website is also containerised, and in this case we even used an ElasticSearch Docker image to allow a full end-to-end system to be started and tested on a machine with few external dependencies. The dependencies we could not containerise were external services such as Amazon SQS and S3.

One by-product of leveraging Docker for the front end and back end developer flow is that we can start to eliminate the “it worked on my machine” arguments. By running everything inside Docker we get to a place where everyone is working in an identical, repeatable environment. Because our image contains all of the dependencies, we can be sure we have matching versions and configuration every time it is run. We don’t have to ask developers to maintain correct versions of dependencies on their devices either.

Other Posts In This Series

Part 1 – Docker for .NET Developers Introduction
Part 2 – Working with Docker files
Part 3 – This post
Part 4 – Working with docker-compose and multiple ASP.NET Core microservices
Part 5 – Exploring ASP.NET Runtime Docker Images
Part 6 – Using Docker for Build and Continuous Deployment
Part 7 – Setting up Amazon EC2 Container Registry