Docker for .NET Developers Header

Docker for .NET Developers (Part 7) Setting up Amazon EC2 Container Registry

In the previous part of this series I discussed the reasons behind our decision to use Docker to run our product in production, building and deploying images onto Amazon ECS. The next parts of this series will focus on how we achieved that and demonstrate how you can set up a similar pipeline for your own systems. We’ll look at how we run our builds inside Docker via Jenkins, and how we deploy those images into AWS.

I’ll dive into some AWS ECS topics in these upcoming posts, discussing what it is and what it does in more detail. For now I want to start with one related part of ECS called the Amazon EC2 Container Registry (ECR). The reason I wanted to start with this is that I will soon be demonstrating our build process and a key part of that is pushing our images up to a registry.

What is a registry?

A container registry is ultimately just a store for images. For .NET developers, it’s quite a similar concept to NuGet. In the same way NuGet hosts packages, a registry hosts Docker images. As with NuGet, you can use a private registry, share your image publicly via a public registry or even host a local registry on your machine (inside Docker, of course). Images inside a registry can be tagged so you can support a versioning system for your images. Microsoft use this technique for their aspnetcore images. While they share the same name, there are variants of the images tagged with different versions to indicate the version of the SDK or runtime they include.

The main reason we need a registry is to allow other people or systems to access and pull our images. They’d not be much use otherwise! For our system we started with a private registry running on Amazon ECR. The main reason we chose ECR was that we would also be using AWS ECS to host our production system. ECR is a managed container registry which allows storing, managing and deploying Docker images.

Creating an EC2 Container Registry via the Console

In this post I want to share the steps I used to set up a container repository in ECR. It’s quite a straightforward process and takes only a few minutes. If you want to follow along you can sign up for AWS and try out the steps. Currently ECR is free for the first 500MB of images stored in it, so there should be no cost in following along.

AWS ECR is a subset of the main ECS service, so it appears under Repositories in the ECS menu. A Docker registry is the service that stores images; a repository refers to a collection of Docker images sharing the same name. If you’ve not used the service before you can click Get Started to create a new repository.

AWS ECR Getting Started 1

You are guided through the process, following a setup wizard. The first thing you must provide is a name for the repository to help identify it. You can optionally include a namespace before the repository name which would allow you to have more control over the naming. For example, you may choose to have a dev/dockerdemo repository and a prod/dockerdemo repository.

AWS will generate a unique URI for the repository which will be used when tagging images you want to push to it and when pulling images from it. As per the screenshot, permissions will be set up for you as the owner. You can provide more granular access to a repository later on. For example, your build server will need to use an account with push permissions, but developers may only need pull permissions.
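To sketch how that URI gets used later, here is the tag-and-push shape (the account ID, region and repository name below are made up; substitute the URI shown for your own repository):

```shell
# Hypothetical values; replace these with the URI AWS generates for you
account_id=123456789012
region=eu-west-2
repo=dockerdemo

# The repository URI follows this pattern
remote="${account_id}.dkr.ecr.${region}.amazonaws.com/${repo}"
echo "$remote"

# Authenticate Docker against ECR, then tag and push (commented out here
# since these steps need real AWS credentials and a locally built image):
# $(aws ecr get-login --no-include-email --region "$region")
# docker tag dockerdemo:latest "${remote}:latest"
# docker push "${remote}:latest"
```

The aws ecr get-login command prints a docker login command with temporary credentials, which the `$( )` wrapper then executes.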

AWS ECR Getting Started 2

After clicking Next Step your repository will be created. At the next screen you will be shown sample commands you can use to deploy images into the repository. I won’t go into those for now since we’ll look at them properly when we explore our Jenkins build script.

 AWS ECR Getting Started 3

Continuing on, you are taken into the repository, which will currently be empty. Once you start pushing images to this repository you will see details of the images appear.

AWS ECR Getting Started 4

At this point you have an empty repository that can be used to store Docker images by pushing them to it. We’ll look at the process in future posts.

Using the CLI

While you can use the console to add repositories, you can also use the AWS CLI. You will first need to install the AWS CLI and configure it, providing a suitable access key and secret. Those will need to belong to a user with suitable permissions in AWS to create repositories. For my example I have a user assigned the AmazonEC2ContainerRegistryFullAccess policy.
We can now run the following AWS command to create a new repository called “test-from-cli”:

aws ecr create-repository --repository-name test-from-cli

If this succeeds you should see output similar to the following:

{
    "repository": {
        "registryId": "865288682694",
        "repositoryName": "test-from-cli",
        "repositoryArn": "arn:aws:ecr:eu-west-2:865288682694:repository/test-from-cli",
        "createdAt": 1499798501.0,
        "repositoryUri": "865288682694.dkr.ecr.eu-west-2.amazonaws.com/test-from-cli"
    }
}
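In a build script you will usually want just the repository URI rather than the whole response. A sketch of extracting it, using the CLI’s built-in --query (JMESPath) option, with the same extraction demonstrated against the example response above so it can be run without AWS credentials:

```shell
# With real credentials you could capture the URI directly:
# uri=$(aws ecr create-repository --repository-name test-from-cli \
#         --query 'repository.repositoryUri' --output text)

# Simulated here using the example response from above:
response='{"repository":{"repositoryUri":"865288682694.dkr.ecr.eu-west-2.amazonaws.com/test-from-cli"}}'
uri=$(echo "$response" | sed -n 's/.*"repositoryUri":"\([^"]*\)".*/\1/p')
echo "$uri"
```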

Summary

In this post we’ve explored how we can easily use the AWS console and CLI to prepare a repository inside the Amazon EC2 Container Registry. This will enable us to push up the images that we will later run within the container service to start up our services.

Part 1 – Docker for .NET Developers Introduction
Part 2 – Working with Docker files
Part 3 – Why we started using Docker with ASP.NET Core
Part 4 – Working with docker-compose and multiple ASP.NET Core Microservices
Part 5 – Exploring ASP.NET Runtime Docker Images
Part 6 – Using Docker for Build and Continuous Deployment
Part 7 – This post

Announcing .NET South East, a new Brighton-based .NET User Group

It’s been an exciting few weeks for me recently. First I was accepted to talk at two conferences in September, then our latest product at work went live, then I got a promotion and now I’ve decided to start a new .NET user group in Brighton called .NET South East.

Brighton based .NET South East user group logo

The idea of starting a meetup has been at the back of my mind for a little while now and after much consideration I decided that I should just go ahead and get on with it. I’ve set up a new group on meetup.com called .NET South East. I expect it will mostly be attended by developers living and working in Brighton, but I’m hoping that we can encourage people to join from anywhere around Sussex.

Announcing the First Meetup

I’m very excited to be able to announce that the first meetup will be held on August 22nd. At that event I’ll be talking about Docker for .NET Developers. In this talk I will take you on a tour of Docker, a modern application packaging and containerisation technology that .NET developers can now leverage. I will share with you the Docker journey that our team at Madgex are on, exploring our motivations for using Docker. You will learn the core terminology .NET developers need to know to begin working with Docker and explore demos that show how you can start using Docker with your own ASP.NET Core projects. Finally, I will demonstrate how we have built a deployment pipeline using Jenkins and explore the AWS EC2 Container Services (ECS) configuration we have created to enable rapid, continuous delivery of our microservices.

Elmah.io have kindly provided sponsorship for this event in the form of a 6 month business license for their software. We will be holding a raffle at the end of the event for one lucky attendee to win this fantastic prize.

Why a User Group?

User groups are a place where like-minded people can come together to enjoy a common interest, sharing and learning about that interest together. I’ve attended a few general developer user group sessions and watched many more online and I always leave having learned something or with a take-away I could follow up on later. Even if it’s just the seed of an idea or something I’d like to try, it has been well worth my time. Along with the content from the speakers, it’s also a good chance to mix in with other developers and make contacts, share thoughts and ideas. Perhaps you’ll meet someone who can help with a problem you’ve been fighting recently!

I started working in Brighton nearly two years ago and since then I’ve kept an eye out for groups and talks to attend. The only .NET specific group I’ve found locally is Brighton ALT.NET which meets once a month to have open discussion about any topics that the attendees vote to talk about. It’s a great format and there’s a nice variation of topics and opinions from the community there. I’ve attended on a couple of occasions and plan to get along to more of their monthly events.

Some may wonder why start a group if one already exists, and it’s a fair question. What I’m proposing to introduce takes a different format to that of ALT.NET. I’m looking to bring in speakers from around the area, as well as hopefully further afield, giving them the chance to share a topic in depth with the audience. In many cases I expect the talks to be conference length (45–60 minutes), although I’m sure we can accommodate shorter talks as well.

Recently I met up with Mike who organises the ALT.NET evenings to run the idea past him. I was conscious that he already has a good community of regular attendees and I didn’t want to upset the balance by trying to introduce this second group. Mike was very encouraging of the idea and agreed that he felt there was room for both groups to exist and thrive together, helping to strengthen the local .NET community.

I recently watched a very inspiring talk from Ian Cooper at NDC Oslo entitled The .NET Renaissance. In that talk Ian highlights the historical decline of C# and .NET, and he ended with a call to action to everyone in our community to help create a renaissance of .NET. Together, we can bring about that change. It’s a pivotal time for .NET developers, with the new .NET Core framework and Microsoft’s approach of embracing open source and community. Later this year, version 2.0 of .NET Core will be released and at that point porting over older .NET Framework projects should be even easier. I’m very much enjoying working with the new framework and sharing my experience on this blog, and soon at some meetups and conferences. I’m excited to play my small part in helping move the #dotnetrenaissance forward. Please join us!

What’s Next?

I’m still finding my feet as I establish this new group and start planning the events. I’m working on the logistics of the arrangements that need to be in place. My employer Madgex have very kindly agreed to allow me to use their meeting room space for the meetups. We have three meeting rooms that can be opened up into one large area, with A/V equipment and seating available. Perfect for our needs! Located close to the centre of Brighton, the Madgex office should be in easy reach of developers wanting to participate.

Madgex have also kindly provided funding to set up the meetup.com group so that I could start to gauge interest in starting a new group. Already I’ve had over 40 signups from people interested in the idea and I hope that many of those will be able to attend the meetups going forward.

Finding speakers was my main worry, but already I’ve been approached by a few people who have talks they can offer to present. I expect there are other potential speakers out there with content to share, but perhaps no outlet for it. If you’d like to come along and speak please do get in touch.

I’m still trying to decide on the best schedule for the meetups. Ideally I’d like to run them every month, about two weeks after the local ALT.NET meetup. To begin with I’m planning on every two months while we build up interest and I make arrangements with enough speakers who can present at the meetups. We’ll judge this on interest and the logistics of organising everything.

Call for Attendees

I’d love to get as many developers from our community involved in the meetups and attending regularly. I really believe that they will be a great chance to learn about topics that are necessary for .NET developers to thrive. Let’s get together and share our passion for what we do. I do urge you to save the date and RSVP on meetup.com. Please do spread the word with friends and colleagues who may want to attend.

Call for Speakers

I’d love to hear from you if you have a talk you want to present. It would be great to hear from the many local developers we have in Brighton, sharing what they do and teaching others about technologies they are using. If you’re further afield, but able to travel, we’d love to have you. I’d love to welcome first time speakers to join us as well. I’ve only just begun speaking myself and I’m finding it to be a great experience that is teaching me a lot along the way. I’ve never been a confident public speaker, but have found that by diving in, I’m able to deal with that fear and share my passion. Please do get in touch and I’ll help in any way I can.

Call for Sponsors

We already have two fantastic sponsors on board: Madgex are providing their meeting space for free and assisting with some of the costs to get the event up and running, and Elmah.io are providing a license as a prize for one attendee to win. If you’re a company in a position to offer prizes or sponsorship to our new group to help us get off the ground, please do get in touch.

Conclusion

I’m excited to get started to try to do my part to help build on the .NET community here in Brighton. I’m learning as I go and developing my own skills to organise the meetup and network with peers. I’d like to offer a huge thanks to those who have helped me so far. I’ve had great support from other event organisers (Dan Clarke, Joe Woodward, Dylan Beattie, Derek Comartin), community members via Twitter, Madgex and the staff there and elmah.io. Thanks to Mike from ALT.NET for his support and input and a special thanks to Ben Wood, a talented designer at Madgex who is kindly helping to develop a brand identity and digital assets for the new group.

Speaking at Progressive .NET Tutorials 2017

Back in early May I was listening to a .NET Rocks episode in which Carl and Richard mentioned a conference called Progressive .NET Tutorials that they would be attending in London. I’m a big believer in attending conferences as I find it to be a great way to learn new skills and network with similarly minded people. I headed over to the Progressive .NET website to check out the details about the event.

The full line-up was not complete when I made that first visit, but the keynote from Jon Galloway had me sold already. Then a section lower down the page caught my eye, and there began the events leading to me speaking at my first conference! I’d noticed the call for papers signup form and, having already been thinking about opportunities to do more technical speaking, I decided that I would submit a talk; in fact I ended up submitting two potential talks. After submitting, the wait began until the closing deadline passed and the organisers began selecting speakers. Something that is great about Progressive .NET is that they were welcoming to first-time speakers as well as experienced ones.

About 3 weeks ago I received an email from Nicole at Skills Matter and was delighted to read that one of my talks had been selected for inclusion in the 2017 programme. I’ll be presenting a talk entitled, Docker for .NET Developers. In this talk I’ll be sharing with the audience a journey that our team at Madgex have been on; using Docker as we develop a new product. This has been our first experience with Docker and along the way we’ve learned a lot. We are now using Docker not only to simplify the developer workflow, but also for our build and deployment process. We are leveraging Amazon EC2 Container Services to host our production system which is built using .NET Core based microservices.

I’m really excited about the content I’m building for this talk. I’ll be sharing the things we’ve learnt about Docker in our experience so far. I’ll take a look at what Docker is, what problems it has helped us solve and how we implemented it in our developer and deployment workflows. Along the way we’ll take a look at some demos of what this looks like in code, exploring dockerfiles, docker-compose and our build and deployment process. I hope this talk will be useful to developers looking to begin using Docker in their own projects. We’ll start with the basics but by the end of the session I hope to show an example of how we do a build and live deploy, with zero downtime into AWS. It’s a very exciting time to be a .NET developer and the release of the cross-platform .NET Core framework has opened the door to this exciting new application platform. I’m excited to see where the new framework takes us in the next 12 months as we see the release of .NET Core 2.0. Architectures and strategies around microservices and serverless are also creating some interesting new ways for developers to think about building new applications.

Over the years I’ve learned a lot from the .NET community, reading blog posts, listening to podcasts and watching videos. I have a passion for learning new things and investigating new techniques. Watching talks from many great speakers has influenced my personal growth, shown me new ideas and helped me develop my technical skills. I’m looking forward to the chance to begin participating more directly by talking about some of the things we’ve learned about Docker and Amazon ECS. I will be giving it my all to present a fast paced, information packed talk about Docker and hopefully help other developers to begin using it. I’m extremely passionate about what I do and the chance to share that with an audience of peers at this event is very exciting.

I’m extremely honoured and privileged to be joining such an amazing line-up of world class experts at this event. Many of the speakers are people whom I follow and who share great content in their areas of expertise. It’ll be exciting to attend some of their sessions and learn more about all of the other great topics being discussed at this event.

I look forward to the opportunity to meet some fellow developers at the conference. If you’re free on September 13th – 15th then I do recommend that you take a look at the website and buy yourself a ticket. I hope to see some of you there, please do come and say hi!


Docker for .NET Developers (Part 6) Using Docker for Build and Continuous Deployment

In part 3 I discussed one of the first motivations which led our team to begin using Docker. That motivation was focused on making the workflow for our front end developers quicker and simpler. In this post I want to explore a second motivation which led us to fully embrace Docker on our project. Just like part 3, this post doesn’t have any code samples; sorry! Instead I want to share the thought process and concepts from the next phase of our journey without getting too technical. I believe this will give the next parts in this series better context.

Why deploy with Docker?

Once we had Docker in place and we were reaping the benefits locally, we started to think about the options we might have to use Docker further along our development lifecycle, specifically for build and deployment.

Our current job board platform follows continuous delivery. Every time a developer checks in, the new code is picked up by the build system and a build is triggered. After a few minutes the code will have been built and all tests run. This helps to validate that the changes have not broken any existing functionality.

Deployments are then managed via Octopus Deploy, which takes the built code and deploys it onto the various environments we have. Code is deployed onto staging, and within that environment our developers have a chance to do some final checking that the new functionality is working as expected. Our testing team have the opportunity to run regression testing against the site to validate that no functionality has been broken. Once the testing is complete, the code is triggered for deployment onto our production environment. This is a manual, gated step which prevents code releasing without one or more developers validating it first.

That flow looks like this:

Existing Continuous Delivery Flow

With our new project we agreed that ideally we wanted to get to a continuous deployment flow, where code is checked in, tested and deployed straight to live. That sounds risky, I know, and it was something we weighed up carefully. A requirement of this approach is that we can fail fast and rapidly deploy a fix, or even switch back to a prior version should the situation require it (we can get a fix to live in about five minutes). By building smaller, discrete microservices we knew we would be reducing the complexity of each part of the system and could more easily test them. We are still working out some additional checks and controls that we expect to implement to further help prevent errors slipping out to live.

At the moment this involves many unit tests and some integration tests within the solutions using the TestHost and TestServer which are part of ASP.NET Core. I’m starting to think about how we could leverage Docker in our build pipeline to layer in additional integration testing across a larger part of the system. In principle we could spin up a part of the system automatically and then trigger endpoints to validate that we get the expected response. This goes a step further than the current testing as it tests a set of components working together, rather than in isolation.

One of the advantages that Docker provides is simplified and consistent deployments. With Docker, you create your images and these then become your unit of deployment. You can push your images to a container registry and then deploy an image to any Docker host you want. Because your image contains your application and all of its dependencies, you can be confident that once deployed, your application will behave in the same way as it did locally.

Also, by using Docker to run your applications and services, you no longer need to maintain dependencies on your production hosts. As long as the hosts are running Docker, there are no other dependencies to install. This also avoids conflicts arising between applications running on the same host system. In a traditional architecture, if a team wants to deploy another application on the same host, one requiring a newer version of a shared dependency, you may not be able to upgrade the host without introducing risk to the existing application.

Using Docker for Builds

In prior posts we’ve seen that we can use the aspnetcore-build image from Microsoft to perform builds of our source code into a final DLL. This opens the door to standardise the build process as well. We now use this flow for our builds, with our Jenkins build server being used purely to trigger the builds inside Docker. This brings similar benefits as I described for the production hosts. The build server does not need to have the ASP.NET Core SDK installed and maintained. Instead, we just need Docker and then can use appropriate build images to start our builds on top of all of the required dependencies. Using this approach we can benefit from reliable repeatability. We don’t have to worry about an upgrade on the build server changing how a build behaves. We can build applications that are targeting different ASP.NET Core versions by basing them on a build image that contains the correct SDK version.

Some may question the difference between Docker and Octopus, or Docker and Jenkins. They all have overlapping concerns, but Docker allows us to combine the build process and deployment process using a single technology. In our system, Jenkins triggers builds inside Docker images and we then ship the built image up to a private container registry (we use Amazon ECR, which I’ll look at soon).

Octopus is a deployment tool, it expects to take built components and then handles shipping them and any required configuration onto deployment targets. With Docker, we ship the complete application, including dependencies and configuration inside the immutable Docker image. These images can be pulled and re-used on any host as required.

Why Jenkins?

In our case there was no particular driver to use Jenkins. We already had access to Jenkins running on a Linux VM within our internal network and saw no reason to try out a new build server. We asked our systems team to install Docker and we then had everything we needed to use this box to trigger builds. In future posts I’ll demonstrate our build scripts and process. I’m sure that most of the steps will translate to many other common build systems.

Hosting with AWS

A final decision that we had to make was around how we would host Docker in production. At the time our project began we were already completing a migration of all of our services into AWS. As a result, it was clear that our final solution would be AWS based. We had a look at the options and found that AWS offered a container service which is called Amazon ECS.

The options for orchestrating Docker are a little daunting, and at this stage I haven’t personally explored alternative solutions such as DC/OS or Kubernetes. Like Amazon ECS, they are container orchestration services that schedule containers to run and maintain the required state of the system. They include things like container discovery to allow us to address and access the services we need. Amazon ECS is a managed service that abstracts away some of the complexities of setting these systems up and managing them. However, this abstraction comes at the cost of some flexibility.

With AWS ECS we can define tasks to represent the components of our system and then create services which maintain a desired count of containers running those tasks. Our production system is now running on ECS and the various components are able to scale in response to triggers such as queue length, CPU load and request volumes. In future posts I’ll dive into the details of how we’ve set up ECS. We have now created a zero downtime deployment process, taking advantage of the features of ECS to start new versions of containers and switch the load over when they are ready to handle requests.

Our current Docker based deployment flow looks like this:

Flow of a docker build

Developers commit into git locally and push to our internally hosted git server. Jenkins picks up on changes to the repository using the GitHub hook and triggers a build. We use a script to define the steps Jenkins will follow; the resulting output is a Docker image. Jenkins pushes this image up to our private registry running in AWS on the EC2 Container Registry (ECR). Finally, Jenkins triggers an update on the Amazon container service to start new container instances. Once those instances have successfully started and are passing the Application Load Balancer health checks, connections to the prior version of the containers are drained and those containers are stopped and removed. We’ll explore the individual elements of this flow in greater depth in later blog posts.

Summary

In this post we have looked at a secondary motivation for using Docker in our latest project. We explored at a high level the deployment flow and looked at some of the practical advantages we can realise by using Docker images through the entire development and deployment lifecycle. We are still refining our approaches as we learn more, but we found it fairly simple to get up to speed using Jenkins as a build system via Docker. In the next set of posts I’ll dive into how we’ve set up that build process, looking at the scripts we use and the optimised images we generate to help improve the start-up time of containers and reduce the size of the images.

Part 1 – Docker for .NET Developers Introduction
Part 2 – Working with Docker files
Part 3 – Why we started using Docker with ASP.NET Core
Part 4 – Working with docker-compose and multiple ASP.NET Core Microservices
Part 5 – Exploring ASP.NET Runtime Docker Images
Part 6 – This post
Part 7 – Setting up Amazon EC2 Container Registry


Docker for .NET Developers (Part 5) Exploring ASP.NET Runtime Docker Images

So far in previous posts we’ve been looking at basic demo dockerfiles which use the aspnetcore-build base image. This is okay for testing but does present some issues for actual deployment.

One disadvantage of the build image is its size. Since it contains all of the elements needed to build .NET Core applications it is fairly bloated and not something we would want to be using as a unit of deployment. It contains things like the full .NET Core SDK (which itself includes MSBuild), Node.js, Grunt, Gulp and a package cache for the pre-restored .NET packages. In all, this accounts for an image of around 1.2GB in size. You have to consider the network traffic that pushing around such large Docker images will introduce. If you use an external container registry (we’ll talk about those in a later post) such as Docker Hub, you will have to ship up the full size of the large SDK based image each time something changes.

Dissecting the aspnetcore-build Image

While it’s not really necessary to know the intricate details of the composition of the aspnetcore-build image, I thought it would be interesting to look a little at how it’s put together. As I’ve described previously, Docker images are layered. Each layer generally adds one thing or a set of related things into an image. Layers are immutable but you can base off of the previous layers and add in your layer on top. This is how you get your application into an image.

The ASP.NET Core build image is built up from a number of layers.

Layer 1

Starting from the bottom there is an official Docker image called scratch which is an empty base image.

Layer 2

The next layer is the Debian Linux OS. The .NET Core images are based on Debian 8 which is also known as Jessie. The image is named debian and also tagged with jessie. You can find the source files here.

Its dockerfile is pretty basic.

It starts with the scratch base image and then uses the ADD statement to bring in the tarball containing the Debian root file system. One important thing to highlight here is the use of ADD and not COPY. Previously in my samples we used COPY in our dockerfiles to copy contents from a source directory into a destination directory inside the image. ADD is similar, but it does one important extra thing: it will decompress known tar archives. Since rootfs.tar.xz is a known tar type, its contents are uncompressed into the specified directory, extracting the core Debian file system. I downloaded this file and it’s 117MB in size.

The final CMD ["bash"] line provides a default command that will run when the container first executes. In this case it runs the bash command. CMD is different from RUN in that it does not execute at build time, only at runtime.
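Putting the pieces described above together, the debian base dockerfile amounts to just three instructions:

```dockerfile
# Empty base image
FROM scratch
# ADD (unlike COPY) extracts the compressed root file system into /
ADD rootfs.tar.xz /
# Default command to run when the container starts
CMD ["bash"]
```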

Layer 3

The next layer is buildpack-deps:jessie-curl – Source files are here.

On top of the base image this RUNs three commands. You’ll notice each command is joined with an &&. Each RUN line in a dockerfile will result in a new intermediate image during build. To combat this in cases where we are doing related work, the commands can be strung together under a single RUN statement. This particular set of commands is a pretty common pattern and uses apt-get, a command line tool for working with application packages in Debian.

The structure it follows is listed in the Docker best practices as a way to ensure the latest packages are retrieved. apt-get update simply updates the package lists for new packages and available upgrades to existing packages. This technique is known as “cache busting”. It then installs three packages using apt-get install.

I had to Google a bit, but ca-certificates installs common certificate authority certificates, based on those that ship with Mozilla, which allow SSL applications to verify the authenticity of SSL connections. It then installs curl, a command line tool for transferring data via the URL syntax. Finally, wget is a network utility used to retrieve files from the web using HTTP(S) and FTP.

The backslashes are another common convention in production dockerfiles. The backslash is a line continuation character that allows a single command to be split over multiple lines. It’s used to improve readability, and the pattern here puts each new package onto its own line so it’s easier to pick out the individual packages that will end up being installed. The apt-get command allows multiple packages to be specified with a space between them.

The final command removes anything in the /var/lib/apt/lists/ directory. This is where the updated package lists pulled down by apt-get update are stored. This is another good example of best practice: ensuring that no files remain in the image that are not needed at runtime helps keep the image size down.

Layer 4

The next layer is buildpack-deps:jessie-scm – Source files are also found here.
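Its dockerfile follows the same shape (a sketch; the authoritative package list is in the linked source):

```dockerfile
FROM buildpack-deps:jessie-curl

RUN apt-get update && apt-get install -y --no-install-recommends \
		bzr \
		git \
		mercurial \
		openssh-client \
		subversion \
		procps \
	&& rm -rf /var/lib/apt/lists/*
```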

This layer uses a similar pattern to the layer before it to install some packages via apt-get. Most of these are packages for the common distributed version control applications, such as git. openssh-client installs a secure shell (SSH) client for secure access to remote machines, and the procps package provides utilities such as ps and top for inspecting the /proc file system.

Layer 5

The next layer is the microsoft/dotnet layer which will include the .NET Core SDK bits. The exact image will depend on which tag you choose since there are many tagged versions for the different SDK versions. They only really differ in that they install the correct version for your requirements. I’ll look at the 1.1.2-sdk tagged image. You can find the source here.

This dockerfile has a few comments and those explain the high level steps. I won’t dive into everything as it’s reasonably clear what’s happening. First the .NET CLI dependencies are installed via apt-get. Again this uses the pattern we’ve seen earlier.

Next the .NET Core SDK is downloaded as a tar.gz archive. This is extracted and the tar file then removed. Finally it uses the Linux ln command to create a soft link from /usr/bin/dotnet to /usr/share/dotnet/dotnet, so the dotnet command is available on the path.

The final section populates the local NuGet package cache by creating a new dotnet project and then removing it and any scratch files which aren’t needed.
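The key steps can be sketched as follows. This is not the verbatim dockerfile: the real file pins an exact version and a full download URL, which I’ve replaced with the placeholder $DOTNET_SDK_DOWNLOAD_URL, and I’ve omitted the apt-get step that installs the native CLI dependencies:

```dockerfile
FROM buildpack-deps:jessie-scm

# Download the SDK, extract it, remove the archive,
# then link the dotnet binary onto the path
RUN curl -SL "$DOTNET_SDK_DOWNLOAD_URL" --output dotnet.tar.gz \
	&& mkdir -p /usr/share/dotnet \
	&& tar -zxf dotnet.tar.gz -C /usr/share/dotnet \
	&& rm dotnet.tar.gz \
	&& ln -s /usr/share/dotnet/dotnet /usr/bin/dotnet

# Warm up the local NuGet package cache with a throwaway project
RUN mkdir warmup \
	&& cd warmup \
	&& dotnet new \
	&& cd .. \
	&& rm -rf warmup /tmp/NuGetScratch
```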

Layer 6

The final layer before you start adding your own application is the microsoft/aspnetcore-build image layer. Again, there are variations based on the different SDK versions. The latest 1.1.2 image source can be found here.

First it sets some default environment variables. You’ll see, for example, that it sets ENV ASPNETCORE_URLS http://+:80, so unless you override the values using the WebHostBuilder.UseUrls extension method, your ASP.NET Core application will listen on port 80 inside the container.
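For example, if you wanted your application to listen on a different port inside the container, you could override the variable in your own dockerfile (port 5000 here is just an illustration):

```dockerfile
FROM microsoft/aspnetcore-build:1.1.2

# Override the default URL binding inherited from the base image
ENV ASPNETCORE_URLS http://+:5000
```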

The next two steps do some funky node setup which I won’t dive into.

Next it warms up the NuGet package cache. This time it uses a packagescache.csproj which, if you take a look, simply includes package references to all of the main ASP.NET Core related packages. It then calls dotnet restore, which downloads the packages into the package cache. It cleans up the warmup folder after this.

Finally it sets the working directory to the root path so that the image is clean to start building on in the next layer which will include your application.

Runtime Images

Given the size of the build images, and the fact that there’s no need to include the files used to build your application when you deploy it, it is much better practice to reduce the contents of your final image to make it as small as possible. It’s also important to optimise it for rapid start-up times. That’s exactly what the aspnetcore image is for. This image contains only the minimal .NET Core runtime and so results in a much smaller base image size of 316MB. It’s about one quarter of the size of the build image! Because it doesn’t include the SDK, it cannot run commands such as dotnet build and dotnet restore. It can only bootstrap compiled .NET Core assemblies.

Dissecting the microsoft/aspnetcore image

As we did with the build image, we’ll take a look at the differences in the runtime image.

Layers 1 and 2

The first two layers that make up the final aspnetcore image are the same as with the build image. After the base debian:jessie layer, though, things differ.

Layer 3

This layer is named microsoft/dotnet and is tagged for the different runtime versions. I’ll look at the 1.1-runtime-deps tagged image which can be found here.

The dockerfile for this layer is quite simple.
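It looks roughly like this (a sketch; the authoritative dependency list is in the linked source and differs between versions):

```dockerfile
FROM debian:jessie

# Certificate authorities plus the native libraries
# the .NET Core runtime depends on
RUN apt-get update \
	&& apt-get install -y --no-install-recommends \
		ca-certificates \
		libc6 \
		libgcc1 \
		libgssapi-krb5-2 \
		libicu52 \
		liblttng-ust0 \
		libssl1.0.0 \
		libstdc++6 \
		libunwind8 \
		libuuid1 \
		zlib1g \
	&& rm -rf /var/lib/apt/lists/*
```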

This installs just the certificate authorities, since we no longer get those from the buildpack-deps:jessie-curl image, which is not part of this image’s ancestry. It then also installs the common .NET Core dependencies.

Layer 4

This layer is named microsoft/dotnet and tagged 1.1.2-runtime which can be found here.

This image installs curl and then uses that to download the dotnet runtime binaries. These are extracted and the tar file removed.

Layer 5

The final layer before your application files, this layer is named microsoft/aspnetcore and tagged with 1.1.2 for the latest 1.1.x version. It can be found here.

Starting with the dotnet runtime image, this sets the URL environment variable and populates the NuGet package cache as we saw with the build image. As explained in the documentation, it also includes a set of native images for all of the ASP.NET Core libraries. These are intended to speed up the start-up of the container since, being native images, they don’t need to be JIT-compiled.

Using the Runtime Image

The intended workflow for .NET Core based ASP.NET Docker images is to create a final image that contains your pre-built files, and specifically only the files explicitly required by the application at runtime. This will generally be the dlls for your application and any dependencies.

There are a couple of strategies to achieve these smaller images. For this blog post I’m going to concentrate on a manual process we can follow locally to create a runtime-only image with our built application. It’s very likely that this is not how you’ll end up producing these images for real projects and real deployment scenarios, but I think it’s useful to see this approach first. In later blog posts we’ll expand on this and explore a couple of strategies to use Docker containers to build our code.

I’ve included a simple demo application that you can use to follow along with this post. It contains a single ASP.NET Core API project and includes a dockerfile which will define an image based on the lightweight aspnetcore image. If you want to follow along you can get the code from GitHub. Let’s look at the contents of the dockerfile.
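The dockerfile itself is only four instructions; based on the build steps we’ll see in the output shortly, it looks like this (check the GitHub sample for the exact file):

```dockerfile
FROM microsoft/aspnetcore:1.1
WORKDIR /app
COPY ./publish .
ENTRYPOINT ["dotnet", "DockerDotNetDevsSample3.dll"]
```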

Much of this looks very similar to the dockerfiles we’ve looked at in my previous posts, but with some key differences. The main one is that this dockerfile defines an image based on the aspnetcore image and not the larger aspnetcore-build image.

You’ll then notice that this dockerfile expects to copy in files from a publish folder. In order for this file to work, we will first need to publish our application to that location. To publish the solution I’m going to use the command line to run dotnet restore and then use the following command:

dotnet publish -c Release -o ../../publish

The output from running this command looks like this:

Microsoft (R) Build Engine version 15.3.117.23532
Copyright (C) Microsoft Corporation. All rights reserved.

DockerDotNetDevsSample3 -> E:\Software Development\Projects\DockerDotNetDevsSample3\src\DockerDotNetDevsSample3\bin\Release\netcoreapp1.1\DockerDotNetDevsSample3.dll
DockerDotNetDevsSample3 -> E:\Software Development\Projects\DockerDotNetDevsSample3\publish\

This command uses the .NET SDK to trigger a build of the application and then publishes the required files into the publish folder. In my case this produces the publish output and copies it to a folder named publish in the root of my solution, the same location as my dockerfile. I do this by passing in the path for the published output using -o. We also set it to publish in release mode using the -c switch to set the configuration. We’ll pretend we’d use this image for a deployment somewhere to production, so release makes sense.

Now that we have some files in our publish folder, the main one being the dll for our application, we can use those files inside our image.

Back to the dockerfile: after copying all of the published files into the container, you’ll notice that we no longer need to run the dotnet restore and dotnet build commands. In fact, trying to do so would fail since the base image does not include the SDK, so these commands would not be recognised. We already have our restored and built files, which we copied into the image.

The final difference you will see is that the entrypoint for this image is a bit different. In our earlier examples we used dotnet run in the working directory containing our csproj file. Again, this relied on the SDK, which we don’t have. This dockerfile instead runs dotnet directly against the DockerDotNetDevsSample3.dll. The dotnet host will bootstrap the runtime and fire into the main method of our application.

Let’s build an image from this dockerfile and take a look at what happens.

docker build -t dockerdemo3 .

The output looks like this:

Sending build context to Docker daemon 33.14 MB
Step 1/4 : FROM microsoft/aspnetcore:1.1
---> 3b1cb606ea82
Step 2/4 : WORKDIR /app
---> c20f4b67da95
Removing intermediate container 2a2cf55d8c10
Step 3/4 : COPY ./publish .
---> 23f83ca25308
Removing intermediate container cdf2a0a1c6c6
Step 4/4 : ENTRYPOINT dotnet DockerDotNetDevsSample3.dll
---> Running in 1783718c0ea2
---> 989d5b6eae63
Removing intermediate container 1783718c0ea2
Successfully built 989d5b6eae63
SECURITY WARNING: You are building a Docker image from Windows against a non-Windows Docker host. All files and directories added to build context will have '-rwxr-xr-x' permissions. It is recommended to double check and reset permissions for sensitive files and directories.

As you can see, the docker build is very quick this time since it is not running the restore and .NET build steps. It’s grabbing the pre-built files and setting up the entry point.

I can now run this image using

docker run -d -p 8080:80 dockerdemo3

Navigating to http://localhost:8080/api/values we should see the data from the API.

Summary

In this post we’ve looked in some detail at the layers that make up the aspnetcore-build image and compared them to the layers in the aspnetcore image, which includes just the .NET Core runtime. We’ve then seen how we can generate a small runtime-based image for our own application, which will be much smaller and therefore better to store in a remote registry and quicker to pull down. In future posts we’ll look at other methods that allow us to use Docker for the build and publish steps within a build system, as well as at some other things we can do to ensure we minimise the file size of the layers of our image.

Other Posts In This Series

Part 1 – Docker for .NET Developers Introduction
Part 2 – Working with Docker files
Part 3 – Why we started using Docker with ASP.NET Core
Part 4 – Working with docker-compose and multiple ASP.NET Core Microservices
Part 5 – This post
Part 6 – Using Docker for Build and Continuous Deployment
Part 7 – Setting up Amazon EC2 Container Registry