
Docker for .NET Developers (Part 7): Setting up Amazon EC2 Container Registry

In the previous part of this series I discussed the reasons behind our decision to use Docker to run our product in production, building and deploying images onto Amazon ECS. The next parts of this series will focus on how we achieved that and demonstrate how you can set up a similar pipeline for your own systems. We'll look at how we run our builds inside Docker via Jenkins, and how we deploy those images into AWS.

I'll dive into AWS ECS in those upcoming posts, discussing what it is and what it does in more detail. For now I want to start with one related part of ECS: the Amazon EC2 Container Registry (ECR). I'm starting here because I'll soon be demonstrating our build process, and a key part of that is pushing our images up to a registry.

What is a registry?

A container registry is ultimately just a store for images. For .NET developers, it's a similar concept to NuGet. In the same way NuGet hosts packages, a registry hosts Docker images. As with NuGet, you can use a private registry, share your images publicly via a public registry, or even host a local registry on your machine (inside Docker, of course). Images inside a registry can be tagged, so you can support versioning for your images. Microsoft use this technique for their aspnetcore images: while they share the same name, there are variants tagged with different versions to indicate the version of the SDK or runtime they include.
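
As a quick illustration of tags, pulling a specific variant of an image is just a matter of appending the tag; the tag values below are examples and may not match what is currently published:

# pull the default (latest) tag of the aspnetcore runtime image
docker pull microsoft/aspnetcore

# pull a specific tagged version instead
docker pull microsoft/aspnetcore:1.1

# list the images, and their tags, now stored locally
docker images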

The main reason we need a registry is to allow other people or systems to access and pull our images; they'd not be much use otherwise! For our system we started with a private registry running on Amazon ECR. We chose ECR primarily because we would also be using AWS ECS to host our production system. ECR is a managed container registry which allows you to store, manage and deploy Docker images.

Creating an EC2 Container Registry via the Console

In this post I want to share the steps I used to set up a container repository in ECR. It's quite a straightforward process and takes only a few minutes. If you want to follow along you can sign up for AWS and try out the steps. Currently ECR is free for the first 500MB of image storage, so there should be no cost in following along.

AWS ECR is a subset of the main ECS service, so it appears as Repositories on the ECS menu. A Docker registry is the service that stores images; a repository is a collection of Docker images sharing the same name. If you've not used the service before you can click Get Started to create a new repository.

[Screenshot: AWS ECR Getting Started 1]

You are guided through the process by a setup wizard. The first thing you must provide is a name for the repository to help identify it. You can optionally include a namespace before the repository name, which gives you more control over the naming. For example, you may choose to have a dev/dockerdemo repository and a prod/dockerdemo repository.

AWS will generate a unique URI for the repository, which is used when tagging images you want to push to it and when pulling images from it. As per the screenshot, permissions will be set up for you as the owner. You can grant more granular access to the repository later on. For example, your build server will need to use an account with push permissions, while developers may only need pull permissions.

[Screenshot: AWS ECR Getting Started 2]

After clicking Next Step your repository will be created. On the next screen you are shown sample commands you can use to push images into the repository. I won't go into those for now since we'll look at them properly when we explore our Jenkins build script.

[Screenshot: AWS ECR Getting Started 3]

Continuing on, you are taken into the repository, which will currently be empty. Once you start pushing images to this repository you will see details of those images appear.

[Screenshot: AWS ECR Getting Started 4]

At this point you have an empty repository, ready to store any Docker images you push to it. We'll look at that process in future posts.
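
If you want to try the repository out straight away, the typical flow looks something like the sketch below. The repository URI and image name are placeholders, and the exact login command depends on your AWS CLI and Docker versions; we'll walk through the real commands when we get to the Jenkins build script.

# authenticate the local Docker CLI against ECR
# (get-login prints a docker login command, which the $() wrapper then executes)
$(aws ecr get-login --no-include-email --region eu-west-2)

# tag a locally built image with the repository URI shown in the console
docker tag dockerdemo:latest <aws-account-id>.dkr.ecr.eu-west-2.amazonaws.com/dockerdemo:latest

# push the tagged image up to the ECR repository
docker push <aws-account-id>.dkr.ecr.eu-west-2.amazonaws.com/dockerdemo:latest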

Using the CLI

While you can use the console to add repositories, you can also use the AWS CLI. You will first need to install the AWS CLI and configure it, providing a suitable access key and secret. Those will need to belong to a user with suitable permissions in AWS to create repositories. For my example I have a user assigned the AmazonEC2ContainerRegistryFullAccess managed policy.
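
If you've not configured the CLI before, a minimal configuration looks something like this; the values shown are placeholders for your own credentials and preferred region:

aws configure
# AWS Access Key ID [None]: <your-access-key-id>
# AWS Secret Access Key [None]: <your-secret-access-key>
# Default region name [None]: eu-west-2
# Default output format [None]: json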
We can now run the following AWS command to create a new repository called “test-from-cli”:

aws ecr create-repository --repository-name test-from-cli

If this succeeds you should see output similar to the following:

{
    "repository": {
        "registryId": "865288682694",
        "repositoryName": "test-from-cli",
        "repositoryArn": "arn:aws:ecr:eu-west-2:865288682694:repository/test-from-cli",
        "createdAt": 1499798501.0,
        "repositoryUri": "865288682694.dkr.ecr.eu-west-2.amazonaws.com/test-from-cli"
    }
}

Summary

In this post we've explored how we can use the AWS console and CLI to prepare a repository inside the Amazon EC2 Container Registry. This will enable us to push up our images, which we will later use within the container service to start up our services.

Part 1 – Docker for .NET Developers Introduction
Part 2 – Working with Docker files
Part 3 – Why we started using Docker with ASP.NET Core
Part 4 – Working with docker-compose and multiple ASP.NET Core Microservices
Part 5 – Exploring ASP.NET Runtime Docker Images
Part 6 – Using Docker for Build and Continuous Deployment
Part 7 – This post


Docker for .NET Developers (Part 6): Using Docker for Build and Continuous Deployment

In part 3 I discussed one of the first motivations which led our team to begin using Docker. That motivation was focused on making the workflow for our front end developers quicker and simpler. In this post I want to explore a second motivation, which led us to fully embrace Docker on our project. Just like part 3, this post doesn't have any code samples; sorry! Instead I want to share the thought process and concepts from the next phase of our journey without getting too technical. I believe this will give the next parts in this series better context.

Why deploy with Docker?

Once we had Docker in place and were reaping the benefits locally, we started to think about how we might use Docker further along our development lifecycle, specifically for build and deployment.

Our current job board platform follows continuous delivery. Every time a developer checks in, the new code is picked up by the build system and a build is triggered. After a few minutes the code will have been built and all tests run. This helps to validate that the changes have not broken any existing functionality.

Deployments are then managed via Octopus Deploy, which takes the built code and deploys it onto the various environments we have. Code is deployed onto staging, where our developers have a chance to do some final checks that the new functionality is working as expected. Our testing team also have the opportunity to run regression testing against the site to validate that no existing functionality has been broken. Once testing is complete, the code is promoted for deployment onto our production environment. This is a manual, gated step which prevents code being released without one or more developers validating it first.

That flow looks like this:

[Diagram: Existing continuous delivery flow]

With our new project we agreed that ideally we wanted to get to a continuous deployment flow, where code is checked in, tested and deployed straight to live. That sounds risky, I know, and it was something we weighed up carefully. A requirement of this approach is that we can fail fast and rapidly deploy a fix, or even switch back to a prior version should the situation require it (we can get a fix to live in around five minutes). By building smaller, discrete microservices we knew we would reduce the complexity of each part of the system and could test each part more easily. We are still working out some additional checks and controls that we expect to implement to further help prevent errors slipping out to live.

At the moment this involves many unit tests and some integration tests within the solutions, using the TestHost and TestServer which are part of ASP.NET Core. I'm starting to think about how we could leverage Docker in our build pipeline to layer in additional integration testing across a larger part of the system. In principle we could spin up part of the system automatically and then hit its endpoints to validate that we get the expected responses. This goes a step further than the current testing as it exercises a set of components working together, rather than in isolation.

One of the advantages that Docker provides is simplified and consistent deployments. With Docker, you create your images and these then become your unit of deployment. You can push your images to a container registry and then deploy an image to any Docker host you want. Because your image contains your application and all of its dependencies, you can be confident that once deployed, your application will behave in the same way as it did locally.

Also, by using Docker to run your applications and services, you no longer need to maintain dependencies on your production hosts. As long as the hosts are running Docker, there are no other dependencies to install. This also avoids conflicts arising between applications running on the same host system. In a traditional architecture, if a team wants to deploy a new application onto a host and it requires a newer version of a shared dependency, you may not be able to upgrade that host without introducing risk to the existing applications.

Using Docker for Builds

In prior posts we've seen that we can use the aspnetcore-build image from Microsoft to build our source code into a final DLL. This opens the door to standardising the build process as well. We now use this flow for our builds, with our Jenkins build server used purely to trigger the builds inside Docker. This brings similar benefits to those I described for the production hosts. The build server does not need to have the ASP.NET Core SDK installed and maintained. Instead, we just need Docker, and we can then use appropriate build images to run our builds on top of all of the required dependencies. Using this approach we benefit from reliable repeatability: we don't have to worry about an upgrade on the build server changing how a build behaves, and we can build applications targeting different ASP.NET Core versions by basing them on a build image that contains the correct SDK version.
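
As a rough sketch of what a containerised build can look like (the image tag, mount paths and dotnet arguments here are illustrative rather than our actual build script), you can mount your source into a build image and run the usual dotnet commands inside it:

# run restore and publish inside the aspnetcore-build image so the SDK never needs
# to be installed on the build server itself
docker run --rm \
  -v "$(pwd)":/src \
  -w /src \
  microsoft/aspnetcore-build:1.1 \
  bash -c "dotnet restore && dotnet publish -c Release -o /src/publish"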

Some may ask what the difference is between Docker and Octopus, or Docker and Jenkins. They all have overlapping concerns, but Docker allows us to combine the build process and the deployment process using a single technology. In our system, Jenkins triggers builds inside Docker and we then ship the built image up to a private container registry (we use Amazon ECR, which I'll look at soon).

Octopus is a deployment tool; it takes built components and handles shipping them, along with any required configuration, onto deployment targets. With Docker, we ship the complete application, including its dependencies and configuration, inside an immutable Docker image. These images can be pulled and re-used on any host as required.

Why Jenkins?

In our case there was no particular driver to use Jenkins. We already had access to Jenkins running on a Linux VM within our internal network and saw no reason to try out a new build server. We asked our systems team to install Docker and we then had everything we needed to use this box to trigger builds. In future posts I’ll demonstrate our build scripts and process. I’m sure that most of the steps will translate to many other common build systems.

Hosting with AWS

A final decision that we had to make was around how we would host Docker in production. At the time our project began, we were already completing a migration of all of our services into AWS, so it was clear that our final solution would be AWS based. We had a look at the options and found that AWS offered a container service called Amazon ECS.

The options for orchestrating Docker are a little daunting, and at this stage I haven't personally explored alternative solutions such as DC/OS or Kubernetes. Like Amazon ECS, they are container orchestration services that schedule containers to run and maintain the required state of the system. They include things like container discovery to allow us to address and access the services we need. Amazon ECS is a managed service that abstracts away some of the complexity of setting these systems up and managing them. However, this abstraction comes at the cost of some flexibility.

With AWS ECS we can define tasks to represent the components of our system and then create services which maintain a desired count of containers running those tasks. Our production system is now running on ECS, and the various components are able to scale in response to triggers such as queue length, CPU load and request volume. In future posts I'll dive into the details of how we've set up ECS. We have now created a zero downtime deployment process, taking advantage of the features of ECS to start new versions of containers and switch the load over when they are ready to handle requests.

Our current Docker based deployment flow looks like this:

[Diagram: Flow of a Docker build]

Developers commit into git locally and push to our internally hosted git server. Jenkins picks up changes to the repository via the GitHub hook and triggers a build. We use a script to define the steps Jenkins will run; the resulting output is a Docker image. Jenkins pushes this image up to our private registry, which is running in AWS on the EC2 Container Registry (ECR). Finally, Jenkins triggers an update on the Amazon container service to start new container instances. Once those instances have started successfully and are passing the Application Load Balancer health checks, connections to the prior version of the containers are drained and those containers are stopped and removed. We'll explore the individual elements of this flow in greater depth in later blog posts.
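
To give a flavour of that final trigger, the update from Jenkins can be as simple as an AWS CLI call pointing the ECS service at a newly registered task definition revision. The cluster, service and task definition names below are placeholders rather than our real configuration:

# point the ECS service at a new task definition revision; ECS starts the new containers
# and, once they pass the load balancer health checks, drains and stops the old ones
aws ecs update-service \
  --cluster production-cluster \
  --service jobboard-api \
  --task-definition jobboard-api:42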

Summary

In this post we have looked at a secondary motivation for using Docker in our latest project. We explored the deployment flow at a high level and looked at some of the practical advantages we can realise by using Docker images through the entire development and deployment lifecycle. We are still refining our approach as we learn more, but we found it fairly simple to get up to speed using Jenkins to drive builds via Docker. In the next set of posts I'll dive into how we've set up that build process, looking at the scripts we use and the optimised images we generate to help improve the start-up time of containers and reduce the size of the images.

Part 1 – Docker for .NET Developers Introduction
Part 2 – Working with Docker files
Part 3 – Why we started using Docker with ASP.NET Core
Part 4 – Working with docker-compose and multiple ASP.NET Core Microservices
Part 5 – Exploring ASP.NET Runtime Docker Images
Part 6 – This post
Part 7 – Setting up Amazon EC2 Container Registry


Docker for .NET Developers (Part 1): An Introduction to Docker for .NET Developers

Two words you are very likely used to hearing quite often within our community at the moment are “microservices” and “Docker”. Both are topics of great interest and are generating excitement among developers and architects. In this new series of blog posts I want to cover Docker: what it is, why it might be of interest, and specifically what it means for .NET developers. As a little background, my experience with Docker started last year when we began building a new big data analytics system. The requirement was to gather many millions of events from hundreds of individual source systems into Elasticsearch, then provide a way for users to flexibly report over that data in real time.

Without going deep into specifics, we developed a system of queues, input processors and multiple back end services which analyse the source data. These services work with the data, aggregating it and shaping it for custom reporting, and we provide various API services which are consumed by a front end SPA written in Vue.js. In essence these are microservices, since each one is small and self-contained, providing a specific set of related functionality.

We knew we wanted to use ASP.NET Core for the API elements and very soon after that decision, we realised we could also take advantage of the cross platform nature of .NET Core to support easier development for our front end team on their Mac devices.

Historically the front end developers have had to use a Windows VM to work with our projects written in .NET. They pull the latest platform code to their devices, build it and run it so that they can work on the UI elements. This process has some overhead and has been something that we have wanted to streamline for some time. With this fresh project we were able to think about and implement improvements in the process.

The solution that we came up with was to provide Docker images of the back end services that the front end developers could quickly spin up on their development environments. They can do this without the need to run a Windows VM and the result has been a great productivity gain.

As containers and Docker were working so well for us in development, we also decided to use them for our build and deployment process onto the live environment. We are using AWS ECS (EC2 Container Service) to run the containers in the cloud, providing a scalable solution to host the various components.

This is our first time working with Docker and it has been an interesting learning experience for everyone involved. We still have more to learn, and I'm sure there are more ways we can improve the process even further, but what we have now is already providing great gains for us.

What is Docker?

There are a lot of articles and videos available via a quick Google that discuss what Docker is. To try and distil the essence: Docker is a containerisation technology and application platform that lets us package and deploy an application or service as an isolated unit containing all of its dependencies. In oversimplified terms, it can be thought of as a very lightweight, self-contained virtual machine.

Docker containers run on top of a shared OS kernel, but in an isolated way. They are very lightweight, which is where they offer an advantage over traditional VMs. Because containers have a lighter footprint, containing only the minimum dependencies they require, you can often make better use of the host device(s) by running more containers and sharing the underlying resources more effectively.

A Docker image can be as small as a few hundred megabytes and can start in a matter of a few seconds or even fractions of a second. This makes them great for scaling since extra containers can be started very rapidly in response to scaling triggers such as a traffic increase or growing queue. With traditional VM scaling you might have a few minutes wait before the extra capacity comes online, by which time the load peak could have caused some issues already.

Key Concepts

As this is an introduction post I wanted to summarise some of the core components and terms that you will need to know when beginning to work with Docker.

Image

A Docker image can be considered a unit of deployment. Images are defined by dockerfiles and, once built, are immutable. To customise an image further you can use it as the base image within your next dockerfile. Typically, you store built images in a container registry, which then makes them available for people to reference and run.

Container

A container is just a running instance of a Docker image. You start a container from an image using the docker run command, and once started your host will have an instance of that image running.
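
For example, assuming an image name and port mapping chosen purely for illustration:

# start a container from a locally built image, mapping port 80 in the container to 8080 on the host
docker run -d -p 8080:80 --name dockerdemo dockerdemo:latest

# list the containers currently running on this host
docker ps

# stop and remove the container when you are finished with it
docker stop dockerdemo
docker rm dockerdemo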

Dockerfile

A dockerfile describes how a Docker image is built and how your application is layered into it. It's a plain text file and you may only require a few lines to get started with your own image. Docker images are built up in layers: you choose a base image that contains the elements you need, and then copy your own application in on top. Microsoft provide a number of images for working with .NET Core applications, and I'll look into the ones we use in future posts.

The nice thing about using a simple text file to describe the images is that it's easy to include the dockerfile in your repository under source control. We include several dockerfiles in our solutions to support slightly different requirements.
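
Building an image from a dockerfile is a single command; the tag and file names here are just examples:

# build an image from the Dockerfile in the current directory and tag it
docker build -t dockerdemo:latest .

# build from a differently named dockerfile, useful when a solution contains several
docker build -t dockerdemo:dev -f Dockerfile.dev .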

Docker Compose

A Docker Compose file is a basic way to orchestrate multiple images/containers. It uses the YAML format to specify one or more containers that make up a single system, or part of a system. Within this file you specify the images that need to be started, what they depend on, which ports they should expose on the host and so on. With a single command you can build all of the images, and with a second command you can tell Docker to run all of the containers.

We use a docker-compose file for our front end developers. It defines the suite of back end components that need to be running for them to exercise and develop against the API services. They can quickly spin them up with a docker-compose command to get started.
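
Assuming a docker-compose.yml sits in the root of the repository, the day-to-day commands are along these lines:

# build (or rebuild) all of the images defined in docker-compose.yml
docker-compose build

# start all of the defined containers in the background
docker-compose up -d

# tear everything down again when finished
docker-compose down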

Host

The host is the underlying OS on which you run Docker. Docker utilises the shared OS kernel of the host to run your containers. Until recently the host would always have been a Linux device, but Microsoft have now released Windows containers, so it's possible to use a Windows device as a host for Windows based images. In our case we still wanted to support running the containers on Mac devices, so we stayed with Linux based images. There are a couple of solutions that enable using Linux images on Windows, which I'll go into in more detail in the future; in both cases, you essentially run Linux in a VM which then acts as your host.

Summary

This was a short post, intended to introduce Docker and some of the reasons that we started to use it for our latest project. ASP.NET Core and .NET Core lend themselves perfectly to cross platform development, and the use of Linux based Docker containers made sharing the back end components with our front end team a breeze. In the next posts I'll go deeper into how we've structured our solutions and processes to enable the front end developers, as well as walking through building an example project.

Other Posts In This Series

Part 1 – This Post
Part 2 – Docker for .NET Developers Part 2 – Our First dockerfile
Part 3 – Why we started using Docker with ASP.NET Core
Part 4 – Working with docker-compose and multiple ASP.NET Core microservices
Part 5 – Exploring ASP.NET Runtime Docker Images
Part 6 – Using Docker for Build and Continuous Deployment
Part 7 – Setting up Amazon EC2 Container Registry