Implementing IHostedService in ASP.NET Core 2.0
Use IHostedService to run background tasks in ASP.NET Core apps

I’ve had a chance to play around with ASP.NET Core 2.0 preview 2 a little over the last few weeks. One of the things I was keen to try out and to understand a little better was the new IHostedService interface provided by Microsoft.Extensions.Hosting. David Fowler and Damian Edwards demonstrated an early example of how to implement this interface using preview 1 of ASP.NET Core 2.0 at NDC Oslo. At the time of that demo the methods were synchronous, but since then they have been made asynchronous.

Full disclosure: after taking a first pass at creating something using this interface, I ran the code past David Fowler and he kindly reviewed it. As I suspected, I was not using it correctly! Since then David has been kind enough to answer a few questions and even provided a sample of a base class that simplifies the creation of hosted services. After my failure to understand the expected implementation of the interface, and a general realisation that I needed to learn more about using Task cancellation with async/await, I almost decided to ditch my plan to write this blog post. However, I realised that this is still probably a good sample to share, since others may run into the same mistakes I did. After speaking with David I believe this is an appropriate use of the interface.

One of the challenges when starting out was trying to use something in preview that had no samples or documentation yet. While I hope no one will rely on this prior to the RTM of 2.0, when I expect full documentation will be made available, I’m sure people may benefit from taking a look at it sooner. David did say that they intend to provide a formal base class, much like the code he provided for me, which will make creating these background hosted services easier. However, that won’t make it into the 2.0 release. The code I include in this sample might act as a good starting point until then, although it’s not fully tested.

Remember: there are no docs for this interface currently, so I’m taking a best guess at how it can be used and how it works based on what I’ve explored and been able to learn from David. This feature is in preview and may also change before release (although that’s very unlikely as 2.0 is nearly baked now). If you’re reading this in the future (and unless you’re a time traveller, you must be) please keep in mind this may be outdated.

Hosted Services

The first question to answer is: what can we use this interface for? The basic idea is that it allows us to register background tasks that run while our web host is running. These are co-ordinated with the lifetime of the application. We register a Task when the application starts and have the opportunity to do some graceful clean-up when the application is shutting down. While we could previously spin off work on a background thread, it would be killed when the main application process shut down.

To create these background tasks we implement the new IHostedService interface.

The interface looks like this:
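
public interface IHostedService
{
    // Triggered when the application host is ready to start the service.
    Task StartAsync(CancellationToken cancellationToken);

    // Triggered when the application host is performing a graceful shutdown.
    Task StopAsync(CancellationToken cancellationToken);
}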

The idea is we register one or more implementations of this interface with the DI container, all of which will then be started and stopped along with the application by the HostedServiceExecutor. As users of this interface we are responsible for properly handling the cancellation and shutdown of our services when StopAsync is triggered by the host.

Creating a Hosted Service

One of the first possible use cases for these background tasks that I came up with was a scenario where we might want to update some content or data in our application from an external source, refreshing it periodically. Rather than doing that update in the main request thread, we can offload it to a background task.

In this simple example I provide an API endpoint which returns a random string value provided from an external service and updated every 5 seconds.

The first part of the code is a provider class which will hold the string value and which includes an update method that, when called, will get a new string from the external service.
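
A minimal sketch of that provider might look like this (the URL here is a placeholder standing in for the real external service):

public class RandomStringProvider
{
    private readonly HttpClient _httpClient = new HttpClient();

    // The latest value, read by the controller
    public string RandomString { get; private set; } = "Initial string";

    public async Task UpdateString(CancellationToken cancellationToken)
    {
        // Placeholder URL standing in for the real external service
        var response = await _httpClient.GetAsync("https://example.com/api/randomstring", cancellationToken);

        if (response.IsSuccessStatusCode)
        {
            RandomString = await response.Content.ReadAsStringAsync();
        }
    }
}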

I then use this provider from my controller to return the string.
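
The controller itself is trivial; something along these lines (the controller and route names are illustrative):

[Route("api/[controller]")]
public class StringController : Controller
{
    private readonly RandomStringProvider _randomStringProvider;

    public StringController(RandomStringProvider randomStringProvider)
    {
        _randomStringProvider = randomStringProvider;
    }

    [HttpGet]
    public IActionResult Get()
    {
        return Ok(_randomStringProvider.RandomString);
    }
}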

The main work for the setup of our service lives in an abstract base class called HostedService. This is the base class that David Fowler kindly put together. The code looks like this:
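
public abstract class HostedService : IHostedService
{
    // Example untested base class code kindly provided by David Fowler

    private Task _executingTask;
    private CancellationTokenSource _cts;

    public Task StartAsync(CancellationToken cancellationToken)
    {
        // Create a linked token source so that cancellation can be triggered
        // either by the host's token or by our own StopAsync
        _cts = CancellationTokenSource.CreateLinkedTokenSource(cancellationToken);

        // Store (and start) the task we're executing
        _executingTask = ExecuteAsync(_cts.Token);

        // If the task completed synchronously then return it, otherwise it's running
        return _executingTask.IsCompleted ? _executingTask : Task.CompletedTask;
    }

    public async Task StopAsync(CancellationToken cancellationToken)
    {
        // Stop called without Start
        if (_executingTask == null)
        {
            return;
        }

        // Signal cancellation to the executing method
        _cts.Cancel();

        // Wait until the executing task completes or the stop token triggers
        await Task.WhenAny(_executingTask, Task.Delay(-1, cancellationToken));

        // Throw if cancellation was triggered before the task completed
        cancellationToken.ThrowIfCancellationRequested();
    }

    // Derived classes should override this and execute a long running method
    // until cancellation is requested
    protected abstract Task ExecuteAsync(CancellationToken cancellationToken);
}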

The comments in the class describe the flow. When using this base class we simply need to implement the ExecuteAsync method.

When the StartAsync method is called by the HostedServiceExecutor a CancellationTokenSource is created and linked to the token which was passed into the StartAsync method. This CancellationTokenSource is stored in a private field.

ExecuteAsync, an abstract method, is then called, passing in the token from our CancellationTokenSource, and the returned Task is stored. StartAsync must then return to the caller. A check is made in case our ExecuteAsync method has already completed; if it has, that completed task is returned, otherwise we return Task.CompletedTask.

At this point we have a background task running whatever code we placed inside our implementation of ExecuteAsync. Our WebHost will go about its business of serving requests.

The other method defined by IHostedService is StopAsync. This method is called when the WebHost is shutting down. This is the key differentiator from running work on a traditional background thread. Since we have a proper hook into the shutdown of the host we can handle the proper shutdown of our workload on the background thread.

In the base class, first a check is made to ensure that StartAsync was previously called and that we actually have an executing task. Then we signal cancellation on our CancellationTokenSource. We await the completion of one of two things. The preferred option is that our Task, which should be adhering to the cancellation token we passed to it, completes. The fallback is that the Task.Delay(-1, cancellationToken) task completes. This takes in the cancellation token passed by the HostedServiceExecutor, which in turn is provided by the StopAsync method of the WebHost. By default this will be a token set with a 5 second timeout, although this timeout value can be configured when building our WebHost using the UseShutdownTimeout extension on the IWebHostBuilder. This means that our service is expected to cancel within 5 seconds, otherwise it will be more abruptly killed.
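
For example, a minimal sketch extending that timeout when building the WebHost:

public static IWebHost BuildWebHost(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .UseShutdownTimeout(TimeSpan.FromSeconds(10)) // allow up to 10 seconds for graceful shutdown
        .UseStartup<Startup>()
        .Build();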

To use this base class I then created a class inheriting from it called DataRefreshService. It is within this class that I implement the ExecuteAsync abstract method from HostedService. The ExecuteAsync method accepts a cancellation token and looks like this:
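
public class DataRefreshService : HostedService
{
    private readonly RandomStringProvider _randomStringProvider;

    public DataRefreshService(RandomStringProvider randomStringProvider)
    {
        _randomStringProvider = randomStringProvider;
    }

    protected override async Task ExecuteAsync(CancellationToken cancellationToken)
    {
        while (!cancellationToken.IsCancellationRequested)
        {
            // Refresh the string, passing the token down the chain so the
            // HTTP call can be cancelled too
            await _randomStringProvider.UpdateString(cancellationToken);

            // Wait 5 seconds before the next refresh; the delay ends early
            // if cancellation is requested
            await Task.Delay(TimeSpan.FromSeconds(5), cancellationToken);
        }
    }
}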

I call the UpdateString method on the RandomStringProvider and then wait for 5 seconds before repeating. This all happens inside a while loop which continues indefinitely, until cancellation has been requested on the cancellation token. We pass the cancellation token down into the other async methods as well, so that they too can cancel their tasks all the way down the chain.

The final part of this is wiring up the dependency injection. Within the ConfigureServices method of Startup I must register the hosted service. I also register my RandomStringProvider class, since that is passed into the service via DI.
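
That wiring looks something like this (AddMvc stands in for whatever else your application already registers):

public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();

    // Singleton so the controller and the background service share the same instance
    services.AddSingleton<RandomStringProvider>();

    // Registered with the DI container so the HostedServiceExecutor will start and stop it
    services.AddSingleton<IHostedService, DataRefreshService>();
}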

Summary

The IHostedService interface provides a nice way to properly start background work in a web application. It’s a feature we should not overuse, as I doubt it’s intended for spinning up large numbers of tasks, but for some scenarios it offers a nice solution. Its main benefit is the chance to perform proper cancellation and shutdown of our background tasks when the host itself is shutting down.

A special thanks to the amazing David Fowler for a quick code review (and correction) of my DataRefreshService. It was very kind of him to spare me some of his time to help me better understand this new feature. I hope I’ve explained everything correctly so that others can benefit from what he shared with me.

Docker for .NET Developers (Part 7) Setting up Amazon EC2 Container Registry

In the previous part of this series I discussed the reasons behind our decision to use Docker to run our product in production, using it to build and deploy images onto Amazon ECS. The next parts of this series will focus on how we have achieved that and demonstrate how you can set up a similar pipeline for your systems. We’ll look at how we run our builds inside Docker via Jenkins, and how we deploy those images into AWS.

I’ll dive into some AWS ECS topics in these upcoming posts, discussing what it is and what it does in more detail. For now I want to start with one related part of ECS called the Amazon EC2 Container Registry (ECR). The reason I wanted to start with this is that I will soon be demonstrating our build process and a key part of that is pushing our images up to a registry.

What is a registry?

A container registry is ultimately just a store for images. For .NET developers, it’s quite a similar concept to NuGet. In the same way NuGet hosts packages, a registry hosts Docker images. As with NuGet, you can use a private registry, share your image publicly via a public registry or even host a local registry on your machine (inside Docker, of course). Images inside a registry can be tagged so you can support a versioning system for your images. Microsoft use this technique for their aspnetcore images. While they share the same name, there are variants of the images tagged with different versions to indicate the version of the SDK or runtime they include.
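
For example, pulling two differently tagged variants of the same image:

docker pull microsoft/aspnetcore:1.0
docker pull microsoft/aspnetcore:1.1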

The main reason we need a registry is to allow other people or systems to access and pull our images. They’d not be much use otherwise! For our system we started with a private registry running on Amazon ECR. The main reason we chose ECR was the fact that we would also be using AWS ECS to host our production system. ECR is a managed container registry which allows storing, managing and deploying Docker images.

Creating an EC2 Container Registry via the Console

In this post I want to share the steps I used to set up a container repository in ECR. It’s quite a straightforward process and takes only a few minutes. If you want to follow along you can sign up for AWS and try out the steps. Currently ECR is free for the first 500MB of images stored in it, so there should be no cost in following along.

AWS ECR is a subset of the main ECS service, so it appears under Repositories on the ECS menu. A Docker registry is the service that stores images; a repository refers to a collection of Docker images sharing the same name. If you’ve not used the service before you can click Get Started to create a new repository.

AWS ECR Getting Started 1

You are guided through the process, following a setup wizard. The first thing you must provide is a name for the repository to help identify it. You can optionally include a namespace before the repository name which would allow you to have more control over the naming. For example, you may choose to have a dev/dockerdemo repository and a prod/dockerdemo repository.

AWS will generate a unique URI for the repository which will be used when tagging images you want to push to it and for pulling images from it. As per the screenshot, permissions will be set up for you as the owner. You can provide more granular access to a registry later on. For example, your build server will need to use an account with push permissions, but developers may only need pull permissions.

AWS ECR Getting Started 2

After clicking Next Step your repository will be created. At the next screen you will be shown sample commands you can use to push images into the repository. I won’t go into those for now since we’ll look at them properly when we explore our Jenkins build script.

AWS ECR Getting Started 3

Continuing on, you are taken into the repository, which will currently be empty. Once you start pushing images to this repository you will see details of the images appear.

AWS ECR Getting Started 4

At this point you have an empty repository that can be used to store Docker images by pushing them to it. We’ll look at the process in future posts.

Using the CLI

While you can use the console to add repositories, you can also use the AWS CLI. You will first need to install the AWS CLI and configure it, providing a suitable access key and secret. Those will need to belong to a user with suitable permissions in AWS to create repositories. For my example I have a user assigned the AmazonEC2ContainerRegistryFullAccess policy.
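
Configuring the CLI is a one-off step; the key and secret values shown here are placeholders:

aws configure
AWS Access Key ID [None]: AKIAEXAMPLEACCESSKEY
AWS Secret Access Key [None]: exampleSecretAccessKey
Default region name [None]: eu-west-2
Default output format [None]: json
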
We can now run the following AWS command to create a new repository called “test-from-cli”:

aws ecr create-repository --repository-name test-from-cli

If this succeeds you should see output similar to the below:

{
    "repository": {
        "registryId": "865288682694",
        "repositoryName": "test-from-cli",
        "repositoryArn": "arn:aws:ecr:eu-west-2:865288682694:repository/test-from-cli",
        "createdAt": 1499798501.0,
        "repositoryUri": "865288682694.dkr.ecr.eu-west-2.amazonaws.com/test-from-cli"
    }
}

Summary

In this post we’ve explored how we can easily use the AWS console and CLI to prepare a repository inside the Amazon EC2 Container Registry. This will enable us to push up the images that we will later use within the container service to start up services.

Part 1 – Docker for .NET Developers Introduction
Part 2 – Working with Docker files
Part 3 – Why we started using Docker with ASP.NET Core
Part 4 – Working with docker-compose and multiple ASP.NET Core Microservices
Part 5 – Exploring ASP.NET Runtime Docker Images
Part 6 – Using Docker for Build and Continuous Deployment
Part 7 – This post

Announcing .NET South East
A new Brighton based .NET User Group

It’s been an exciting few weeks for me recently. First I was accepted to talk at two conferences in September, then our latest product at work went live, then I got a promotion at work and now I’ve decided to start a new .NET user group in Brighton, called .NET South East.

Brighton based .NET South East user group logo

The idea of starting a meetup has been at the back of my mind for a little while now and after much consideration I decided that I should just go ahead and get on with it. I’ve set up a new group on meetup.com called .NET South East. I expect it will mostly be attended by developers living and working in Brighton, but I’m hoping that we can encourage people to join from anywhere around Sussex.

Announcing the First Meetup

I’m very excited to be able to announce that the first meetup will be held on August 22nd. At that event I’ll be talking about Docker for .NET Developers. In this talk I will take you on a tour of Docker, a modern application packaging and containerisation technology that .NET developers can now leverage. I will share with you the Docker journey that our team at Madgex are on, exploring our motivations for using Docker. You will learn the core terminology .NET developers need to know to begin working with Docker and explore demos that show how you can start using Docker with your own ASP.NET Core projects. Finally, I will demonstrate how we have built a deployment pipeline using Jenkins and explore the AWS EC2 Container Services (ECS) configuration we have created to enable rapid, continuous delivery of our microservices.

Elmah.io have kindly provided sponsorship for this event in the form of a 6 month business license for their software. We will be holding a raffle at the end of the event for one lucky attendee to win this fantastic prize.

Why a User Group?

User groups are a place where like-minded people can come together to enjoy a common interest, sharing and learning about that interest together. I’ve attended a few general developer user group sessions and watched many more online and I always leave having learned something or with a take-away I could follow up on later. Even if it’s just the seed of an idea or something I’d like to try, it has been well worth my time. Along with the content from the speakers, it’s also a good chance to mix in with other developers and make contacts, share thoughts and ideas. Perhaps you’ll meet someone who can help with a problem you’ve been fighting recently!

I started working in Brighton nearly two years ago and since then I’ve kept an eye out for groups and talks to attend. The only .NET specific group I’ve found locally is Brighton ALT.NET which meets once a month to have open discussion about any topics that the attendees vote to talk about. It’s a great format and there’s a nice variation of topics and opinions from the community there. I’ve attended on a couple of occasions and plan to get along to more of their monthly events.

Some may wonder why start a group if one already exists, and it’s a fair question. What I’m proposing to introduce takes a different format to that of ALT.NET. I’m looking to bring in speakers from around the area, as well as hopefully further afield, giving them the chance to share a topic in depth with the audience. In many cases I expect the talks to be conference length (45-60 minutes), although I’m sure we can accommodate shorter talks as well.

Recently I met up with Mike who organises the ALT.NET evenings to run the idea past him. I was conscious that he already has a good community of regular attendees and I didn’t want to upset the balance by trying to introduce this second group. Mike was very encouraging of the idea and agreed that he felt there was room for both groups to exist and thrive together, helping to strengthen the local .NET community.

I recently watched a very inspiring talk from Ian Cooper at NDC Oslo entitled The .NET Renaissance. In that talk Ian highlights the historical decline of C# and .NET. Ian ended that talk with a call to action to everyone in our community to help create a renaissance of .NET. Together, we can do it and bring the change. It’s a pivotal time for .NET developers with the new .NET Core framework and the approach from Microsoft to embrace open source and community. Later this year, version 2.0 of .NET Core will be released and at that point porting over older .NET Framework projects should be even easier. I’m very much enjoying working with the new framework and sharing my experience on this blog, and soon at some meetups and conferences. I’m excited to play my small part in helping move the #dotnetrenaissance forward. Please join us!

What’s Next?

I’m still finding my feet as I establish this new group and start planning the events. I’m working on the logistics of the arrangements that need to be in place. My employer Madgex have very kindly agreed to allow me to use their meeting room space for the meetups. We have three meeting rooms that can be opened up into one large area, with A/V equipment and seating available. Perfect for our needs! Located close to the centre of Brighton, the Madgex office should be in easy reach of developers wanting to participate.

Madgex have also kindly provided funding to set up the meetup.com group so that I could start to gauge interest in starting a new group. Already I’ve had over 40 signups from people interested in the idea and I hope that many of those will be able to attend the meetups going forward.

Finding speakers was my main worry, but already I’ve been approached by a few people who have talks they can offer to present. I expect there are other potential speakers out there with content to share, but perhaps no outlet for it. If you’d like to come along and speak please do get in touch.

I’m still trying to decide on the best schedule for the meetups. Ideally I’d like to run them every month, about two weeks after the local ALT.NET meetup. To begin with I’m planning on every two months as we build up the interest and I make arrangements with enough speakers who can present at the meetups. We’ll judge this on interest and the logistics of organising everything.

Call for Attendees

I’d love to get as many developers from our community involved in the meetups and attending regularly. I really believe that they will be a great chance to learn about topics that are necessary for .NET developers to thrive. Let’s get together and share our passion for what we do. I do urge you to save the date and RSVP on meetup.com. Please do spread the word with friends and colleagues who may want to attend.

Call for Speakers

I’d love to hear from you if you have a talk you want to present. It would be great to hear from the many local developers we have in Brighton, sharing what they do and teaching others about technologies they are using. If you’re further afield, but able to travel, we’d love to have you. I’d love to welcome first time speakers to join us as well. I’ve only just begun speaking myself and I’m finding it to be a great experience that is teaching me a lot along the way. I’ve never been a confident public speaker, but have found that by diving in, I’m able to deal with that fear and share my passion. Please do get in touch and I’ll help in any way I can.

Call for Sponsors

We already have two fantastic sponsors on-board, Madgex are providing their meeting space for free and assisting with some of the costs to get the event up and running. Elmah.io are providing a license as a prize for one attendee to win. If you’re a company in a position to offer prizes or sponsorship to our new group to help us get off the ground, please do get in touch.

Conclusion

I’m excited to get started to try to do my part to help build on the .NET community here in Brighton. I’m learning as I go and developing my own skills to organise the meetup and network with peers. I’d like to offer a huge thanks to those who have helped me so far. I’ve had great support from other event organisers (Dan Clarke, Joe Woodward, Dylan Beattie, Derek Comartin), community members via Twitter, Madgex and the staff there and elmah.io. Thanks to Mike from ALT.NET for his support and input and a special thanks to Ben Wood, a talented designer at Madgex who is kindly helping to develop a brand identity and digital assets for the new group.

Speaking at Progressive .NET Tutorials 2017

Back in early May I was listening to a .NET Rocks episode when Carl and Richard mentioned a conference called Progressive .NET Tutorials that they would be attending in London. I’m a big believer in attending conferences as I find them to be a great way to learn new skills and network with similarly minded people. I headed over to the Progressive .NET website to check out the details about the event.

The full line-up was not complete when I made that first visit, but the keynote from Jon Galloway had me sold already. Then a section lower down the page caught my eye, and there began the events leading to me speaking at my first conference! I’d noticed the call for papers signup form and, having already been thinking about opportunities to do more technical speaking, I decided that I would submit a talk; in fact I ended up submitting two potential talks. After submitting, the wait began until the closing deadline passed and the organisers began selecting speakers. Something that is great about Progressive .NET is that they were welcoming to first-time speakers as well as experienced ones.

About 3 weeks ago I received an email from Nicole at Skills Matter and was delighted to read that one of my talks had been selected for inclusion in the 2017 programme. I’ll be presenting a talk entitled Docker for .NET Developers. In this talk I’ll be sharing with the audience a journey that our team at Madgex have been on, using Docker as we develop a new product. This has been our first experience with Docker and along the way we’ve learned a lot. We are now using Docker not only to simplify the developer workflow, but also for our build and deployment process. We are leveraging Amazon EC2 Container Services to host our production system which is built using .NET Core based microservices.

I’m really excited about the content I’m building for this talk. I’ll be sharing the things we’ve learnt about Docker in our experience so far. I’ll take a look at what Docker is, what problems it has helped us solve and how we implemented it in our developer and deployment workflows. Along the way we’ll take a look at some demos of what this looks like in code, exploring dockerfiles, docker-compose and our build and deployment process. I hope this talk will be useful to developers looking to begin using Docker in their own projects. We’ll start with the basics but by the end of the session I hope to show an example of how we do a build and live deploy, with zero downtime into AWS. It’s a very exciting time to be a .NET developer and the release of the cross-platform .NET Core framework has opened the door to this exciting new application platform. I’m excited to see where the new framework takes us in the next 12 months as we see the release of .NET Core 2.0. Architectures and strategies around microservices and serverless are also creating some interesting new ways for developers to think about building new applications.

Over the years I’ve learned a lot from the .NET community, reading blog posts, listening to podcasts and watching videos. I have a passion for learning new things and investigating new techniques. Watching talks from many great speakers has influenced my personal growth, shown me new ideas and helped me develop my technical skills. I’m looking forward to the chance to begin participating more directly by talking about some of the things we’ve learned about Docker and Amazon ECS. I will be giving it my all to present a fast paced, information packed talk about Docker and hopefully help other developers to begin using it. I’m extremely passionate about what I do and the chance to share that with an audience of peers at this event is very exciting.

I’m extremely honoured and privileged to be joining such an amazing line-up of world class experts at this event. Many of the speakers are people whom I follow and who share great content in their areas of expertise. It’ll be exciting to attend some of their sessions and learn more about all of the other great topics being discussed at this event.

I look forward to the opportunity to meet some fellow developers at the conference. If you’re free on September 13th – 15th then I do recommend that you take a look at the website and buy yourself a ticket. I hope to see some of you there, please do come and say hi!

Docker for .NET Developers (Part 6) Using Docker for Build and Continuous Deployment

In part 3 I discussed one of the first motivations which led our team to begin using Docker. That motivation was focused on making the workflow for our front end developers quicker and simpler. In this post I want to explore a second motivation which led us to fully embrace Docker on our project. Just like part 3, this post doesn’t have any code samples; sorry! Instead I want to share the thought process and concepts from the next phase of our journey without getting too technical. I believe this will give the next parts in this series a better context.

Why deploy with Docker?

Once we had Docker in place and we were reaping the benefits locally, we started to think about the options we might have to use Docker further along our development lifecycle, specifically for build and deployment.

Our current job board platform follows continuous delivery. Every time a developer checks in, the new code is picked up by the build system and a build is triggered. After a few minutes the code will have been built and all tests run. This helps to validate that the changes have not broken any existing functionality.

Deployments are then managed via Octopus Deploy, which will take the built code and deploy it onto the various environments we have. Code will be deployed onto staging and within that environment our developers have a chance to do some final checking that the new functionality is working as expected. Our testing team have the opportunity to run regression testing against the site to validate that no functionality has been broken. Once the testing is complete, the code is triggered for deployment onto our production environment. This is a manual, gated step which prevents code releasing without one or more developers validating it first.

That flow looks like this:

Existing Continuous Delivery Flow

With our new project we agreed that ideally we wanted to get to a continuous deployment flow, where code is checked in, tested and deployed straight to live. That sounds risky I know and was something we weighed up carefully. A requirement of this approach is that we can fail fast and rapidly deploy a fix or even switch back to a prior version should the situation require it (we can get a fix to live in about ~5 minutes). By building in smaller discrete microservices we knew we would be reducing the complexity of each part of the system and could more easily test them. We are still working out some additional checks and controls that we expect to implement to further help prevent errors slipping out to live.

At the moment this involves many unit tests and some integration tests within the solutions using the TestHost and TestServer which are part of ASP.NET Core. I’m starting to think about how we could leverage Docker in our build pipeline to layer in additional integration testing across a larger part of the system. In principle we could spin up a part of the system automatically and then trigger endpoints to validate that we get the expected response. This goes a step further than the current testing as it tests a set of components working together, rather than in isolation.

One of the advantages that Docker provides is simplified and consistent deployments. With Docker, you create your images and these then become your unit of deployment. You can push your images to a container registry and then deploy an image to any Docker host you want. Because your image contains your application and all of its dependencies, you can be confident that once deployed, your application will behave in the same way as it did locally.

Also, by using Docker to run your applications and services, you no longer need to maintain dependencies on your production hosts. As long as the hosts are running Docker, there are no other dependencies to install. This also avoids conflicts arising between applications running on the same host system. In a traditional architecture, if a team wants to deploy an application on the same host which requires a newer version of a shared dependency, you may not be able to upgrade the host without introducing risk to the existing application.

Using Docker for Builds

In prior posts we’ve seen that we can use the aspnetcore-build image from Microsoft to perform builds of our source code into a final DLL. This opens the door to standardise the build process as well. We now use this flow for our builds, with our Jenkins build server being used purely to trigger the builds inside Docker. This brings similar benefits as I described for the production hosts. The build server does not need to have the ASP.NET Core SDK installed and maintained. Instead, we just need Docker and then can use appropriate build images to start our builds on top of all of the required dependencies. Using this approach we can benefit from reliable repeatability. We don’t have to worry about an upgrade on the build server changing how a build behaves. We can build applications that are targeting different ASP.NET Core versions by basing them on a build image that contains the correct SDK version.

Some may ask what the difference is between Docker and Octopus, or Docker versus Jenkins. They all have overlapping concerns, but Docker allows us to combine the build process and deployment process using a single technology. Jenkins in our system triggers builds inside Docker images and we then ship the built image up to a private container registry (we use Amazon ECR, which I’ll look at soon).

Octopus is a deployment tool, it expects to take built components and then handles shipping them and any required configuration onto deployment targets. With Docker, we ship the complete application, including dependencies and configuration inside the immutable Docker image. These images can be pulled and re-used on any host as required.

Why Jenkins?

In our case there was no particular driver to use Jenkins. We already had access to Jenkins running on a Linux VM within our internal network and saw no reason to try out a new build server. We asked our systems team to install Docker and we then had everything we needed to use this box to trigger builds. In future posts I’ll demonstrate our build scripts and process. I’m sure that most of the steps will translate to many other common build systems.

Hosting with AWS

A final decision that we had to make was around how we would host Docker in production. At the time our project began we were already completing a migration of all of our services into AWS. As a result, it was clear that our final solution would be AWS based. We had a look at the options and found that AWS offered a container service which is called Amazon ECS.

The options for orchestrating Docker are a little daunting, and at this time I haven’t personally explored alternative solutions such as DC/OS or Kubernetes. Like Amazon ECS, they are container orchestration services that schedule containers to run and maintain the required state of the system. They include things like container discovery to allow us to address and access the services we need. Amazon ECS is a managed service that abstracts away some of the complexities of setting these systems up and managing them. However, this abstraction comes with the cost of some flexibility.

With AWS ECS we can define tasks to represent the components of our system and then create services which maintain a desired count of containers running those tasks. Our production system is now running on ECS and the various components are able to scale in response to triggers such as queue length, CPU load and request volumes. In future posts I’ll dive into the details of how we’ve set up ECS. We have now created a zero downtime deployment process, taking advantage of the features of ECS to start new versions of containers and switch the load over when they are ready to handle requests.

Our current Docker based deployment flow looks like this:

Flow of a docker build

Developers commit into git locally and push to our internally hosted git server. Jenkins picks up on changes to the repository using the GitHub hook and triggers a build. We use a script to define the steps Jenkins will use, the resulting output is a Docker image. Jenkins pushes this image up to our private registry which is running in AWS on their EC2 Container Registry (ECR). Finally, Jenkins triggers an update on the Amazon container service to trigger starting new container instances. Once those instances are successfully started and passing the Application Load Balancer health checks, connections to the prior version of the containers are drained and those containers stopped and removed. We’ll explore the individual elements of this flow in greater depth in later blog posts.

Summary

In this post we have looked at a secondary motivation for using Docker in our latest project. We explored at a high level the deployment flow and looked at some of the practical advantages we can realise by using Docker images through the entire development and deployment lifecycle. We are still refining our approaches as we learn more, but we found it fairly simple to get up to speed using Jenkins as a build system via Docker. In the next set of posts I’ll dive into how we’ve set up that build process, looking at the scripts we use and the optimised images we generate to help improve start-up time of containers and reduce the size of the images.

Part 1 – Docker for .NET Developers Introduction
Part 2 – Working with Docker files
Part 3 – Why we started using Docker with ASP.NET Core
Part 4 – Working with docker-compose and multiple ASP.NET Core Microservices
Part 5 – Exploring ASP.NET Runtime Docker Images
Part 6 – This post
Part 7 – Setting up Amazon EC2 Container Registry