
Docker for .NET Developers (Part 4): Working with docker-compose and multiple ASP.NET Core microservices

In Part 3 we looked at one of the motivations behind using Docker with ASP.NET Core: enabling simpler developer processes. In this post we'll look at using docker-compose files to define and run multi-container systems. I will take a look at how we can build a sample application containing two services that are spun up at the same time with a single command.

Working with docker-compose

This will be a simplified example of what we are doing with our front end teams on our project. To keep it easy to work with, I'll include everything in the same solution and the focus will be mostly on the docker-compose format and how using docker-compose can simplify starting and managing multi-container systems. If you want to try out the code for yourself, I have uploaded the source from this post to GitHub.

This solution contains two ASP.NET Core API projects: one simulating a back end API service and another simulating a front end API service. This is a simplified view of what we have in our real system. Our front end developers need to call the front end API service in order to gather some data to expose on the UI. In this example we can pretend that the back end API is providing some data to the front end API, which it will enrich and reformat in order to produce its final output.

In our real system, we have each service as a separate repository and Visual Studio solution. We have a specialised query API which interacts with ElasticSearch and provides this data for consumption by multiple services. One of those services is our report API, which translates the returned data into a model suitable for use by the front end. In our first iteration of the development flow we had our front end developers pulling the latest source for each of the API systems, but we are currently reworking that to produce a more streamlined flow. However, it's still very useful to know how to use docker-compose to start up and maintain multiple, related Docker containers.

If you take a look in each of the projects you'll see I've included a dockerfile in the root. These dockerfiles are very much like the one I demonstrated in detail in part 2 of this series. They use the aspnetcore-build Docker image to build and then run our application code. Again, this is not exactly how you will want to end up building your images for deployment in the real world, but it works for this example.
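As a rough sketch (the image tag, and the assumption that each dockerfile sits alongside a single project, are mine rather than taken from the sample), such a dockerfile might look like this:

FROM microsoft/aspnetcore-build:1.1
WORKDIR /app

# Copy the project in and restore its NuGet packages
COPY . .
RUN dotnet restore

# Listen on port 80 inside the container
ENV ASPNETCORE_URLS http://+:80
EXPOSE 80

# Build and run the application when the container starts
ENTRYPOINT ["dotnet", "run"]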

The docker-compose file structure

If you take a look in the root of the solution, I’ve included a docker-compose.yml file.

Docker Compose, as described in the Docker documentation, is a command line tool which allows us to define and run multi-container Docker applications. It works with a yaml based file in which you can define one or more services. Each service will be started as a container, and using this file you can provide a practical way for someone to start up a set of related containers, with defined dependencies between each component.

If we take a look at the docker-compose file in my sample it looks like this:
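(The exact file is in the GitHub sample; in this sketch the project paths, host port, container name and setting key are assumptions rather than the values from the sample.)

version: '3.2'

services:
  backend-service:
    build:
      context: ./BackEnd.Api          # assumed path to the back end project containing its dockerfile
    container_name: backend-service   # the explicit container name used here is an assumption

  frontend-service:                   # assumed service name for the front end API
    build:
      context: ./FrontEnd.Api         # assumed path to the front end project
    ports:
      - "8080:80"                     # host port : container port; 8080 is an assumed host port
    environment:
      - BackEndService__Url=http://backend-service:80   # assumed setting name; the value addresses the back end by its service name
    depends_on:
      - backend-service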

First we specify a version for the docker-compose file. I'm using a fairly recent 3.2 version. This simply determines which features are available for use in the docker-compose file.

It then has a services node. It’s in here that we will create a definition for each service in our system. The first one I define is a backend-service. The build element allows me to define the context. This is the path to the directory where the dockerfile resides for that service. In my case I navigate to the path of the back end project, which is where I located the dockerfile for that API.

In this docker-compose file I provide an explicit name for this container that will be used when starting it up. That’s all there is for the back end service.

The next service I define is the front end service. This looks similar to the back end one, but includes some extra properties. The first is the ports option. This allows me to expose ports from the container out to the host which runs it. As this is the front end API and we expect to be able to call it via the host, we must provide a publicly exposed port on the host machine, otherwise we would not be able to access it. We define this with host port (the port to use on the host) and then the container port (the internal port the application runs on inside the container). These are separated with a colon.

You may be wondering why we didn’t do this on the back end service. The front end service needs to talk to it in order to retrieve data, so how is that working? This is where some magic happens. Docker provides a DNS system used to locate containers, and docker-compose sets up a Docker network for your defined services. All containers that are part of the docker-compose services definition are reachable by other containers on that network. This means that we don’t have to explicitly expose the back end port for the containers to communicate. As we’re not expecting to allow external systems to call the back end API, it’s best to avoid exposing its port on the host. I can limit my exposure to only the single front end port that I need to expose.

The next configuration option I define is the environment section. This allows us to set environment variables inside the container that will be started using this docker-compose file. On my front end service I used the standard ASP.NET Core configuration code to define a setting for the path to the back end service. This is required as the value will be different in development and production. The ASP.NET Core configuration is layered, and although it will have loaded a value for the back end setting from the appsettings.json file, it can be overridden using an environment variable. This powerful configuration structure complements Docker very nicely, since we can always specify the required environment variables for each running container.
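To make that layering concrete, environment entries along these lines override whatever values were loaded from appsettings.json (the BackEndService__Url key is an assumption for this sketch, not necessarily the setting name used in the sample); ASP.NET Core maps a double underscore in an environment variable name to the ':' separator used for nested configuration sections:

    environment:
      - ASPNETCORE_ENVIRONMENT=Development             # standard ASP.NET Core variable selecting the hosting environment
      - BackEndService__Url=http://backend-service:80  # assumed key; '__' maps to ':', so this overrides BackEndService:Url from appsettings.json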

You’ll note I’ve used http://backend-service:80 for the value of the back end endpoint setting. How does that work? Remember that I mentioned that Docker has its own DNS and that docker-compose creates a network for the set of services. This allows me to address the other containers using the name I gave the service. Note that this isn’t the container name; it’s the service name, the key used to define the service in the compose file. The Docker DNS system will resolve this name between the containers. I use the port that the service runs on inside the container, which by default will be port 80 for the ASP.NET Core API.

The final new element I’ve included is the depends_on section. This allows me to define a dependency between containers. Here I state that the front end service depends on the back end service, so its container will be started after the back end container has been started (note that depends_on controls start order only; it does not wait for the application inside the back end container to be ready). On our project we use this to start up a Postgres database as part of our chain of services. Our front end APIs depend on the database being started first in order to start up and seed their data. That’s all we need to do to define our basic docker-compose file.
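To illustrate that kind of chain, a compose file can pull in the public postgres image and have a service depend on it. The service names, image tag, paths and credentials below are assumptions for the sketch rather than our actual configuration:

version: '3.2'

services:
  reporting-db:
    image: postgres:9.6                 # public Postgres image pulled from Docker Hub
    environment:
      - POSTGRES_PASSWORD=localdevonly  # assumed credentials, suitable for local development only

  frontend-service:
    build:
      context: ./FrontEnd.Api           # assumed project path
    depends_on:
      - reporting-db                    # compose starts the database container before this one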

docker-compose Commands

Now that we have our docker-compose file we can use it to start up our containers. We can do this by dropping into a command line at the path where our docker-compose file is defined. There are a couple of commands we can now use. I’ll start with:

docker-compose up -d

This will inspect the docker-compose file in the root of the path where the command is run. It will build and start up a container for each of the defined services. The -d argument specifies that the containers start in detached mode and run in the background. If you choose to leave this off, you will see the combined output from each of the containers streamed to the console. If there are no images available for those containers, they will be built automatically.

This single command provides a really handy way to start up multi-container systems. While my example is quite simple, having only two services, our real-world front end docker-compose file contains four API based services, one Postgres database container and sometimes even an ElasticSearch container. Each of those can be built and run manually using the individual Docker commands against their respective dockerfiles, but it’s much nicer for our front end developers to use docker-compose and a single command.

I mentioned there were other commands we can use. While docker-compose up will build images for us if they do not already exist, once images exist, it will keep using those images. If we make changes to the source files and expect new images to be built we can use docker-compose build to explicitly build the new images. This will only build them, and will not start any containers. You can also do a build and up in a single command using docker-compose up --build. This will build new images and then start all of the containers.

I also want to draw attention to the --no-cache option for docker-compose build. This forces Docker to ignore any cached image layers and recreate each layer again. I’ve found this useful when testing a docker-compose file change to ensure all steps are working as expected.

Once containers are running you might wonder how we can stop them. You can use docker-compose stop to stop all of the containers defined in the docker-compose file. To cleanup you can use docker-compose rm to remove the stopped containers. There is also docker-compose down if you would like to stop and then remove the containers using a single command.
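As a quick reference, the commands covered in this section, all run from the directory containing the docker-compose file, are:

docker-compose up -d              # build images if needed and start all services in the background
docker-compose up --build -d      # rebuild the images, then start the containers
docker-compose build              # build (or rebuild) the images without starting any containers
docker-compose build --no-cache   # rebuild every image layer, ignoring the layer cache
docker-compose stop               # stop the running containers
docker-compose rm                 # remove the stopped containers
docker-compose down               # stop and remove the containers in a single command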

Summary

In this blog post I have described a real-world scenario that we encountered and which I hope demonstrates a benefit of using Docker during development. By including Docker in our developer flow we have removed the need for front end developers to use a Windows VM and to be responsible for manually building our .NET code in an IDE such as Visual Studio. Builds are quicker and there are very few dependencies required to get a new front end developer on-boarded to our project. We have no complex “set up your developer machine” documents containing specific versions of applications, registry settings and IIS configuration.

In particular, we focused on how this is useful for microservice based architectures, where a single working system is often built from many smaller services. We used docker-compose to define our set of services, and with a single command we’re able to start the components on a developer machine. Using this approach we can define and co-ordinate the elements making up the system so that front end developers can run the environment on their devices with ease.

Finally we explored the structure of a simple docker-compose file, looking at how we can define the services, set container environment variables and expose ports.

Other Posts In This Series

Part 1 – Docker for .NET Developers Introduction
Part 2 – Working with Docker files
Part 3 – Why we started using Docker with ASP.NET Core
Part 4 – This post
Part 5 – Exploring ASP.NET Runtime Docker Images
Part 6 – Using Docker for Build and Continuous Deployment
Part 7 – Setting up Amazon EC2 Container Registry


Docker for .NET Developers (Part 3): Why we started using Docker with ASP.NET Core

In the prior two posts in this series we covered the main Docker terminology and looked at creating a basic dockerfile to define a Docker image containing an ASP.NET Core API application. In this post I want to switch it up a bit and spend a little time sharing a specific practical reason that led our team to start using Docker in our development flow. This was our first step in the Docker journey that we now find ourselves on and expands on the summary I introduced in part 1.

Why we started using Docker

Warning! This post has no code examples. Instead, what I’ll be describing is a real-world situation that led us to start using Docker for our new project. I want to highlight why Docker was a good choice given our requirements, and hopefully you’ll find cases in your work where it may be relevant to consider Docker yourself. I think it’s useful to share this story: while there’s a lot of excitement about Docker and some clear benefits, it’s worth reflecting on a practical advantage it brought us with very little effort. This is simply the start of the journey and I will be expanding on how our use evolved as the series continues. I hope you find this part useful, but don’t worry, I won’t be offended if you want to skip on to the docker-compose examples in part 4! See, I even provided a link!

As I explained in the first post, our new greenfield project at work kicked off in 2016. We needed to build a system to provide data analytics over a very large set of event based data. I won’t cover the architecture of the back end components at this stage. For this post it’s the front end where I want to demonstrate the first benefit we were able to gain from using Docker.

At work, our front end and back end teams are split and work on their areas of expertise accordingly. On our current platforms, we have an in-house MVC framework which uses a custom templating language for the front end pages. When our front end developers need to perform work on the UI, they must pull down and update the main platform code on their devices in order to spin up the site locally. Given that this is a large platform, the changes they need to pull can be substantial. In order to work with the site, they must compile the code and run it locally, which, as this platform is built on the full .NET Framework, requires Windows and IIS. Therefore, each developer, most of whom use a Mac, must have a Windows VM installed, or access to a remote VM, in order to work with the code.

Over time this flow has evolved to make the process as efficient as possible, but it’s still not an ideal solution. With this new project we had the opportunity to try a more modern approach. Given that the UI for this analytics style application was very interactive, we decided that it made sense to develop a SPA style front end. This had the benefit of enabling the front end developers to choose a technology that suited them. After a bit of research they landed on Vue.js.

In order to support the front end SPA, we agreed that we would provide a number of REST API services, each providing a specific, bounded piece of functionality. We could have chosen to build one larger, all-encompassing API application, but decided that the smaller APIs made scaling each component much easier. Each component can be scaled according to its own load, allowing us to make better use of the server’s resources. It also means we can keep the code separated and hopefully easier to maintain. We can change each component independently and work more rapidly.

We ended up with a design that included 3 web facing API services, each backed by its own database. We have a user API which handles authentication and authorisation for the application, as well as other user management services. We have a report API that enables the creation and running of reports and we have a schedule API used for creating scheduled reports and managing downloads. Finally, we have a back end API which is responsible for querying over ElasticSearch, processing the data and returning the resulting data. This query API is used by both the reporting API and our offline back end scheduled report service.

Here is a diagram showing the main components of our front end services architecture:

[Diagram: Docker microservice architecture]

At the time we were creating our architecture, ASP.NET Core was in RC and we took a brave decision, which proved to be a good one in the long run, to use the new framework to develop our APIs. Not only were we interested in the performance and framework improvements, but we also knew we could begin to take advantage of the cross platform nature of .NET Core to allow us to look at different hosting options. Having recently started moving our systems into AWS, we were initially considering the possibility of using Linux VMs to host the APIs.

As we pursued this path, another benefit of this decision came to light. Now that we were cross platform and planning to target Linux for hosting, we realised we could start to look at Docker and containerisation as part of our workflow. This opened up some new possibilities and after talking through the concept we realised that we could make development for the front end team much easier by providing Docker images to run the API code. The big change this presented is that they would no longer need to run Windows in order to build the UI for the platform and they would no longer need Visual Studio in order to build the source code.

A downside of splitting things into smaller microservices is that in order for a system to function, you often need multiple things running together at the same time. In our case, each of the 4 main API services needed to be running for the front end to be able to fully function. Each of those also needed a database server (we had chosen Postgres) to support its data storage requirements. Spinning up all of these parts needed a little coordination.

When we started looking at Docker we realised that we could solve this problem using docker-compose files. Docker-compose provides a simple way to start multiple containers with a single command. In our case, each service is a separate Visual Studio solution and repository. This allows us to develop, build and deploy each part of the system individually. As long as the public facing API endpoints do not change, we can update the internals of each API with no impact on the other parts of the system. In each solution we include a dockerfile to describe a Docker image that will run the API service. These dockerfiles initially started very much like the example I showed in part 2 of this series. We have since gone on to optimise the files and I plan to explore that in a future post.

In addition to the dockerfiles we also provide a docker-compose file which, with a single command, can be used to build all of the required images. With a second command we can start all of the containers needed to support the front end. Our first iteration of the front end workflow relied on the front end developers pulling the latest source from each repository. They could then use the docker-compose build command which triggered image creation. Since the build is happening inside the Docker containers, they do not need to run Windows or even have the .NET SDK installed on their Macs. As part of the solution, we also utilised a public Docker image for Postgres as one of the services defined within our docker-compose file. This means that the front end team do not need to install Postgres on their own device either. Other than Docker, there are no dependencies to run the back end services required to develop the front end website. With these steps we had removed a barrier for the front end team and very much simplified our development process.

In the second iteration that we are now working on for the front end workflow we are providing the front end team with a new docker-compose file which pulls images from a private container registry running in AWS ECR. With this change there is no need for the front end developers to ever pull or update the source for the components they need. Instead, the compose file will pull the latest available images from the registry and have them up and running extremely quickly. We’re still investigating and testing how we want to finalise this part of our development workflow so it’s something I’ll share in a future post.

The same process proved really useful for QA and testing as well. Anyone involved with testing the system could pull the components down and run the full system in isolation on their machine. The front end website is also containerised, and in this case we even used an ElasticSearch Docker image to allow a full end to end system to be started and tested on a machine, with few external dependencies. The dependencies we could not containerise were things such as access to Amazon SQS and S3.

One by-product of leveraging Docker for the front and back end developer flow is that we can start to eliminate the “it worked on my machine” arguments. By running everything inside Docker we get to a place where everyone is working on an identical, repeatable environment. Because our image contains all of the dependencies, we can be sure we have matching versions and configuration every time it is run. We don’t have to ask developers to maintain correct versions of dependencies on their devices either.

Other Posts In This Series

Part 1 – Docker for .NET Developers Introduction
Part 2 – Working with Docker files
Part 3 – This post
Part 4 – Working with docker-compose and multiple ASP.NET Core microservices
Part 5 – Exploring ASP.NET Runtime Docker Images
Part 6 – Using Docker for Build and Continuous Deployment
Part 7 – Setting up Amazon EC2 Container Registry