Docker for .NET Developers (Part 4): Working with docker-compose and multiple ASP.NET Core microservices

In Part 3 we looked at one of the motivations behind using Docker with ASP.NET Core: enabling a simpler developer workflow. In this post we’ll look at using docker-compose files to define and run multi-container systems. I’ll walk through building a sample application containing two services that are spun up at the same time with a single command.

Working with docker-compose

This will be a simplified example of what we are doing with our front end teams on our project. To keep it easy to work with, I’ll include everything in the same solution, and the focus will be mostly on the docker-compose format and how using docker-compose can simplify starting and managing multi-container systems. If you want to try out the code for yourself, I have uploaded the source from this post to GitHub.

This solution contains two ASP.NET Core API projects: one simulating a back end API service and another simulating a front end API service. This is a simplified view of what we have in our real system. Our front end developers need to call the front end API service in order to gather some data to expose on the UI. In this example we can pretend that the back end API is providing some data to the front end API, which the front end will enrich and reformat in order to produce its final output.

In our real system, we have each service as a separate repository and Visual Studio solution. We have a specialised query API which interacts with ElasticSearch and provides this data for consumption by multiple services. One of those services is our report API, which translates the returned data into a model suitable for use by the front end. In the first iteration of the development flow we had our front end developers pulling the latest source for each of the API systems; we are currently reworking that to produce a more optimal flow. However, it’s still very useful to know how to use docker-compose to start up and maintain multiple related Docker containers.

If you take a look in each of the projects you’ll see I’ve included a dockerfile in the root. These dockerfiles are very much like the one I demonstrated in detail in part 2 of this series. They use the aspnetcore-build Docker image to build and then run our application code. Again, this is not exactly how you will want to build your images for deployment in the real world, but it works for this example.
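For reference, the dockerfiles look roughly like the sketch below. Part 2 walks through the real file in detail; the image tag, project layout and use of dotnet run here are illustrative assumptions rather than the exact file from the sample.

FROM microsoft/aspnetcore-build:2.0
WORKDIR /app
# Copy the project source into the image and restore NuGet packages
COPY . .
RUN dotnet restore
# Listen on port 80 inside the container
ENV ASPNETCORE_URLS http://+:80
EXPOSE 80
# Build and run the app when the container starts (the simple approach from part 2)
ENTRYPOINT ["dotnet", "run"]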

The docker-compose file structure

If you take a look in the root of the solution, I’ve included a docker-compose.yml file.

Docker Compose, as described in the Docker documentation, is a command line tool which allows us to define and run multi-container Docker applications. It is driven by a YAML-based file in which you can define one or more services. Each service will be started as a container, and using this file you can provide a practical way for someone to start up a set of related containers with defined dependencies between each component.

If we take a look at the docker-compose file in my sample, it looks something like this (the snippet below is a reconstruction based on the walkthrough that follows, so treat the project paths, host port and setting name as illustrative):
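version: '3.2'

services:
  backend-service:
    build:
      context: ./src/BackEnd.Api    # directory containing the back end dockerfile (path assumed)
    container_name: backend-service

  frontend-service:
    build:
      context: ./src/FrontEnd.Api   # directory containing the front end dockerfile (path assumed)
    container_name: frontend-service
    ports:
      - "8080:80"                   # host port (assumed) : container port
    environment:
      - BackEndUrl=http://backend-service:80   # setting name assumed for illustration
    depends_on:
      - backend-service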

First we specify a version for the docker-compose file. I’m using a fairly recent 3.2 version. This simply defines the features available in the docker-compose file.

It then has a services node. It’s in here that we create a definition for each service in our system. The first one I define is the backend-service. The build element allows me to define the context: the path to the directory where the dockerfile for that service resides. In my case this points at the back end project directory, which is where I placed the dockerfile for that API.

In this docker-compose file I provide an explicit name for this container that will be used when starting it up. That’s all there is for the back end service.

The next service I define is the front end service. This looks similar to the back end one, but includes some extra properties. The first is the ports option. This allows me to expose ports from the container to the host which runs it. As this is the front end API and we expect to be able to call it from the host, we must publish a port on the host machine, otherwise we would not be able to access it. We define this as the host port (the port to use on the host) followed by the container port (the internal port the application listens on inside the container), separated with a colon.

You may be wondering why we didn’t do this on the back end service. The front end service needs to talk to it in order to retrieve data, so how does that work? This is where some magic happens. Docker provides a DNS system used to locate containers, and docker-compose sets up a Docker network for your defined services. All containers that are part of the docker-compose services definition are reachable by the other containers on that network. This means that we don’t have to explicitly expose the back end port for the containers to communicate. As we’re not expecting external systems to call the back end API, it’s best to avoid exposing its port on the host. I can limit my exposure to only the single front end port that I need.

The next configuration option I define is the environment section. This allows us to set environment variables inside the container that will be started from this docker-compose file. On my front end service I used the standard ASP.NET Core configuration code to define a setting for the path to the back end service. This is required as it will differ between development and production. The ASP.NET Core configuration is layered, and although it will have loaded a value for the back end setting from the appsettings.json file, that value can be overridden using an environment variable. This powerful configuration structure complements Docker very nicely, since we can always specify the required environment variables for each running container.

You’ll note I’ve used http://backend-service:80 for the value of the back end endpoint setting. How does that work? Remember that I mentioned that Docker has its own DNS and that docker-compose creates a network for the set of services. This allows me to address the other containers using the name I gave the service. Note that this isn’t the container name; it’s the service name, the key under which the service is defined in the services node. The Docker DNS system will resolve this name between the containers. I use the port that the service runs on inside the container, which by default will be port 80 for the ASP.NET Core API.
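To make that concrete, here is a minimal sketch of how the front end could read the setting and call the back end. The BackEndUrl key and the /api/values route are assumptions for illustration, not the actual code from the sample.

using System;
using System.IO;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Extensions.Configuration;

public static class BackEndClient
{
    public static async Task<string> GetDataAsync()
    {
        // Layered configuration: environment variables are added after the JSON
        // file, so the value set in docker-compose.yml overrides appsettings.json.
        var configuration = new ConfigurationBuilder()
            .SetBasePath(Directory.GetCurrentDirectory())
            .AddJsonFile("appsettings.json", optional: true)
            .AddEnvironmentVariables()
            .Build();

        // Inside the compose network this resolves to http://backend-service:80,
        // with Docker's DNS mapping the service name to the container.
        var backEndUrl = configuration["BackEndUrl"];

        using (var client = new HttpClient { BaseAddress = new Uri(backEndUrl) })
        {
            return await client.GetStringAsync("/api/values"); // route assumed
        }
    }
}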

The final new element I’ve included is the depends_on section. This allows me to define a parent/child dependency between the containers. Here I state that the front end service depends on the back end service, so it will be started after the back end is running. On our project we use this to start up a Postgres database as part of our chain of services. Our front end APIs depend on the database being started first in order to start up and seed their data.
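As an illustration of that pattern, the services section might grow like the hypothetical fragment below (the image tag and names are assumptions). Note that depends_on only controls start order; it does not wait for the service inside a container to be ready to accept connections.

  database:
    image: postgres:9.6     # image tag assumed
  frontend-service:         # repeated here only to show the extra dependency
    depends_on:
      - backend-service
      - database            # start the database before the front end

That’s all we need to do to define our basic docker-compose file.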

docker-compose Commands

Now that we have our docker-compose file we can use it to start up our containers. We can do this by dropping into a command line at the path where the docker-compose file is located. There are a couple of commands we can now use. I’ll start with:

docker-compose up -d

This will inspect the docker-compose file in the directory where the command is run. It will build and start a container for each of the defined services. The -d argument specifies that the containers start in detached mode and run in the background. If you choose to leave this off, you will see a combined streamed console output from each of the containers. If there are no images available for those services, they will be built automatically.

This single command provides a really handy way to start up multi-container systems. While my example is quite simple, having only two services, our real world front end docker-compose file contains four API based services, one Postgres database container and sometimes even an ElasticSearch container. Each of those can be built and run manually using individual Docker commands against their respective dockerfiles, but it’s much nicer for our front end developers to use docker-compose and a single command.

I mentioned there were other commands we can use. While docker-compose up will build images for us if they do not already exist, once images exist it will keep using those images. If we make changes to the source files and expect new images to be built, we can use docker-compose build to explicitly build the new images. This will only build them; it will not start any containers. You can also build and start in a single command using docker-compose up --build. This will build new images and then start all of the containers.
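For example, after a source change, either of the following will rebuild the images (the second also starts the containers in the background):

docker-compose build
docker-compose up --build -d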

I also want to draw attention to the --no-cache option. This disables caching of the image layers, forcing each layer to be recreated. I’ve found this useful when testing a docker-compose file change to ensure all steps are working as expected.
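For example:

docker-compose build --no-cache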

Once the containers are running, you might wonder how we can stop them. You can use docker-compose stop to stop all of the containers defined in the docker-compose file. To clean up, you can use docker-compose rm to remove the stopped containers. There is also docker-compose down if you would like to stop and then remove the containers with a single command.
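Putting those together:

docker-compose stop   # stop the containers
docker-compose rm     # remove the stopped containers
docker-compose down   # stop and remove in a single step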

Summary

In this blog post I have described a real-world scenario that we encountered, which I hope demonstrates a benefit of using Docker during development. By including Docker in our developer flow we have removed the need for front end developers to use a Windows VM and to be responsible for manually building our .NET code in an IDE such as Visual Studio. Builds are quicker and there are very few dependencies required to get a new front end developer onboarded to our project. We have no complex “set up your developer machine” documents containing specific application versions, registry settings and IIS configuration.

In particular, we focused on how this is useful for microservice based architectures, where a single working system is often built from many smaller services. We used docker-compose to define our set of services, and with a single command we’re able to start the components on a developer machine. Using this approach we can define and co-ordinate the elements making up the system so that front end developers can run the environment on their devices with ease.

Finally we explored the structure of a simple docker-compose file, looking at how we can define the services, set container environment variables and expose ports.

Other Posts In This Series

Part 1 – Docker for .NET Developers Introduction
Part 2 – Working with Docker files
Part 3 – Why we started using Docker with ASP.NET Core
Part 4 – This post
Part 5 – Exploring ASP.NET Runtime Docker Images
Part 6 – Using Docker for Build and Continuous Deployment
Part 7 – Setting up Amazon EC2 Container Registry



Steve Gordon

Steve Gordon is a Pluralsight author, 7x Microsoft MVP, and a .NET engineer at Elastic where he maintains the .NET APM agent and related libraries. Steve is passionate about community and all things .NET related, having worked with ASP.NET for over 21 years. Steve enjoys sharing his knowledge through his blog, in videos and by presenting talks at user groups and conferences. Steve is excited to participate in the active .NET community and founded .NET South East, a .NET Meetup group based in Brighton. He enjoys contributing to and maintaining OSS projects. You can find Steve on most social media platforms as @stevejgordon

5 thoughts to “Docker for .NET Developers (Part 4): Working with docker-compose and multiple ASP.NET Core microservices”

  1. Dear Steve,

    Thanks for the great articles about Docker. You describe your approach and motivation in an excellent way, which makes it easy for Docker newbies like me.
    One question about your Docker file organization: as far as I understood, you keep the Docker files next to each microservice in the same code repo (e. g. Git). But where does your Docker compose file live? Is it located in a separate code repo (which seems a bit over the top for me)?

    Best regards

    1. Hi,

      Thanks for your comment. I’m very pleased that these posts are hitting the mark and people are finding them useful.

Good question on the docker-compose file. In our case we include it in a small tooling repo we have. We keep a number of useful scripts and some documentation in this repo, which made it a reasonable place for the docker-compose file. We could just as easily have had this file checked into the front end website repo so that front end devs would have it available to them.

      I hope that helps?
      Steve

    1. Hi Tyler,

      That’s correct and you can run NGINX in a container too if you want! I also believe that an AWS ALB is fine as a proxy in front of kestrel based containers.

      Steve
