Debugging ASP.NET Core 2.0 Source Code Using ASP.NET Core symbols with Source Link support

In the early days of ASP.NET Core 1.0, before Visual Studio 2017, back when we had the project.json project format, we were able to take advantage of a nice feature to enable source debugging of referenced libraries and code. We could add the paths of cloned source repositories into our global.json file and these would then be preferred over Nuget when available.

I used this quite extensively when writing some of my posts for my MVC Core anatomy series (something I hope to continue with at some stage). I blogged about how we could set this up in this previous post – Debugging into ASP.NET Core Source. Unfortunately with the switch back to MSBuild and csproj, we lost the ability to easily debug through the ASP.NET Core source files.

Yesterday during the ASP.NET Community Standup Jon Galloway highlighted a tweet by David Fowler regarding new ASP.NET Core source linking support. Damian Edwards went on to provide some more detail about this new feature and afterwards I decided to take a quick look at it myself. I expect this post to serve as an early introduction to source linking and I will hopefully blog on more detailed elements once I’ve had time to explore them more fully. For this post we’ll focus on how we can get started with debugging into the ASP.NET Core source using source linking.

What is Source Linking?

Let me caveat this explanation with the fact that symbol files are not something I’ve previously messed around with besides knowing that PDB files existed. I’m coming at this blog relatively fresh. I’ll be explaining things as I’ve understood them so far and will happily make corrections and updates if necessary.

Like me, you may have noticed PDB files being created under some circumstances when compiling your code. These files hold the symbol information which can optionally be used to support debugging into external source. Some types of symbol files may contain some of that source code or mappings to the source code.

For a long time Microsoft have hosted symbol servers which hold published symbol files for Microsoft products such as the .NET Framework and ASP.NET Core. Visual Studio supports downloading symbols dynamically. To do this you must disable the “Enable Just My Code” option in the Debugging > General options. By default this option is enabled in Visual Studio.

Enable Just My Code Option

For more information on Symbols, Symbol Servers etc see this MSDN link.

Source linking allows you to embed a manifest inside the symbol file. For a given method being called, it can identify which file contained the code and where that file can be retrieved from. The ASP.NET Core libraries (though not .NET Core itself currently) now support Source Link and provide links to the code, which is hosted on GitHub.

Enabling and Using Source Linking

The first requirement is that you are running Visual Studio 2017 with the latest update (15.3), which added Source Link support. With this installed, if you check the Debugging > General options you will see Source Link enabled.

Enable Source Link support

As well as ensuring “Enable Just My Code” is not checked, you must also enable the Microsoft symbol servers. In the Debugging > Symbols options you can check “Microsoft Symbol Servers” in the list of symbol file locations.

Enable Microsoft Symbol Servers

When enabling the symbol servers you will need to accept the possible performance impact that they may introduce when debugging.

Symbol Server Performance

We are now set up and ready to debug into the ASP.NET Core source. To test this I created a default ASP.NET Core 2.0 MVC project inside Visual Studio, added a breakpoint to the Index action on the HomeController and started debugging the application. The first time you debug you may see messages like this in the status bar.

Loading Symbols

These are the symbol files being downloaded, which may take a short while to complete.

Once your application is running and the breakpoint in your code is hit, you can navigate down the call stack to see all of the external ASP.NET code that is being executed.

Call Stack Source Link

If you double-click any of these calls, the editor will use the symbols to determine where the code for that frame is located. Using the link inside the symbols file, Visual Studio will download the source file from GitHub. When Source Link needs to download source you will see a warning dialog like this:

Source Link Dialog

You can choose the first option to download this specific source file and continue debugging using that file. If you choose the first option you will see this dialog for each new source file that is required. You can select the second option instead, which will download the file and disable the warning for future files.

Now that we have the source it will be displayed at the appropriate location from the frame you selected.

Source Link External Code

Now that we have the source file, you can also add your own breakpoint somewhere else within that file which will then be set to be hit when debugging your application. Even if we stop debugging and start again, this still seems to be hit successfully.

External Source Breakpoint

I was also curious about how we could set breakpoints in other parts of the code, without having to rely on accessing the source via the call stack. Damian mentioned a feature I’d never used where we can set a breakpoint manually from the breakpoint window. I have attempted to set up a new function breakpoint for one of the ASP.NET Core methods, but so far I haven’t been able to get it working as Visual Studio states that the source is not available. I will pursue this and hopefully include details in a future post.

Summary

It’s great to see the beginning of easier debugging of external source coming to ASP.NET Core. Already there is value that I can gain from this feature, allowing me to debug into the ASP.NET Core source to understand its internal workings. It’s not as sweet as what we were able to do back in the project.json days, but it is certainly a step forward from where we have been since moving back to csproj and MSBuild.

I miss the simplicity of cloning the full source for a repository and being able to navigate through it and add breakpoints to the external code. The source linking mechanism is good for more specific debugging cases where you want to dive into the external call stack. At the moment, once you have the source cs file via Source Link, you can’t navigate to other methods so exploring the code and setting further breakpoints is not possible outside of that file. If I can get the manual breakpoints working then that will be slightly better as at least I can view the source and determine methods I might want to break on, then set those up manually.

Chatting to David Fowler on Twitter I also understand that more features are being planned so I’ll be watching for those with interest. In addition, work is underway to get this supported by other non-Microsoft OSS projects such as Xunit and maybe in the future, JSON.NET. This repository (which I’ve not dug through yet) provides some build tools which can help creating source link symbols. I will also be looking at this more in the future.

Further References

Portable PDB files

Source Link

Upgrading to ASP.NET Core 2.0 My experience of upgrading a real-world solution from ASP.NET Core 1.0 to 2.0

On the 14th of August, Microsoft announced the release of .NET Core 2.0, ASP.NET Core 2.0 and EF Core 2.0, the next major releases of their open source, cross platform frameworks and libraries. This is a very exciting release and one which I hope marks the stabilisation of the framework and enables more developers and businesses to begin really looking at using .NET Core 2.0.

One of the big changes with .NET Core 2.0 is support for the new .NET Standard 2.0 specification (also part of the release announcements) which defines the API surface that platforms should conform to. This brings back around 20,000 APIs that were not originally included in .NET Core 1.x. This should mean that porting existing full .NET Framework applications over to Core may now be a more realistic prospect with much greater parity between the frameworks.

As I have discussed a few times on this blog, I contribute to a fantastic project called allReady, managed by the charity, Humanitarian Toolbox. This project started originally in the early beta days of .NET Core and ASP.NET Core and has evolved along with the framework through the various changes and refinements. With the release of 2.0 we were keen to upgrade the application to use .NET Core 2.0 and ASP.NET Core 2.0. I took it upon myself to attempt to upgrade allReady and to document the experience as I went. Hopefully I’ve found the right balance of detail to readability for this one which has been a bit of an epic!

Installing .NET Core 2.0

The first step you will need to complete is to install the new 2.0 SDK and if you use Visual Studio as your IDE of choice, you will also need to install the latest version of Visual Studio 15.3.x in order to work with .NET Core 2.0. These steps are well documented and pretty easy.

Upgrading the MVC Project

Upon loading the allReady web solution in Visual Studio 15.3 (aka 2017 update 3), my first focus was on upgrading the web project and getting it to run. I therefore unloaded the test project so that I wasn’t distracted by errors from that.

Many of the main steps that I followed as I upgraded the solution can be found outlined in the Microsoft .NET Core 1.0 to 2.0 migration guide.

Upgrading the Project File and Dependencies

The first job was to upgrade the project to target .NET Core 2.0 and to upgrade its dependencies to request the ASP.NET Core 2.0 packages. To do this I right-clicked my project and chose to edit the csproj file directly. With .NET Core projects we can now do this without having to unload the project first. .NET Core projects have a TargetFramework node which in our case was set to netcoreapp1.0. To upgrade to target the latest Target Framework Moniker (TFM) for Core 2.0 I simply changed this to netcoreapp2.0.

Our project file also included a RuntimeFrameworkVersion property set to 1.0.4, which I removed to ensure that the project would use the latest available runtime. The migration guide also specifies that the PackageTargetFallback node and variable should be renamed to AssetTargetFallback, so I made that change.

The next big change was to begin using a new ASP.NET Core metapackage to define our dependencies. One of the drawbacks that people have experienced with depending on the many individual NuGet packages which make up the ASP.NET Core platform is that managing the package versions can be a bit painful. Each package can have slightly different minor version numbers as they are versioned separately. During a patch release of ASP.NET Core for example, it can be hard to know which exact versions represent the latest of each of the packages as they don’t necessarily all update together.

The ASP.NET team are hoping to solve this with the availability of a new Microsoft.AspNetCore.All metapackage. This package contains dependencies to all of the common Microsoft.AspNetCore, Microsoft.EntityFrameworkCore and Microsoft.Extensions packages. You can now reference just this package to enable you to work with all of the ASP.NET Core and EF Core components.

One of the changes that enables this is the inclusion of a .NET Core runtime store which contains all of the required runtime packages. Since the packages are part of the runtime, your app won’t need to download many tens of dependencies from NuGet. The runtime store assets are also precompiled, which helps with performance.

To make use of the new metapackage I first removed all existing ASP.NET related dependencies from my explicit project package references. I could then add in the following reference: <PackageReference Include="Microsoft.AspNetCore.All" Version="2.0.0" />.

The final change in the project file was to update the versions for the .NET Core CLI tools specified in the DotNetCliToolReference nodes of our project file. In each case I moved them to the 2.0.0 version. With this completed I was able to save and close the project file, which triggers a package restore.

Our project file went from this:

to this:

The next thing I needed to do was to remove a global.json file that we had in our solution which was forcing the use of a specific SDK version; in our case 1.0.1. We want our project to use the latest SDK so I removed this file entirely. At this point I was in a position to attempt to compile the web project. As expected the build failed, listing a number of errors that I needed to work through fixing.

Identity / Authentication Changes

With ASP.NET Core 2.0, some of the biggest breaking changes occur in the Identity namespace. Microsoft have adjusted quite a few things regarding the Identity models and authentication. These changes did require some fixes and restructuring of our code to comply with the new model. Microsoft put together a specific migration document which is worth reviewing if you need to migrate Identity code.

The first change was to temporarily comment out some code we have as an extension to the IApplicationBuilder. I would use this code to ensure I had fully replicated the required setup before removing it. We used this code to conditionally “use” the various 3rd party login providers within our project; for example – UseFacebookAuthentication. One of the changes made with Identity in ASP.NET Core 2.0 is that third party login providers are now configured when registering the Authentication services and are no longer added as individual middleware components.

To account for this change I updated our ConfigureServices method to use the new AddAuthentication extension method on the IServiceCollection. This also includes extension methods on the returned AuthenticationBuilder which we can use to add and configure the additional authentication providers. We conditionally register our providers only if the application configuration includes the required App / Client Id for each provider. We do this with multiple, optional calls to the AddAuthentication method. I’ve checked and this is a safe approach to meet this requirement. At this point I could replicate the 3rd party authentication configuration that we had previously setup using the UseXYZAuthentication IApplicationBuilder extensions.
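As an illustration, that registration looks something like this (the configuration keys and the Facebook provider are examples; the same pattern is repeated for each provider we support):

var authBuilder = services.AddAuthentication();

if (!string.IsNullOrEmpty(Configuration["Authentication:Facebook:AppId"]))
{
    authBuilder.AddFacebook(options =>
    {
        options.AppId = Configuration["Authentication:Facebook:AppId"];
        options.AppSecret = Configuration["Authentication:Facebook:AppSecret"];
    });
}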

With this complete, our Configure method could be updated to include the call to UseAuthentication which adds the authentication middleware. The commented code could now be removed.

IdentityCookieOptions

Our account controller (based on the original ASP.NET Core MVC template) had a dependency on IOptions<IdentityCookieOptions> to get the ExternalCookieAuthenticationScheme name. This is now redundant in 2.0 as these are now available via constants and we can use that constant directly in our login action as per the authentication migration guide.
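In other words, where the controller previously read the scheme name from the injected options, it can now use the constant directly, for example when clearing the external cookie:

// 1.x: _externalCookieScheme = identityCookieOptions.Value.ExternalCookieAuthenticationScheme;
// 2.0: the scheme name is a constant
await HttpContext.SignOutAsync(IdentityConstants.ExternalScheme);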

In 1.0 we set our AccessDeniedPath for the cookie options as one of the options on the AddIdentity extension for the IServiceCollection, where we previously set it as follows:

There is now a specific extension to configure the application cookie where we set this value so I added that code to ConfigureServices.
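The replacement in ConfigureServices looks roughly like this (the path itself is just an example):

services.ConfigureApplicationCookie(options =>
{
    options.AccessDeniedPath = "/Home/AccessDenied";
});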

The next change is that IdentityUser and IdentityRole have been moved from the Microsoft.AspNetCore.Identity.EntityFrameworkCore namespace to Microsoft.AspNetCore.Identity; so our using statements needed to be updated to reflect this change in any classes referencing either of these.

Next on my build error hit list was an error caused by Microsoft.AspNetCore.Authentication.FailureContext no longer being found. This has been renamed to RemoteFailureContext in ASP.NET Core 2.0 so I updated the affected code.

Another change as part of Identity 2.0 is that the Claims, Roles and Login navigation properties which we made use of have been removed from the base IdentityUser class. As a result I needed to add these back into our derived ApplicationUser class directly and update the OnModelCreating method inside our DbContext to define the correct foreign key relationships. This was as described in the migration guide for Authentication and Identity.
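A trimmed-down sketch of those changes, following the migration guide, looks like this (our real classes contain more than is shown here):

public class ApplicationUser : IdentityUser
{
    // These navigation properties were removed from the base IdentityUser in 2.0
    public virtual ICollection<IdentityUserClaim<string>> Claims { get; set; }
    public virtual ICollection<IdentityUserLogin<string>> Logins { get; set; }
    public virtual ICollection<IdentityUserRole<string>> Roles { get; set; }
}

// Inside our DbContext
protected override void OnModelCreating(ModelBuilder builder)
{
    base.OnModelCreating(builder);

    builder.Entity<ApplicationUser>().HasMany(u => u.Claims).WithOne().HasForeignKey(uc => uc.UserId).IsRequired();
    builder.Entity<ApplicationUser>().HasMany(u => u.Logins).WithOne().HasForeignKey(ul => ul.UserId).IsRequired();
    builder.Entity<ApplicationUser>().HasMany(u => u.Roles).WithOne().HasForeignKey(ur => ur.UserId).IsRequired();
}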

A small change I had to take care of is that GetExternalAuthenticationSchemes has been made async (and renamed accordingly), so I updated our code to call and await the GetExternalAuthenticationSchemesAsync method. The return type has also changed, so I also needed to update one of our view models to take the resulting list of AuthenticationSchemes rather than AuthenticationDescriptions.
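The affected code ended up looking something like this (the view model property name is illustrative):

var schemes = await _signInManager.GetExternalAuthenticationSchemesAsync();
// The view model now holds AuthenticationScheme items rather than AuthenticationDescriptions
model.ExternalLogins = schemes.ToList();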

The final authentication change was the result of a new set of extension methods being added to HttpContext in Microsoft.AspNetCore.Authentication. These are intended to be used for calling SignOutAsync and similar methods which were previously available via the IAuthenticationManager.

In places where we called these I changed from

await httpContext.Authentication.ChallengeAsync();

to

await httpContext.ChallengeAsync();

Other Changes / Build Errors

With the authentication and Identity related changes completed I still had a few build errors to take care of before the application would compile.

In 1.1.0 Microsoft added an additional result type of AcceptedResult (the issue is available here) and a helper method on ControllerBase to easily return this result. Since we had been targeting 1.0.x we had not faced this change before. Our SmsResponseController was exposing a constant string called “Accepted” which then hid the new inherited member on ControllerBase. I renamed our member to avoid this naming conflict.

We also found that Microsoft.Net.Http.Headers.ContentDispositionHeaderValue.FileName had changed from being defined as a string to a StringSegment instead. This meant we had to update code which was calling Trim on it to first call ToString on the StringSegment value.
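For example (the exact Trim arguments here are illustrative):

// FileName is now a StringSegment, so convert it back to a string before trimming
var fileName = contentDisposition.FileName.ToString().Trim('"');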

In one place we were using the previously available TaskCache.CompletedTask to get a cached instance of a completed Task. However, since Task.CompletedTask is now available due to targeting .NET Standard 2.0, this has been removed, so our code could switch to using Task.CompletedTask instead.

Other Migration Changes

There are some other structural changes we can and should make to an existing ASP.NET Core 1.x project to take advantage of the ASP.NET Core 2.0 conventions. The first of these was to update program.cs to use the newer CreateDefaultBuilder functionality. This method is designed to simplify the setup of an ASP.NET Core WebHost by defining some common defaults which we previously had to setup manually in the Startup class. It adds in Kestrel and IISIntegration for example. The IWebHost in 2.0 now also sets up configuration and logging, registering them with DI earlier in the application lifecycle. The defaults work for basic applications but depending on your requirements you may need to use the ConfigureLogging and ConfigureAppConfiguration methods to apply additional setup of these components.

Our Program.cs changed from:

to
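The new version follows the standard 2.0 template shape; a sketch (our actual file may differ slightly):

public class Program
{
    public static void Main(string[] args)
    {
        BuildWebHost(args).Run();
    }

    public static IWebHost BuildWebHost(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .UseStartup<Startup>()
            .Build();
}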

Now that Configuration and Logging are set up on the IWebHost, we no longer need to define the setup for those components in the Startup.cs file, so I was able to strip out some code from Startup.cs. In 1.x we used the constructor of Startup to set up Configuration with a ConfigurationBuilder. This could be taken out entirely. Instead we can ask for an IConfiguration object in the constructor parameters, which will be satisfied by DI as it is now registered by default.
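The top of Startup.cs therefore reduces to something along these lines:

public class Startup
{
    public Startup(IConfiguration configuration)
    {
        // Configuration is already built and registered for us by CreateDefaultBuilder
        Configuration = configuration;
    }

    public IConfiguration Configuration { get; }
}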

I was also able to remove the logging setup which used an ILoggerFactory in the Configure method in 1.x. This is now also set up earlier by the IWebHost, which feels like a better place for it. It also means we get more logging during the application bootstrapping. One change I made as a result of relying on the defaults for the logging setup was to rename our config.json file to appsettings.json. appsettings.json is included by default by the new CreateDefaultBuilder so it’s better that our config file matches this convention.

Finally, Application Insights is now injected into our application by Visual Studio and Azure using a hook that lets them place code into the header and body tags, so we no longer need to manually wire up the Application Insights functionality. This meant I could strip out the registration of the service and also remove some code in our Razor layout which was adding the JavaScript for Application Insights.

From our ConfigureServices method I removed:

services.AddApplicationInsightsTelemetry(Configuration);

From our _ViewImports.cshtml file I removed

@inject Microsoft.ApplicationInsights.AspNetCore.JavaScriptSnippet JavaScriptSnippet

From the head section of our _Layout.cshtml file I removed

@Html.Raw(JavaScriptSnippet.FullScript)

Partial Success!

At this point the code was able to compile but I hit some runtime errors when calling context.Database.Migrate in our Configure method:

“Both relationships between ‘CampaignContact.Contact’ and ‘Contact’ and between ‘CampaignContact’ and ‘Contact.CampaignContacts’ could use {‘ContactId’} as the foreign key. To resolve this configure the foreign key properties explicitly on at least one of the relationships.”

And

“Both relationships between ‘OrganizationContact.Contact’ and ‘Contact’ and between ‘OrganizationContact’ and ‘Contact.OrganizationContacts’ could use {‘ContactId’} as the foreign key. To resolve this configure the foreign key properties explicitly on at least one of the relationships.”

To solve these issues I updated our DbContext fluent configuration in OnModelCreating to explicitly define the relationships and foreign keys.

From:

To:
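As an illustration, using the entity and property names from the first error message, the explicit configuration looks roughly like this (the OrganizationContact relationship gets the same treatment):

modelBuilder.Entity<CampaignContact>()
    .HasOne(cc => cc.Contact)
    .WithMany(c => c.CampaignContacts)
    .HasForeignKey(cc => cc.ContactId);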

This got me a step further but I then hit the following error:

System.Data.SqlClient.SqlException: ‘The name “Unknown” is not permitted in this context. Valid expressions are constants, constant expressions, and (in some contexts) variables. Column names are not permitted.’

I tracked this down to a migration which sets a default value on an integer column using an enum. I found that I needed to explicitly cast the enum to int to make this migration work as expected.
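The fix in the migration itself was small; something along these lines (the column, table and enum names here are illustrative):

migrationBuilder.AddColumn<int>(
    name: "Status",
    table: "Requests",
    nullable: false,
    defaultValue: (int)RequestStatus.Unassigned); // previously the enum value was passed without the cast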

Another step forward; but I still ran into issues. The next error I received was System.ObjectDisposedException: ‘Cannot access a disposed object.’ from Startup.cs when calling await SampleData.CreateAdminUser();

This was caused by a naughty use of async void for the Configure method. I removed the async keyword and used GetAwaiter().GetResult() instead since async void is not a good idea!
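The fix was to make Configure synchronous again and block on the seeding call; simplified, it went from and to something like this (the rest of the method is omitted):

public void Configure(IApplicationBuilder app /*, other dependencies */)
{
    // was: async void Configure(...) { ... await SampleData.CreateAdminUser(); }
    SampleData.CreateAdminUser().GetAwaiter().GetResult();
}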

By this point I was really hoping I was getting somewhere. However next I had some odd issues with our TagHelpers. We have two tag helpers used to aid some datetime functionality. The errors I was seeing seemed to be due to the TagHelpers getting invoked for the head and body elements of the page. I’ve yet to spend enough time to track down what causes this so have applied workarounds for now.

On our TimeZoneNameTagHelper we were getting a null reference error when it tried to apply to the head tag. We expect a TimeZoneId to be supplied via an attribute, which was not present on the head tag, and so this resulted in a null TimeZoneId when we tried to use it to look up the time zone with FindSystemTimeZoneById. The temporary fix in this case was to check the TimeZoneId for null and simply return if so.

With our TimeTagHelper I had to do an explicit check within the Process method to ensure the TagName matched “time”. This avoided it being applied for the head and body tags. I have created follow-up issues to try to understand this behaviour.
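The workaround is simply an early return from Process when the tag isn't the one we expect; roughly:

public override void Process(TagHelperContext context, TagHelperOutput output)
{
    // Workaround: the tag helper was also being invoked for the head and body elements
    if (!string.Equals(output.TagName, "time", StringComparison.OrdinalIgnoreCase))
    {
        return;
    }

    // ...normal processing of the time element continues here
}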

With these changes in place, the code was finally compiling and running. Yay!

Upgrading the Test Project

With the main web application working I was ready to focus on upgrading the test project and making it compile (and seeing if the tests would pass!). The first step here was updating the project file to target netcoreapp2.0 as I had done with the web project. I also updated some of the dependencies to the latest stable versions. This was partially required in order to restore packages and it also made sense to do it at this point since I already had a lot of changes to include. Some of our dependencies were still old pre RTM packages. I also took the chance to clean out some unnecessary nodes in the project file.

With the packages restoring, attempting a build at this stage left me with 134 build errors! Some as a result of changes to Identity, some due to upgrading the dependencies and some due to code fixes made to the main project as a result of the migration.

The first broken tests I focused on were any that had broken due to the Identity changes. These were relatively quick to update, such as fixing the changed namespaces from:

to

I then had a number of tests which were broken due to a change in Moq, the library we use for mocking objects in our tests. When setting up methods of mocked objects we could previously return a null quite simply by passing null as the parameter to ReturnsAsync. However, there is now another overload which also accepts a single parameter, and the compiler is not sure which one we intend to use. This now requires that we explicitly cast the null to the correct type to indicate that we are passing the expected value and not a delegate which returns the value. This resulted in me having to update 46 tests.
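As an example (the method being set up is illustrative), a setup which previously compiled fine now needs the cast:

// Before: ambiguous between the value overload and the delegate overload of ReturnsAsync
mockUserManager.Setup(m => m.FindByIdAsync(It.IsAny<string>())).ReturnsAsync(null);

// After: the cast tells the compiler we mean a null result, not a delegate
mockUserManager.Setup(m => m.FindByIdAsync(It.IsAny<string>())).ReturnsAsync((ApplicationUser)null);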

The remainder of the build failures were mostly caused by a change to the number of parameters of the AccountController constructor, so our tests which create one as the subject under test also needed to be updated to pass the correct number of parameters.

At this point I had compiling test code and I was then able to run my tests! Oh, 101 failed tests!

When I looked a little deeper I noticed these were nearly all tests which used our InMemoryContextTest abstract base class, which includes a registered instance of an InMemory DbContext on an IServiceProvider. With a bit of trial and error I realised that my queries were not returning any results, where previously they had in 1.x. When I experimented I found that it was in cases where our query called Include to eager load some of the related entities. However, our seed data which populated the InMemory database for each test had not set those related entities. The InMemory provider does not enforce referential integrity and so no errors are thrown when saving objects with missing required navigational properties.

In 1.x the query behaviour worked under this scenario but in 2.0 something had changed. I raised an issue about this one and the EF team responded quickly with… “The reason for the behaviour change is that now include is using navigation rewrite logic to construct the queries (whereas before we manually crafted include statements). Navigation rewrite produces INNER JOIN pattern for required relationships and LEFT JOIN pattern for optional. Before we would always hand-craft LEFT JOIN pattern, regardless of the relationship requiredness between child and parent.”

To correct for this I needed to ensure our test setups added the required related entities so that they would be returned from the queries as expected. In our actual code, running against the SQL provider, this is not an issue since the saves enforce referential integrity.

With the tests fixed up I was finally at a point where everything compiled, ran and the tests were passing. I considered this a good place and was able to submit my PR to get the allReady project to 2.0 which was promptly merged in.

Summary

For the most part the migration documentation provided by Microsoft was very good and covered many of the things I actually experienced. In a few cases I found little extra things I needed to solve. Most of the issues were around the tests, and the EF changes probably took longest to isolate and then fix up. It’s great to have been able to help move the project forward and get it to 2.0 very soon after release. It’s a great reference project for developers wanting to view (and hopefully work on) a real-world ASP.NET Core 2.0 solution. Hopefully my experience will help others during their migrations.

Implementing IHostedService in ASP.NET Core 2.0 Use IHostedService to run background tasks in ASP.NET Core apps

Update 30-08-2017: ASP.NET Core 2.0.0 is now released. I have updated my sample repo to 2.0.0.

I’ve had a chance to play around with ASP.NET Core 2.0 preview 2 a little in the last few weeks. One of the things I was keen to try out and to understand a little better was the new IHostedService interface provided by Microsoft.Extensions.Hosting. David Fowler and Damian Edwards demonstrated an early example of how to implement this interface using preview 1 of ASP.NET Core 2.0 at NDC Oslo. At the time of that demo the methods were synchronous, but since then they have been made asynchronous.

Full disclosure: After taking a first pass at creating something using this interface I ran the code past David Fowler and he kindly reviewed it. As I suspected, I was not using it correctly! Since then David was kind enough to answer a few questions and even provided a sample of a base class that simplifies creation of hosted services. After my failure to understand the expected implementation of the interface and a general realisation that I needed to learn more about using Task cancellations with async/await, I almost decided to ditch my plan to write this blog post. However, I realised that this is still probably a good sample to share since others may run into the same mistakes I did. After speaking with David I believe this is appropriate use of the interface.

One of the challenges when starting out was trying to use something in preview that had no samples or documentation yet. While I hope no one will use this prior to RTM of 2.0 when I expect full documentation will be made available, I’m sure people may benefit from taking a look at it sooner. David did say that they have intentions to provide a formal base class, much like the code that he provided for me which will make creating these background hosted services easier. However, that won’t make it into the 2.0 release. The code I include in this sample might act as a good starting point until then, although it’s not fully tested.

Remember; there are no docs for this interface currently, so I’m taking a best guess at how it can be used and how it’s working based on what I’ve explored and been able to learn from David. This feature is preview and may also change before release (although very unlikely as 2.0 is nearly baked now). If you’re reading this in the future (and unless you’re a time traveller, you must be) please keep in mind this may be outdated.

Hosted Services

The first question to answer is what can we use this interface for? The basic idea is that it allows us to register background tasks that run while our web host is running. These are co-ordinated with the lifetime of the application. We register a Task when the application starts and have the opportunity to do some graceful clean-up when the application is shutting down. While we could previously spin off work on a background thread, it would be killed when the main application process shut down.

To create these background tasks we implement the new IHostedService interface.

The interface looks like this:
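At the time of writing it lives in Microsoft.Extensions.Hosting and has just two members:

public interface IHostedService
{
    // Triggered when the application host is ready to start the service.
    Task StartAsync(CancellationToken cancellationToken);

    // Triggered when the application host is performing a graceful shutdown.
    Task StopAsync(CancellationToken cancellationToken);
}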

The idea is we register one or more implementations of this interface with the DI container, all of which will then be started and stopped along with the application by the HostedServiceExecutor. As users of this interface we are responsible for properly handling the cancellation and shutdown of our services when StopAsync is triggered by the host.

Creating a Hosted Service

One of the first possible use cases for these background tasks that I came up with was a scenario where we might want to update some content or data in our application from an external source, refreshing it periodically. Rather than doing that update in the main request thread, we can offload it to a background task.

In this simple example I provide an API endpoint which returns a random string value provided from an external service and updated every 5 seconds.

The first part of the code is a provider class which will hold the string value and which includes an update method that when called, will get a new string from the external service.

I then use this provider from my controller to return the string.

The main work for the setup of our service lives in an abstract base class called HostedService. This is the base class that David Fowler kindly put together. The code looks like this:
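I won't reproduce David's code verbatim here, but a sketch along the same lines looks like this (the real class may differ in detail):

using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;

public abstract class HostedService : IHostedService
{
    private Task _executingTask;
    private CancellationTokenSource _cts;

    public Task StartAsync(CancellationToken cancellationToken)
    {
        // Create a token source linked to the token passed in by the host
        _cts = CancellationTokenSource.CreateLinkedTokenSource(cancellationToken);

        // Kick off the background work and store the task we're executing
        _executingTask = ExecuteAsync(_cts.Token);

        // If the task has already completed then return it, otherwise signal that start-up is done
        return _executingTask.IsCompleted ? _executingTask : Task.CompletedTask;
    }

    public async Task StopAsync(CancellationToken cancellationToken)
    {
        // Stop was called without StartAsync having run
        if (_executingTask == null)
        {
            return;
        }

        // Signal cancellation to the executing method
        _cts.Cancel();

        // Wait for the executing task to complete, or for the host's shutdown token to trigger
        await Task.WhenAny(_executingTask, Task.Delay(-1, cancellationToken));
    }

    // Derived classes implement their work here and should honour the token
    protected abstract Task ExecuteAsync(CancellationToken cancellationToken);
}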

The comments in the class describe the flow. When using this base class we simply need to implement the ExecuteAsync method.

When the StartAsync method is called by the HostedServiceExecutor a CancellationTokenSource is created and linked to the token which was passed into the StartAsync method. This CancellationTokenSource is stored in a private field.

ExecuteAsync, an abstract method, is then called, passing in the token from our CancellationTokenSource, and the returned Task itself is stored. StartAsync must then return to the caller so that application start-up can continue. A check is made in case our ExecuteAsync method has already completed; if it has, that task is returned, otherwise we return Task.CompletedTask.

At this point we have a background task running whatever code we placed inside our implementation of ExecuteAsync. Our WebHost will go about its business of serving requests.

The other method defined by IHostedService is StopAsync. This method is called when the WebHost is shutting down. This is the key differentiator from running work on a traditional background thread. Since we have a proper hook into the shutdown of the host we can handle the proper shutdown of our workload on the background thread.

In the base class, first a check is made to ensure that StartAsync was previously called and that we actually have an executing task. Then we signal cancellation on our CancellationTokenSource. We await the completion of one of two things. The preferred option is that our Task, which should be adhering to the cancellation token we passed to it, completes. The fallback is that the Task.Delay(-1, cancellationToken) task completes. This takes in the cancellation token passed by the HostedServiceExecutor, which in turn is provided by the StopAsync method of the WebHost. By default this will be a token set with a 5 second timeout, although this timeout value can be configured when building our WebHost using the UseShutdownTimeout extension on the IWebHostBuilder. This means that our service is expected to cancel within 5 seconds, otherwise it will be more abruptly killed.

To use this base class I then created a class inheriting from it called DataRefreshService. It is within this class that I implement the ExecuteAsync abstract method from HostedService. The ExecuteAsync method accepts a cancellation token and looks like this:

I call the UpdateString method on the RandomStringProvider and then wait for 5 seconds before repeating. This all happens inside a while loop which continues indefinitely, until cancellation has been requested on the cancellation token. We pass the cancellation token down into the other async methods as well, so that they too can cancel their tasks all the way down the chain.
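In code, that loop is roughly this (the provider field name is an assumption):

protected override async Task ExecuteAsync(CancellationToken cancellationToken)
{
    while (!cancellationToken.IsCancellationRequested)
    {
        await _randomStringProvider.UpdateString(cancellationToken);
        await Task.Delay(TimeSpan.FromSeconds(5), cancellationToken);
    }
}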

The final part of this is wiring up the dependency injection. Within the ConfigureServices method of Startup I must register the hosted service. I also register my RandomStringProvider class, since that is passed into things via DI.
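That wiring looks something like this:

public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();

    // Started and stopped by the host alongside the application
    services.AddSingleton<IHostedService, DataRefreshService>();

    // Shared between the hosted service (which updates it) and the controller (which reads it)
    services.AddSingleton<RandomStringProvider>();
}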

Summary

The IHostedService interface provides a nice way to properly start background work in a web application. It’s a feature we should not overuse, as I doubt it’s intended for spinning up large numbers of tasks, but for some scenarios it offers a nice solution. Its main benefit is the chance to perform proper cancellation and shutdown of our background tasks when the host itself is shutting down.

A special thanks to the amazing David Fowler for a quick code review (and correction) of my DataRefreshService. It was very kind of him to spare me some of his time to help me better understand this new feature. I hope I’ve explained everything correctly so that others can benefit from what he shared with me.

If you would like to view the source code from this post you can find it on my GitHub account.

Docker for .NET Developers Header

Docker for .NET Developers (Part 6) Using Docker for Build and Continuous Deployment

In part 3 I discussed one of the first motivations which led our team to begin using Docker. That motivation was focused on making the workflow for our front end developers quicker and simpler. In this post I want to explore a second motivation which led to us fully embrace Docker on our project. Just like part 3, this post doesn’t have any code samples; sorry! Instead I want to share the thought process and concepts from the next phase of our journey without going too technical. I believe this will give the next parts in this series a better context.

Why deploy with Docker?

Once we had Docker in place and we were reaping the benefits locally, we started to think about the options we might have to use Docker further along our development lifecycle, specifically for build and deployment.

Our current job board platform follows continuous delivery. Every time a developer checks in, the new code is picked up by the build system and a build is triggered. After a few minutes the code will have been built and all tests run. This helps to validate that the changes have not broken any existing functionality.

Deployments are then managed via Octopus deploy which will take the built code and deploy it onto the various environments we have. Code will be deployed onto staging and within that environment our developers have a chance to do some final checking that the new functionality is working as expected. Our testing team have the opportunity to run regression testing against the site to validate that no functionality has been broken. Once the testing is complete, the code is triggered for deployment onto our production environment. This is a manual, gated step which prevents code releasing without a developer or developers validating it first.

That flow looks like this:

Existing Continuous Delivery Flow

With our new project we agreed that ideally we wanted to get to a continuous deployment flow, where code is checked in, tested and deployed straight to live. That sounds risky, I know, and was something we weighed up carefully. A requirement of this approach is that we can fail fast and rapidly deploy a fix, or even switch back to a prior version should the situation require it (we can get a fix to live in about 5 minutes). By building smaller, discrete microservices we knew we would be reducing the complexity of each part of the system and could more easily test them. We are still working out some additional checks and controls that we expect to implement to further help prevent errors slipping out to live.

At the moment this involves many unit tests and some integration tests within the solutions using the TestHost and TestServer which are part of ASP.NET Core. I’m starting to think about how we could leverage Docker in our build pipeline to layer in additional integration testing across a larger part of the system. In principle we could spin up a part of the system automatically and then trigger endpoints to validate that we get the expected response. This goes a step further than the current testing as it tests a set of components working together, rather than in isolation.

One of the advantages that Docker provides is simplified and consistent deployments. With Docker, you create your images and these then become your unit of deployment. You can push your images to a container registry and then deploy an image to any Docker host you want. Because your image contains your application and all of its dependencies, you can be confident that once deployed, your application will behave in the same way as it did locally.

Also, by using Docker to run your applications and services, you no longer need to maintain dependencies on your production hosts. As long as the hosts are running Docker, there are no other dependencies to install. This also avoids conflicts arising between applications running on the same host system. In a traditional architecture, if a team wants to deploy another application on the same host, requiring a newer version of a shared dependency, you may not be able to upgrade the host without introducing risk to the existing application.

Using Docker for Builds

In prior posts we’ve seen that we can use the aspnetcore-build image from Microsoft to perform builds of our source code into a final DLL. This opens the door to standardise the build process as well. We now use this flow for our builds, with our Jenkins build server being used purely to trigger the builds inside Docker. This brings similar benefits as I described for the production hosts. The build server does not need to have the ASP.NET Core SDK installed and maintained. Instead, we just need Docker and then can use appropriate build images to start our builds on top of all of the required dependencies. Using this approach we can benefit from reliable repeatability. We don’t have to worry about an upgrade on the build server changing how a build behaves. We can build applications that are targeting different ASP.NET Core versions by basing them on a build image that contains the correct SDK version.

Some may raise a question over what the difference is between Docker and Octopus or Docker vs Jenkins. They all have overlapping concerns but Docker allows us to combine the build process and deployment process using a single technology. Jenkins in our system triggers builds inside Docker images and we then ship the built image up to a private container registry (we use Amazon ECR which I’ll look at soon).

Octopus is a deployment tool; it expects to take built components and then handles shipping them and any required configuration onto deployment targets. With Docker, we ship the complete application, including dependencies and configuration, inside the immutable Docker image. These images can be pulled and re-used on any host as required.

Why Jenkins?

In our case there was no particular driver to use Jenkins. We already had access to Jenkins running on a Linux VM within our internal network and saw no reason to try out a new build server. We asked our systems team to install Docker and we then had everything we needed to use this box to trigger builds. In future posts I’ll demonstrate our build scripts and process. I’m sure that most of the steps will translate to many other common build systems.

Hosting with AWS

A final decision that we had to make was around how we would host Docker in production. At the time our project began we were already completing a migration of all of our services into AWS. As a result, it was clear that our final solution would be AWS based. We had a look at the options and found that AWS offered a container service which is called Amazon ECS.

The options for orchestrating Docker are a little daunting, and at this stage I haven’t personally explored alternative solutions such as DC/OS or Kubernetes. Like Amazon ECS they are container orchestration services that schedule containers to run and maintain the required state of the system. They include things like container discovery to allow us to address and access the services we need. Amazon ECS is a managed service that abstracts away some of the complexities of setting these systems up and managing them. However, this abstraction comes with the cost of some flexibility.

With AWS ECS we can define tasks to represent the components of our system and then create services which maintain a desired count of containers running those tasks. Our production system is now running on ECS and the various components are able to scale in response to triggers such as queue length, CPU load and request volumes. In future posts I’ll dive into the details of how we’ve set up ECS. We have now created a zero downtime deployment process, taking advantage of the features of ECS to start new versions of containers and switch the load over when they are ready to handle requests.

Our current Docker based deployment flow looks like this:

Flow of a docker build

Developers commit into git locally and push to our internally hosted git server. Jenkins picks up on changes to the repository using the GitHub hook and triggers a build. We use a script to define the steps Jenkins will use; the resulting output is a Docker image. Jenkins pushes this image up to our private registry which is running in AWS on their EC2 Container Registry (ECR). Finally, Jenkins triggers an update on the Amazon container service to start new container instances. Once those instances have successfully started and are passing the Application Load Balancer health checks, connections to the prior version of the containers are drained and those containers are stopped and removed. We’ll explore the individual elements of this flow in greater depth in later blog posts.

Summary

In this post we have looked at a secondary motivation for using Docker on our latest project. We explored at a high level the deployment flow and looked at some of the practical advantages we can realise by using Docker images through the entire development and deployment lifecycle. We are still refining our approaches as we learn more, but we found it fairly simple to get up to speed using Jenkins as a build system via Docker. In the next set of posts I’ll dive into how we’ve set up that build process, looking at the scripts we use and the optimised images we generate to help improve start up time of containers and reduce the size of the images.

Part 1 – Docker for .NET Developers Introduction
Part 2 – Working with Docker files
Part 3 – Why we started using Docker with ASP.NET Core
Part 4 – Working with docker-compose and multiple ASP.NET Core Microservices
Part 5 – Exploring ASP.NET Runtime Docker Images
Part 6 – This post
Part 7 – Setting up Amazon EC2 Container Registry

Docker for .NET Developers Header

Docker for .NET Developers (Part 5) Exploring ASP.NET Runtime Docker Images

So far in previous posts we’ve been looking at basic demo dockerfiles which use the aspnetcore-build base image. This is okay for testing but does present some issues for actual deployment.

One disadvantage of the build image is its size. Since it contains all of the elements needed to build .NET Core applications it is fairly bloated and not something we would want to be using as a unit of deployment. It contains things like the full .NET Core SDK (which itself includes MSBuild), Node.js, Grunt, Gulp and a package cache for the pre-restored .NET packages. In all, this accounts for an image of around 1.2GB in size. You have to consider the network traffic that pushing around such large Docker images will introduce. If you use an external container registry (we’ll talk about those in a later post) such as Docker Hub, you will have to ship up the full size of the large SDK based image each time something changes.

Dissecting the aspnetcore-build Image

While it’s not really necessary to know the intricate details of the composition of the aspnetcore-build image, I thought it would be interesting to look a little at how it’s put together. As I’ve described previously, Docker images are layered. Each layer generally adds one thing or a set of related things into an image. Layers are immutable but you can base off of the previous layers and add in your layer on top. This is how you get your application into an image.

The ASP.NET Core build image is built up from a number of layers.

Layer 1

Starting from the bottom there is an official Docker image called scratch which is an empty base image.

Layer 2

The next layer is the Debian Linux OS. The .NET Core images are based on Debian 8 which is also known as Jessie. The image is named debian and also tagged with jessie. You can find the source files here.

Its dockerfile is pretty basic.

It starts with the scratch base image and then uses the ADD statement to bring in the tarball containing the Debian root file system. One important thing to highlight here is the use of ADD and not COPY. Previously in my samples we used COPY in our dockerfile to copy in contents from the source directory into a destination directory inside the image. ADD is similar but in this case it does one important thing: it will decompress known tar archives. Since the rootfs.tar.xz is a known tar type, its contents are uncompressed into the specified directory, extracting all of the core Debian file system. I downloaded this file and it’s 117MB in size.

The final line, CMD ["bash"], provides a default command that will run when the container first executes. In this case it runs the bash command. CMD is different from RUN in that it does not execute at build time, only at runtime.

Layer 3

The next layer is buildpack-deps:jessie-curl – Source files are here.

On top of the base image this RUNs three commands. You’ll notice each command is joined with an &&. Each RUN line in a dockerfile will result in a new intermediate image during build. To combat this in cases where we are doing related work, the commands can be strung together under a single RUN statement. This particular set of commands is a pretty common pattern and uses apt-get, a command line tool for working with application packages in Debian.

The structure it follows is listed in the Docker best practices as a way to ensure the latest packages are retrieved. apt-get update simply updates the package lists for new packages and available upgrades to existing packages. This technique is known as “cache busting”. It then installs 3 packages using apt-get install.

I had to Google a bit, but ca-certificates installs common certificate authorities based on those that ship with Mozilla. These allow SSL applications to verify the authenticity of SSL connections. It then installs curl, a command line tool for transferring data via the URL syntax. Finally, wget is a network utility used to retrieve files from the web using HTTP(S) and FTP.

The backslashes are another common convention in production dockerfiles. The backslash is a line continuation character that allows a single command to be split over multiple lines. It’s used to improve readability, and the pattern here puts each new package onto its own line so it’s easier to parse the individual packages that will end up being installed. The apt-get command allows multiple packages to be specified with a space between packages.

The final command removes anything in the /var/lib/apt/lists/ directory. This is where the updated package lists that were pulled down using apt-get update are stored. This is another good example of best practice: ensuring that no files remain in the image that are not needed at runtime helps keep the image size down.

Layer 4

The next layer is buildpack-deps:jessie-scm – Source files are also found here.

This layer uses a similar pattern to the layer before it to install some packages via apt-get. Most of these are packages for the common distributed version control applications, such as git. openssh-client installs a secure shell (SSH) client for secure access to remote machines, and the procps package provides process-monitoring utilities (such as ps) which work against the /proc file system.

Layer 5

The next layer is the microsoft/dotnet layer which will include the .NET Core SDK bits. The exact image will depend on which tag you choose since there are many tagged versions for the different SDK versions. They only really differ in that they install the correct version for your requirements. I’ll look at the 1.1.2-sdk tagged image. You can find the source here.

This dockerfile has a few comments and those explain the high level steps. I won’t dive into everything as it’s reasonably clear what’s happening. First the .NET CLI dependencies are installed via apt-get. Again this uses the pattern we’ve seen earlier.

Next the .NET Core SDK is downloaded in tar.gz format. This is extracted and the tar file is then removed. Finally it uses the Linux ln command to create a symbolic link so that /usr/bin/dotnet points at the dotnet binary in /usr/share/dotnet.

The final section populates the local Nuget package cache by creating a new dotnet project and then removing it and any scratch files which aren’t needed.

Layer 6

The final layer before you start adding your own application is the microsoft/aspnetcore-build image layer. Again, there are variances based on the different SDK versions. The latest 1.1.2 image source can be found here.

First it sets some default environment variables. You’ll see for example it sets ENV ASPNETCORE_URLS http://+:80 so unless you override the values using the WebHostBuilder.UseUrls extension your ASP.NET Core application will run under port 80 inside the container.
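For example, if you did need the app to listen on a different port inside the container, you could override the default when building the web host; a quick sketch (the port is arbitrary):

var host = new WebHostBuilder()
    .UseKestrel()
    .UseUrls("http://+:5000") // overrides the ASPNETCORE_URLS value baked into the image
    .UseStartup<Startup>()
    .Build();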

The next two steps do some funky node setup which I won’t dive into.

Next it warms up the NuGet package cache. This time it uses a packagescache.csproj which if you take a look simply includes package references to all of the main ASP.NET related packages. It then calls dotnet restore which will download the packages into the package cache. It cleans up the warmup folder after this.

Finally it sets the working directory to the root path so that the image is clean to start building on in the next layer which will include your application.

Runtime Images

Given the size of the build images, and the fact that there’s no need to include the files used to build your application when you deploy it, it is much better practice to try to reduce the contents of your final image to make it as small as possible. It’s also important to optimise it for rapid start-up times. That’s exactly what the aspnetcore image is for. This image only contains the minimal .NET Core runtime and so results in a much smaller base image size of 316MB. It’s about one quarter of the size of the build image! This means that it doesn’t include the SDK, so it cannot issue commands such as dotnet build and dotnet restore. It can only bootstrap compiled .NET Core assemblies.

Dissecting the microsoft/aspnetcore image

As we did with the build image, we’ll take a look at the differences in the runtime image.

Layers 1 and 2

The first two layers that make up the final aspnetcore image are the same as with the build image. After the base debian:jessie layer, though, things differ.

Layer 3

This layer is named microsoft/dotnet and is tagged for the different runtime versions. I’ll look at the 1.1-runtime-deps tagged image which can be found here.

The docker file for this layer is:

This installs just the certificate authorities, since we no longer get those from the buildpack-deps:jessie-curl image, which is not used in the prior layers. It then also installs the common .NET Core dependencies.

Layer 4

This layer is named microsoft/dotnet and tagged 1.1.2-runtime which can be found here.

This image installs curl and then uses that to download the dotnet runtime binaries. These are extracted and the tar file removed.

Layer 5

The final layer before your application files, this layer is named microsoft/aspnetcore and tagged with 1.1.2 for the latest 1.1.x version. It can be found here.

Starting with the dotnet runtime image this sets the URL environment variable and populates the Nuget package cache as we saw with the build image. As explained in the documentation it also includes a set of native images for all of the ASP.NET Core libraries. These are intended to speed up the start up of the container since they are native images and don’t need to be JITed.

Using the Runtime Image

The intended workflow for .NET Core based ASP.NET Docker images is to create a final image that contains your pre-built files, and specifically only the files explicitly required by the application at runtime. This will generally be the dlls for your application and any dependencies.

There are a couple of strategies to achieve these smaller images. For this blog post I’m going to concentrate on a manual process we can follow locally to create a runtime-only image with our built application. It’s very likely that this is not how you’ll end up producing these images for real projects and real deployment scenarios, but I think it’s useful to see this approach first. In later blog posts we’ll expand on this and explore a couple of strategies to use Docker containers to build our code.

I’ve included a simple demo application that you can use to follow along with this post. It contains a single ASP.NET Core API project and includes a dockerfile which will define an image based on the lightweight aspnetcore image. If you want to follow along you can get the code from GitHub. Let’s look at the contents of the dockerfile.

Much of this looks very similar to the dockerfiles we’ve looked at in my previous posts, but with some key differences. The main one is that this dockerfile defines an image based on the aspnetcore image and not the larger aspnetcore-build image.

You’ll then notice that this dockerfile expects to copy in files from a publish folder. In order for this file to work, we will first need to publish our application to that location. To publish the solution I’m going to use the command line to run dotnet restore and then use the following command:

dotnet publish -c Release -o ../../publish

The output from running this command looks like this:

Microsoft (R) Build Engine version 15.3.117.23532
Copyright (C) Microsoft Corporation. All rights reserved.

DockerDotNetDevsSample3 -> E:\Software Development\Projects\DockerDotNetDevsSample3\src\DockerDotNetDevsSample3\bin\Release\netcoreapp1.1\DockerDotNetDevsSample3.dll
DockerDotNetDevsSample3 -> E:\Software Development\Projects\DockerDotNetDevsSample3\publish\

This command uses the .NET SDK to trigger a build of the application and then publishes the required files into the publish folder. In my case this produces the publish output and copies it to a folder named publish in the root of my solution, the same location as my dockerfile. I do this by passing in the path for the published output using -o. We also set it to publish in release mode using the -c switch to set the configuration. We’ll pretend we’d use this image for a deployment somewhere to production, so release makes sense.

Now that we have some files in our publish folder, the main one being the DLL for our assembly, we will be able to use those files inside our image.

Back to the dockerfile, after copying all of the published files into the container you’ll notice that we no longer need to run the dotnet restore and dotnet build commands. In fact, trying to do so would fail since the base image does not include the SDK, these commands would not be known. We already have our restored and built files which we copied into the image.

The final difference you will see is that the entrypoint for this image is a bit different. In our earlier examples we used dotnet run in the working directory containing our csproj file. Again, this relied on the SDK, which we don’t have here. This dockerfile instead runs the dotnet host directly against the DockerDotNetDevsSample3.dll. dotnet will bootstrap and fire into the main method of our application.

Let’s build an image from this dockerfile and take a look at what happens.

docker build -t dockerdemo3 .

The output looks like this:

Sending build context to Docker daemon 33.14 MB
Step 1/4 : FROM microsoft/aspnetcore:1.1
---> 3b1cb606ea82
Step 2/4 : WORKDIR /app
---> c20f4b67da95
Removing intermediate container 2a2cf55d8c10
Step 3/4 : COPY ./publish .
---> 23f83ca25308
Removing intermediate container cdf2a0a1c6c6
Step 4/4 : ENTRYPOINT dotnet DockerDotNetDevsSample3.dll
---> Running in 1783718c0ea2
---> 989d5b6eae63
Removing intermediate container 1783718c0ea2
Successfully built 989d5b6eae63
SECURITY WARNING: You are building a Docker image from Windows against a non-Windows Docker host. All files and directories added to build context will have '-rwxr-xr-x' permissions. It is recommended to double check and reset permissions for sensitive files and directories.

As you can see, the docker build is very quick this time since it is not running the restore and .NET build steps. It’s grabbing the pre-built files and setting up the entry point.

I can now run this image using

docker run -d -p 8080:80 dockerdemo3

Navigating to http://localhost:8080/api/values we should see the data from the API.

Summary

In this post we’ve looked in some detail at the layers that make up the aspnetcore-build image and compared them to the layers in the aspnetcore image, which just includes the .NET Core runtime. We’ve then seen how we can generate a small runtime-based image for our own application, which will be much smaller and therefore better to store in a remote registry and quicker to pull down. In future posts we’ll look at other methods that allow us to use Docker for the build and publish steps within a build system, as well as looking at some other things we can do to ensure we minimise the file size of the layers of our image.

Other Posts In This Series

Part 1 – Docker for .NET Developers Introduction
Part 2 – Working with Docker files
Part 3 – Why we started using Docker with ASP.NET Core
Part 4 – Working with docker-compose and multiple ASP.NET Core Microservices
Part 5 – This post
Part 6 – Using Docker for Build and Continuous Deployment
Part 7 – Setting up Amazon EC2 Container Registry