Upgrading to ASP.NET Core 2.0 My experience of upgrading a real-world solution from ASP.NET Core 1.0 to 2.0

On the 14th of August, Microsoft announced the release of .NET Core 2.0, ASP.NET Core 2.0 and EF Core 2.0, the next major releases of their open source, cross platform frameworks and libraries. This is a very exciting release and one which I hope marks the stabilisation of the framework and enables more developers and businesses to begin really looking at using .NET Core 2.0.

One of the big changes with .NET Core 2.0 is support for the new .NET Standard 2.0 specification (also part of the release announcements) which defines the API surface that platforms should conform to. This brings back around 20,000 APIs that were not originally included in .NET Core 1.x. This should mean that porting existing full .NET Framework applications over to Core may now be a more realistic prospect with much greater parity between the frameworks.

As I have discussed a few times on this blog, I contribute to a fantastic project called allReady, managed by the charity, Humanitarian Toolbox. This project started originally in the early beta days of .NET Core and ASP.NET Core and has evolved along with the framework through the various changes and refinements. With the release of 2.0 we were keen to upgrade the application to use .NET Core 2.0 and ASP.NET Core 2.0. I took it upon myself to attempt to upgrade allReady and to document the experience as I went. Hopefully I’ve found the right balance of detail to readability for this one which has been a bit of an epic!

Installing .NET Core 2.0

The first step you will need to complete is to install the new 2.0 SDK and if you use Visual Studio as your IDE of choice, you will also need to install the latest version of Visual Studio 15.3.x in order to work with .NET Core 2.0. These steps are well documented and pretty easy.

Upgrading the MVC Project

Upon loading the allReady web solution in Visual Studio 15.3 (aka 2017 update 3), my first focus was on upgrading the web project and getting it to run. I therefore unloaded the test project so that I wasn’t distracted by errors from that.

Many of the main steps that I followed as I upgraded the solution can be found outlined in the Microsoft .NET Core 1.0 to 2.0 migration guide.

Upgrading the Project File and Dependencies

The first job was to upgrade the project to target .NET Core 2.0 and to upgrade its dependencies to request the ASP.NET Core 2.0 packages. To do this I right clicked my project and chose to edit the csproj file directly. With .NET Core projects we can now do this without having to unload the project first. .NET Core projects have a targetFramework node which in our case was set to netcoreapp1.0. To upgrade to target the latest Target Framework Moniker (TFM) for Core 2.0 I simply changed this to netcoreapp2.0.

Our project file also included a runtimeFrameworkVersion property set to 1.0.4 which I removed to ensure that the project would use the latest available runtime. The migration guide also specifies that the PackageTargetFallback node and variable should be renamed to AssetTargetFallback and so I made that change.

The next big change was to begin using a new ASP.NET Core meta package to define our dependencies. One of the drawbacks people have experienced when depending on the many individual Nuget packages which make up the ASP.NET Core platform is that managing the package versions can be a bit painful. Each package can have slightly different minor version numbers as they are versioned separately. During a patch release of ASP.NET Core, for example, it can be hard to know which exact versions represent the latest of each of the packages as they don’t necessarily all update together.

The ASP.NET team are hoping to solve this with the availability of a new Microsoft.AspNetCore.All metapackage. This package contains dependencies on all of the common Microsoft.AspNetCore, Microsoft.EntityFrameworkCore and Microsoft.Extensions packages. You can now reference just this one package to work with all of the ASP.NET Core and EF Core components.

One of the changes that enables this is the inclusion of a .NET Core runtime store which contains all of the required runtime packages. Since the packages are part of the runtime, your app won’t need to download many tens of dependencies from Nuget. The runtime store assets are also precompiled which helps with performance.

To make use of the new meta package I first removed all existing ASP.NET related dependencies from my explicit project package references. I could then add in the following reference: <PackageReference Include="Microsoft.AspNetCore.All" Version="2.0.0" />.

The final change in the project file was to update the versions for the .NET Core CLI tools specified in the DotNetCliToolReference nodes of our project file. In each case I moved them to the 2.0.0 version. With this completed I was able to save and close the project file, which triggers a package restore.

Our project file went from this:
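(A simplified sketch rather than the exact file; the fallback list and the single package reference shown here are illustrative placeholders for the many individual references we actually had.)

<Project Sdk="Microsoft.NET.Sdk.Web">
  <PropertyGroup>
    <TargetFramework>netcoreapp1.0</TargetFramework>
    <RuntimeFrameworkVersion>1.0.4</RuntimeFrameworkVersion>
    <PackageTargetFallback>$(PackageTargetFallback);portable-net45+win8</PackageTargetFallback>
  </PropertyGroup>
  <ItemGroup>
    <PackageReference Include="Microsoft.AspNetCore.Mvc" Version="1.0.2" />
    <!-- ...many more individual ASP.NET Core / EF Core package references... -->
  </ItemGroup>
  <ItemGroup>
    <DotNetCliToolReference Include="Microsoft.EntityFrameworkCore.Tools.DotNet" Version="1.0.0" />
  </ItemGroup>
</Project>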

to this:
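(Again a simplified sketch; our real file contained additional items beyond those shown.)

<Project Sdk="Microsoft.NET.Sdk.Web">
  <PropertyGroup>
    <TargetFramework>netcoreapp2.0</TargetFramework>
    <AssetTargetFallback>$(AssetTargetFallback);portable-net45+win8</AssetTargetFallback>
  </PropertyGroup>
  <ItemGroup>
    <PackageReference Include="Microsoft.AspNetCore.All" Version="2.0.0" />
  </ItemGroup>
  <ItemGroup>
    <DotNetCliToolReference Include="Microsoft.EntityFrameworkCore.Tools.DotNet" Version="2.0.0" />
  </ItemGroup>
</Project>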

The next thing I needed to do was to remove a global.json file that we had in our solution which was forcing the use of a specific SDK version; in our case 1.0.1. We want our project to use the latest SDK so I removed this file entirely. At this point I was in a position to attempt to compile the web project. As expected the build failed and listed a number of errors that I needed to work through and fix.

Identity / Authentication Changes

With ASP.NET Core 2.0, some of the biggest breaking changes occur in the Identity namespace. Microsoft have adjusted quite a few things regarding the Identity models and authentication. These changes did require some fixes and restructuring of our code to comply with the new model. Microsoft put together a specific migration document which is worth reviewing if you need to migrate Identity code.

The first change was to temporarily comment out some code we have as an extension to the IApplicationBuilder. I would use this code to ensure I had fully replicated the required setup before removing it. We used this code to conditionally “use” the various 3rd party login providers within our project; for example – UseFacebookAuthentication. One of the changes made with Identity in ASP.NET Core 2.0 is that third party login providers are now configured when registering the Authentication services and are no longer added as individual middleware components.

To account for this change I updated our ConfigureServices method to use the new AddAuthentication extension method on the IServiceCollection. This also includes extension methods on the returned AuthenticationBuilder which we can use to add and configure the additional authentication providers. We conditionally register our providers only if the application configuration includes the required App / Client Id for each provider. We do this with multiple, optional calls to the AddAuthentication method. I’ve checked and this is a safe approach to meet this requirement. At this point I could replicate the 3rd party authentication configuration that we had previously setup using the UseXYZAuthentication IApplicationBuilder extensions.
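The shape of the code is roughly as follows (the configuration keys and the two providers shown here are illustrative; Facebook, Twitter, Google and Microsoft accounts all follow the same pattern):

// In ConfigureServices - each provider is only registered when its configuration is present.
if (Configuration["Authentication:Facebook:AppId"] != null)
{
    services.AddAuthentication().AddFacebook(options =>
    {
        options.AppId = Configuration["Authentication:Facebook:AppId"];
        options.AppSecret = Configuration["Authentication:Facebook:AppSecret"];
    });
}

if (Configuration["Authentication:Twitter:ConsumerKey"] != null)
{
    services.AddAuthentication().AddTwitter(options =>
    {
        options.ConsumerKey = Configuration["Authentication:Twitter:ConsumerKey"];
        options.ConsumerSecret = Configuration["Authentication:Twitter:ConsumerSecret"];
    });
}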

With this complete, our Configure method could be updated to include the call to UseAuthentication which adds the authentication middleware. The commented code could now be removed.

IdentityCookieOptions

Our account controller (based on the original ASP.NET Core MVC template) had a dependency on IOptions<IdentityCookieOptions> to get the ExternalCookieAuthenticationScheme name. This is now redundant in 2.0 as these are now available via constants and we can use that constant directly in our login action as per the authentication migration guide.
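In our case that meant something along these lines in the login action (a simplified sketch rather than the exact code):

// Clear any existing external cookie before starting a new external login flow.
await HttpContext.SignOutAsync(IdentityConstants.ExternalScheme);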

In 1.0 we set our AccessDeniedPath for the cookie options as one of the options on the AddIdentity extension for the IServiceCollection, where we previously set it as follows:
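(A sketch of the 1.x style configuration, simplified; the path shown is illustrative.)

services.AddIdentity<ApplicationUser, IdentityRole>(options =>
{
    // 1.x style: cookie settings lived on IdentityOptions
    options.Cookies.ApplicationCookie.AccessDeniedPath = "/Home/AccessDenied";
});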

There is now a specific extension to configure the application cookie where we set this value so I added that code to ConfigureServices.
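The 2.0 replacement looks something like this (again, the path is illustrative):

services.ConfigureApplicationCookie(options =>
{
    options.AccessDeniedPath = "/Home/AccessDenied";
});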

The next change is that IdentityUser and IdentityRole have been moved from the Microsoft.AspNetCore.Identity.EntityFrameworkCore namespace to Microsoft.AspNetCore.Identity; so our using statements needed to be updated to reflect this change in any classes referencing either of these.

Next on my build error hit list was an error caused by Microsoft.AspNetCore.Authentication.FailureContext no longer being found. This has been renamed to RemoteFailureContext in ASP.NET Core 2.0 so I updated the affected code.

Another change as part of Identity 2.0 is that the Claims, Roles and Login navigation properties which we made use of have been removed from the base IdentityUser class. As a result I needed to add these back into our derived ApplicationUser class directly and update the OnModelCreating method inside our DbContext to define the correct foreign key relationships. This was as described in the migration guide for Authentication and Identity.

A small change I had to take care of is that GetExternalAuthenticationSchemes has been made async (and renamed accordingly), so I updated our code to call and await the GetExternalAuthenticationSchemesAsync method. The return type has also changed, so I also needed to update one of our view models to take the resulting list of AuthenticationSchemes rather than AuthenticationDescriptions.

The final authentication change was the result of a new set of extension methods being added to HttpContext in Microsoft.AspNetCore.Authentication. These are intended to be used for calling SignOutAsync and the other similar methods which were previously available via the IAuthenticationManager.

In places where we called these I changed from

await httpContext.Authentication.ChallengeAsync();

to

await httpContext.ChallengeAsync();

Other Changes / Build Errors

With the authentication and Identity related changes completed I still had a few build errors to take care of before the application would compile.

In 1.1.0 Microsoft added an additional result type of AcceptedResult (the issue is available here) and a helper method on ControllerBase to easily return this result. Since we had been targeting 1.0.x we had not faced this change before. Our SmsResponseController was exposing a constant string called “Accepted” which then hid the new inherited member on ControllerBase. I renamed our member to avoid this naming conflict.

We also found that Microsoft.Net.Http.Headers.ContentDispositionHeaderValue.FileName had changed from being defined as a string to a StringSegment instead. This meant we had to update code which was calling Trim on it to first call ToString on the StringSegment value.

In one place we were using the previously available TaskCache.CompletedTask to get a cached instance of a completed Task. However, since Task.CompletedTask is now available to us when targeting .NET Standard 2.0, TaskCache has been removed, so our code could switch to using Task.CompletedTask instead.

Other Migration Changes

There are some other structural changes we can and should make to an existing ASP.NET Core 1.x project to take advantage of the ASP.NET Core 2.0 conventions. The first of these was to update program.cs to use the newer CreateDefaultBuilder functionality. This method is designed to simplify the setup of an ASP.NET Core WebHost by defining some common defaults which we previously had to setup manually in the Startup class. It adds in Kestrel and IISIntegration for example. The IWebHost in 2.0 now also sets up configuration and logging, registering them with DI earlier in the application lifecycle. The defaults work for basic applications but depending on your requirements you may need to use the ConfigureLogging and ConfigureAppConfiguration methods to apply additional setup of these components.

Our program.cs changed from:
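(A sketch based on the standard 1.x template; our real Main had a similar shape.)

using System.IO;
using Microsoft.AspNetCore.Hosting;

public class Program
{
    public static void Main(string[] args)
    {
        var host = new WebHostBuilder()
            .UseKestrel()
            .UseContentRoot(Directory.GetCurrentDirectory())
            .UseIISIntegration()
            .UseStartup<Startup>()
            .Build();

        host.Run();
    }
}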

to
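(Again a sketch; this is the standard 2.0 pattern, which is essentially what we ended up with.)

using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;

public class Program
{
    public static void Main(string[] args)
    {
        BuildWebHost(args).Run();
    }

    public static IWebHost BuildWebHost(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .UseStartup<Startup>()
            .Build();
}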

Now that Configuration and Logging are set up on the IWebHost, we no longer need to define the setup for those components in the Startup.cs file, so I was able to strip out some code from Startup.cs. In 1.x we used the constructor of Startup to build the Configuration with a ConfigurationBuilder. This could be taken out entirely. Instead we could ask for an IConfiguration object in the constructor parameters, which will be satisfied by DI as it is now registered by default.
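The top of Startup therefore ends up looking something like this (a simplified sketch):

using Microsoft.Extensions.Configuration;

public class Startup
{
    public Startup(IConfiguration configuration)
    {
        // Configuration is now built by CreateDefaultBuilder and injected for us.
        Configuration = configuration;
    }

    public IConfiguration Configuration { get; }
}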

I was also able to remove the logging setup which used an ILoggerFactory in the Configure method in 1.x. This is now also set up earlier by the IWebHost which feels like a better place for it. It also means we get more logging during the application bootstrapping. One change I made as a result of relying on the defaults for the logging setup was to rename our config.json file to appsettings.json. appsettings.json is included by default by the new CreateDefaultBuilder so it’s better that our config file matches this convention.

Finally, ApplicationInsights is now injected into our application by Visual Studio and Azure using a hook that lets them place code into the header and body tags, so we no longer need to manually wire up the ApplicationInsights functionality. This meant I could strip the registration of the service and also remove some code in our razor layout which was adding the JavaScript for ApplicationInsights.

From our ConfigureServices method I removed:

services.AddApplicationInsightsTelemetry(Configuration);

From our _ViewImports.cshtml file I removed

@inject Microsoft.ApplicationInsights.AspNetCore.JavaScriptSnippet JavaScriptSnippet

From the head section of our _Layout.cshtml file I removed

@Html.Raw(JavaScriptSnippet.FullScript)

Partial Success!

At this point the code was able to compile but I hit some runtime errors when calling context.Database.Migrate in our Configure method:

“Both relationships between ‘CampaignContact.Contact’ and ‘Contact’ and between ‘CampaignContact’ and ‘Contact.CampaignContacts’ could use {‘ContactId’} as the foreign key. To resolve this configure the foreign key properties explicitly on at least one of the relationships.”

And

“Both relationships between ‘OrganizationContact.Contact’ and ‘Contact’ and between ‘OrganizationContact’ and ‘Contact.OrganizationContacts’ could use {‘ContactId’} as the foreign key. To resolve this configure the foreign key properties explicitly on at least one of the relationships.”

To solve these issues I updated our DbContext fluent configuration in OnModelCreating to explicitly define the relationships and foreign keys.

From:

To:
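(A sketch using the entity and property names from the error messages; our real OnModelCreating configures more than this.)

modelBuilder.Entity<CampaignContact>()
    .HasOne(cc => cc.Contact)
    .WithMany(c => c.CampaignContacts)
    .HasForeignKey(cc => cc.ContactId);

modelBuilder.Entity<OrganizationContact>()
    .HasOne(oc => oc.Contact)
    .WithMany(c => c.OrganizationContacts)
    .HasForeignKey(oc => oc.ContactId);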

This got me a step further but I then hit the following error:

System.Data.SqlClient.SqlException: ‘The name “Unknown” is not permitted in this context. Valid expressions are constants, constant expressions, and (in some contexts) variables. Column names are not permitted.’

I tracked this down to a migration which sets a default value on an integer column using an enum. I found that I needed to explicitly cast the enum to int to make this migration work as expected.
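To illustrate the kind of change (the column, table and enum here are hypothetical rather than the real migration):

migrationBuilder.AddColumn<int>(
    name: "Status",
    table: "Requests",
    nullable: false,
    defaultValue: (int)RequestStatus.Unassigned); // previously passed the enum value directly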

Another step forward, but I still ran into issues. The next error I received was System.ObjectDisposedException: ‘Cannot access a disposed object.’ from Startup.cs when calling await SampleData.CreateAdminUser();

This was caused by a naughty use of async void for the Configure method. I removed the async keyword and used GetAwaiter().GetResult() instead since async void is not a good idea!
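In simplified form, the change was along these lines:

// Before: public async void Configure(IApplicationBuilder app, ...) { ... await SampleData.CreateAdminUser(); }
public void Configure(IApplicationBuilder app)
{
    // ...middleware registration as before...

    // Block on the async seeding call rather than relying on async void.
    SampleData.CreateAdminUser().GetAwaiter().GetResult();
}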

By this point I was really hoping I was getting somewhere. However next I had some odd issues with our TagHelpers. We have two tag helpers used to aid some datetime functionality. The errors I was seeing seemed to be due to the TagHelpers getting invoked for the head and body elements of the page. I’ve yet to spend enough time to track down what causes this so have applied workarounds for now.

On our TimeZoneNameTagHelper we were getting a null reference error when it tried to apply for the head tag. We expect a TimeZoneId to be supplied via an attribute, which was not present on the head tag, and so this resulted in a null TimeZoneId when we tried to use it to look up the time zone with FindSystemTimeZoneById. The temporary fix in this case was to check the TimeZoneId for null and just return if so.

With our TimeTagHelper I had to do an explicit check within the Process method to ensure the TagName matched “time”. This avoided it being applied for the head and body tags. I have created follow-up issues to try to understand this behaviour.

With these changes in place, the code was finally compiling and running. Yay!

Upgrading the Test Project

With the main web application working I was ready to focus on upgrading the test project and making it compile (and seeing if the tests would pass!) The first step here was updating the project file to target netcoreapp2.0 as I had done with the web project. I also updated some of the dependencies to the latest stable versions. This was partially required in order to restore packages and it also made sense to do it at this point since I already had a lot of changes to include. Some of our dependencies were still old pre RTM packages. I also took the chance to clean out some unnecessary nodes in the project file.

With the packages restoring, attempting a build at this stage left me with 134 build errors! Some as a result of changes to Identity, some due to upgrading the dependencies and some due to code fixes made to the main project as a result of the migration.

The first broken tests I focused on were any that had broken due to the Identity changes. These were relatively quick to update, such as fixing the changed namespaces; for example, using statements referencing Microsoft.AspNetCore.Identity.EntityFrameworkCore moved to Microsoft.AspNetCore.Identity.

I then had a number of tests which were broken due to a change in Moq, the library we use for mocking objects in our tests. When setting up methods on mocked objects we could previously return a null quite simply by passing null as the parameter to ReturnsAsync. However, there is now another extension method which also accepts a single parameter and the compiler is not sure which one we are intending to use. This now requires that we explicitly cast the null to the correct type, to indicate we are passing the expected value and not a delegate which returns the value. This resulted in me having to update 46 tests.
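As an illustration (the mocked type and method here are hypothetical):

mockUserManager
    .Setup(m => m.FindByIdAsync(It.IsAny<string>()))
    .ReturnsAsync((ApplicationUser)null); // previously just ReturnsAsync(null)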

The remainder of build failures were mostly caused by changing the number of parameters for the AccountController constructor so our tests which were creating one as the subject under test needed to be updated also to match the correct number of parameters.

At this point I had compiling test code and I was then able to run my tests! Oh, 101 failed tests!

When I looked a little deeper I noticed these were nearly all tests which used our InMemoryContextTest abstract base class which includes a registered instance of an InMemory DbContext on an IServiceProvider. With a bit of trial and error I realised that my queries were not returning any results, where previously they had in 1.0. When I experimented I found that it was in cases where our query called Include to eager load some of the related entities. However, our seed data for the test which populated the InMemory database for each test had not set those related entities. The InMemory provider does not enforce referential integrity and so there are no errors thrown when saving objects with missing required navigational properties.

In 1.x the query behaviour worked under this scenario but in 2.0 something had changed. I raised an issue about this one and the EF team responded quickly with… “The reason for the behaviour change is that now include is using navigation rewrite logic to construct the queries (whereas before we manually crafted include statements). Navigation rewrite produces INNER JOIN pattern for required relationships and LEFT JOIN pattern for optional. Before we would always hand-craft LEFT JOIN pattern, regardless of the relationship requiredness between child and parent.”

To correct for this I needed to ensure our test setups added the required related entities so that they would be returned from the queries as expected. In our actual code, running against the SQL Server provider, this is not an issue since the database enforces the referential integrity when saving.

With the tests fixed up I was finally at a point where everything compiled, ran and the tests were passing. I considered this a good place and was able to submit my PR to get the allReady project to 2.0 which was promptly merged in.

Summary

For the most part the migration documentation provided by Microsoft was very good and covered many of the things I actually experienced. In a few cases I found little extra things I needed to solve. The issues around the tests and the EF changes probably took the longest to isolate and then fix up. It’s great to have been able to help move the project forward and get it to 2.0 very soon after release. It’s a great reference project for developers wanting to view (and hopefully work on) a real-world ASP.NET Core 2.0 solution. Hopefully my experience will help others during their migrations.

Migration

Migrating from project.json to csproj using Visual Studio 2017 Moving a real world ASP.NET Core application using VS2015 project.json to VS2017 and csproj

This past weekend I spent a few hours working on migrating the Humanitarian Toolbox allReady project over to the new csproj based Visual Studio 2017 solution format. I’ve written a few times about the plans to retire the project.json file introduced by ASP.NET Core. If you want some background you can read those posts first (don’t worry, I’ll wait)…

The summary is that in Visual Studio 2017 and the final ASP.NET Core SDK tooling, the project.json file was removed in favour of a revised csproj file. This has left some people confused and wondering why project.json is missing in Visual Studio 2017. The decision was mostly made to support customers reliant on using MsBuild and to support migrating larger projects to .NET Core.

Moving from project.json to csproj

In this post I wanted to cover the steps I took to migrate a real ASP.NET Core application to Visual Studio 2017. I’ve recorded the steps that I took and the issues I faced, along with any solutions I was able to find.

TL;DR; It wasn’t as painful as I’d expected it might be, but it also wasn’t entirely automated. There were a few issues along the way, which required some intervention on my part to finally get everything building correctly.

My first trial attempt at this process was actually about 2 weeks ago. At that time I was using the first RTM version of VS 2017 and I started by simply opening our allReady solution file with Visual Studio 2017. 

Visual Studio presents a dialog showing that any project.json projects will be migrated.

Visual Studio 2017 project.json migration dialog

After accepting that message VS will do its thing. Under the covers it’s calling the dotnet migrate command which handles the conversion for you. I decided to use Visual Studio 2017 for the migration, rather than the command line, as I expect that’s the approach most people will be taking.

During migration; which took only a few seconds for me, you’ll see a progress window from Visual Studio.

ASP.NET Core project.json migration progress

Once the migration is completed you are shown a migration report.

project.json to csproj migration report

This initially led me to believe that things had gone pretty well. In fact, when I tried a restore and build things were not quite so positive. I was shown a staggering 982 errors and 3 warnings. It was late and I was tired, so I decided to abandon attempt one at that point! That was a problem for my future self!

Attempt Two

This weekend I was finally in a position to have another try at the migration. I took the latest version of our code and before starting, updated Visual Studio 2017 because I’d seen that it had some updates pending.

I repeated the initial steps as before and this time, although the result was still not looking great, it was a fair amount improved on attempt 1. Lesson 1 – Always do migrations with the most updated version of the IDE! This time I was at 199 errors and 2 warnings.

project.json Migration Errors

The errors were mostly relating to dependencies so it looked to be a restore issue. I tried cleaning the solution, restarting Visual Studio and even manually deleting the bin and obj folders. Each time I was hit with a wall of red errors when trying to build.

At this point I jumped onto Google and found a suggestion to clear the local nuget cache. I navigated to %USERPROFILE%/.nuget and deleted the contents.

Upon doing that, things were starting to look a little better. The errors cleared but I now had some warnings to deal with and the site would not run.

The second warning was a little strange.

The referenced project ‘..\..\Backup\Web-App\AllReady\AllReady.csproj’ does not exist.

The unit test project was referencing the web project but using the backup folder path. The backup folder is created during migration to hold the original project.json and xproj files in case you need to get back to a prior state. That folder of course didn’t include a csproj file. I removed the reference and re-added it, pointing it to the correct folder.

I also took the chance to move the backup folder so that it was outside of my main solution structure, just in case it was causing or masking any other issues.

Attempting to build at this stage reduced my errors but I still had 29 warnings. To be sure that Visual Studio 2017 wasn’t partly to blame, I also tried using the command line dotnet restore commands directly. However, I was getting warnings there also.

Inside Visual Studio the warning looked like this:

Detected package downgrade: Microsoft.Extensions.Configuration.Abstractions from 1.1.0 to 1.0.2

Trying to run the application at this stage resulted in the following exception:

System.IO.FileLoadException: ‘Could not load file or assembly ‘Microsoft.Extensions.Configuration.Abstractions, Version 1.1.0.0 … The located assembly’s manifest definition does not match the assembly reference.

For some reason the migration had chosen version 1.1.0 for the Microsoft.VisualStudio.Web.BrowserLink dependency when it created the csproj file.

<PackageReference Include="Microsoft.VisualStudio.Web.BrowserLink" Version="1.1.0" />

The solution was to change it to 1.0.1 since allReady is currently targeting the LTS stream of ASP.NET Core. All of the other dependencies were registered with their LTS versions. This rogue 1.1.x dependency was therefore causing version conflicts.

<PackageReference Include="Microsoft.VisualStudio.Web.BrowserLink" Version="1.0.1" />

At this point, building the solution was getting a bit further. I now had a single warning left for the web project. This warning stated that AddUserSecrets(IConfigurationBuilder) is now an obsolete method.

‘ConfigurationExtensions.AddUserSecrets(IConfigurationBuilder)’ is obsolete: ‘This method is obsolete and will be removed in a future version. The recommended alternative is .AddUserSecrets(string userSecretsId) or .AddUserSecrets<TStartup>().’

The warning explains that we were now expected to pass in the user secrets key as a string to the method or use .AddUserSecrets<TStartup> which is syntactic sugar over AddUserSecrets(this IConfigurationBuilder configuration, Assembly assembly). Previously the storage location for the user secrets ID was the project.json file. It’s now moved to an assembly attribute. The previous overload is not guaranteed to return the correct Assembly so this could produce issues. You can read a fuller explanation from the team on the GitHub repo.

In our case I changed the appropriate line in Startup.cs from

builder.AddUserSecrets();

to

builder.AddUserSecrets("aspnet5-AllReady-468aac76-4430-43e6-848e-f4a3b90d61d0");

Test Project Changes

At this point the web project was building but there were issues inside the test project:

Test project warning after project.json migration

Found conflicts between different versions of the same dependent assembly that could not be resolved. These reference conflicts are listed in the build log when log verbosity is set to detailed.

Some of the dependencies we had been using had newer versions so I took a guess that perhaps I needed to update the Xunit / test related ones. I changed these three dependencies:

<PackageReference Include="Microsoft.NET.Test.Sdk" Version="15.0.0-preview-20170106-08" />
<PackageReference Include="xunit.runner.visualstudio" Version="2.2.0-beta5-build1225" />
<PackageReference Include="xunit" Version="2.2.0-beta5-build3474" />

to

<PackageReference Include="Microsoft.NET.Test.Sdk" Version="15.0.0" />
<PackageReference Include="xunit.runner.visualstudio" Version="2.2.0" />
<PackageReference Include="xunit" Version="2.2.0" />

Finally the entire solution was building, tests were running and I could start the website locally.

Initial thoughts on csproj and Visual Studio 2017

Generally, everything seems to work once inside Visual Studio 2017. The migration process was a little more effort than I would have liked though.

No auto complete

In Visual Studio 2015 we had version auto complete for the dependencies from NuGet when editing project.json. We’ve lost this in Visual Studio 2017 which is a shame. It now pretty much forces me into the NuGet Package Manager which I find slower. I understand that autocomplete might come back at some point, but it’s a shame we’re lacking it upon release. 

Less easy to scan

Personally, I still prefer the project.json format for readability. There’s still a little more noise in the XML than we had inside the JSON version. That said, the work done to the revised csproj format is great and it’s a vast improvement on the csproj of the past.

Summing Up

Overall the process wasn’t too harrowing. Visual Studio and dotnet migrate handled most of the work for me. It was a shame that I had to resort on a few occasions to Google for manual solutions to solve problems that I’m sure should have been addressed by the tooling. The restore of the dependencies not working without clearing the cache was an unexpected problem.

But with the work of migration behind me I’m ready to move on. I’ve reached the acceptance stage on my project.json grieving process!

Anyone new to ASP.NET Core, using it for the first time in Visual Studio 2017 will probably look at older blog posts and wonder what we were all moaning about. The csproj is a great improvement on what we had in the past, so if you’ve never seen project.json, there’s nothing to miss and only improvements to enjoy.

Migrating from .NET Framework to .NET Core The journey of re-targeting an ASP.NET Core application onto .NET Core

What better way to start the new year than with some coding? In my case I’d found a bit of time during the end of my Christmas vacation to tackle an issue on allReady I’d been wanting to work on for a while. Before getting into the issue, if you want to read more about what allReady is you can read my earlier blog post where I cover contributing to it in greater detail.

allReady was first created during the early betas of ASP.NET Core (at which point it was known as ASP.NET 5). At that time, due to some dependencies which did not yet support .NET Core, the application was built on top of the full .NET Framework 4.6.

The first thing to discuss briefly is why and how we can run ASP.NET Core on the full traditional .NET framework. While it might be reasonable to assume that ASP.NET Core naturally needs to run on the new .NET Core framework, remember that .NET Core ultimately includes a subset of the larger full .NET framework API. Because of this, it is perfectly possible to run ASP.NET Core on the original 4.x version of the .NET framework. This is an advantage to those that want to make use of the new features and optimizations within ASP.NET Core, while also still utilising the libraries they know and love. This is a perfectly reasonable way to run ASP.NET Core but with an important limitation. By targeting the .NET framework you are not able to develop or host the ASP.NET application outside of Windows.

For the allReady project this had started to become a bit of a limitation. We had Mac users who were keen to contribute to the project, but were unable to do so without installing and running a Windows VM on their device. As we were getting to the end of some complex requirements for allReady’s v1 milestone, I took the opportunity to spend some time looking at retargeting the application to run on .NET Core. This turned out to be a bit more involved than I had first anticipated. However, we have succeeded and the code base has been proven to build and run on both Mac and Linux devices.

Updating the Solution

The first part of the process was to update the project.json file inside the web solution. The main update here was changing the framework specification to netcoreapp from net46.

project.json before:
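(A sketch showing only the relevant fragment; the real file contained much more.)

"frameworks": {
  "net46": { }
}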

project.json after:
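(Again only the fragment that changed; the exact imports needed depend on the dependencies involved, so the value shown here is illustrative.)

"frameworks": {
  "netcoreapp1.0": {
    "imports": [
      "portable-net45+win8"
    ]
  }
}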

We had decided as a team to continue to track the .NET Core 1.0.x LTS support stream. In short, this is a stable version of the framework, under full Microsoft support. New minor patch releases are appearing to fix major bugs or security issues, but otherwise the changes are expected to be quite limited. Our project was already targeting the latest 1.0.3 SDK and latest 1.0.x libraries for each of the components.

You’ll see above that as well as changing the framework moniker I’ve added an imports section as well. This was required in our case as we have some dependencies on some libraries which don’t yet specify a dotnet Target Moniker or .NET Standard version. It’s essentially there for backwards compatibility until the ecosystem settles down.

Handling Dependencies

Upon making the required changes to project.json, it became apparent that some of the packages we depend on could no longer be restored. The first thing I did was to use Nuget Package Manager to update all of the dependencies to their latest versions to see if any of those had been updated to support .NET Core. This helped in some of the cases but still left me with a few that were not compatible. For the time being I commented those out as well as commenting out any code using those dependencies. In our case this included the following main libraries we had issues with…

  • ZXing which was being used in one place to generate a QR code image for use within the allReady phone application. This had its own dependency on System.Drawing which is not part of .NET Core.
  • LinqToTwitter which was being used in one area of the application to query for a user’s information after a sign in via Twitter.
  • GeoCoding.net which was used in a few places in order to take an address and geocode it to latitude and longitude coordinates.

With those taken care of for now (if only commenting out code was considered “fixing”), the last remaining issue for the web project was a dependency on a separate library project in our solution which contains some shared models. This library is used by both our web application and also in some Azure web jobs we have defined. As a result it needed to support both .NET Core and the .NET Framework. The solution here was to convert it to a .NET standard library. In my case I chose the lowest standard version I could find which gave me the API surface I needed, while supporting my dependent projects. This was the .NET Standard 1.3 version. The project.json for this library was changed as follows.

Before:

After:
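(Only the frameworks fragment shown; essentially the library now targets .NET Standard 1.3.)

"frameworks": {
  "netstandard1.3": { }
}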

With this done, I also had to update our test library project.json in a similar way in order to support .NET Core by changing the frameworks section and updating to the latest versions of our dependencies as with the main web project. The changes in this file were very similar to what I’ve already shown, so I won’t repeat the code here.

Wrapping missing dependencies

With the package restore now succeeding for the web project, test project and our .NET standard project my next goal was to find solutions to the libraries we could no longer use.

ZXing was an easy decision as currently the phone application project is a bit stale and it was decided that we could simply remove the QRCode endpoint and this dependency as a result.

When I reviewed the use of LinqToTwitter it was only really abstracting one API call to Twitter behind a more fluent Linq syntax. We were using it to request a Twitter user’s profile information. The option I chose here was to write our own small service to wrap the call to the Twitter REST API using HttpClient. There are some hoops to jump through in order to establish the authentication with Twitter but this ultimately didn’t prove too difficult. I won’t go into the exact details here since this post is intended to cover the general “migration” process.

The final dependent library that wasn’t able to support .NET Core was GeoCoding.net. We were using this library to perform address to coordinate lookups via Google. On review of the usages, it was again clear that we had a whole dependency for what amounted to a single call to a Google endpoint. We were already calling another part of the Google REST API directly from a custom service class, so again, the decision was to wrap up the geocoding functionality in our own code, calling the API directly. Again, the exact details are beyond my scope here, but I may cover this in a future blog post if there’s interest.

Handling API Differences

With the problem packages removed I tried to compile the solution. I was immediately hit by a wall of errors. This was initially a little daunting, but after looking through them I could see that many were quite similar and due to changes to the .NET Core API surface.

Reflection

The first of these was where we’d been using reflection. We have a couple of places in the web app that rely on reflection for wiring up our IoC container and also in some of the test classes. In .NET Core 1.x the Type returned from Object.GetType() no longer exposes the full reflection detail it did on the full framework. In order to access the full type information there is an extension method called GetTypeInfo(). It returns a TypeInfo which is similar to what Type used to provide.

Before:
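(A hypothetical example rather than the exact allReady code.)

var assembly = typeof(Startup).Assembly;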

After:
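(The same hypothetical example on .NET Core 1.x.)

var assembly = typeof(Startup).GetTypeInfo().Assembly;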

In this example you can see that the only change is adding in the GetTypeInfo call.

Date Formatting

We also had some code calling ToLongDateString on dates. This is no longer available, so I switched to formatting the date using the ToString method instead.

Before:
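(The variable name here is illustrative.)

var text = someDate.ToLongDateString();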

After:
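(The exact format string we chose may have differed; "D" is the standard long date pattern.)

var text = someDate.ToString("D");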

Exceptions

In a couple of places we were throwing ApplicationException which also seems to have been removed. I switched to a general Exception in this case.

Before:
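(The exception message is illustrative.)

throw new ApplicationException("Unable to save the changes.");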

After:
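(The same illustrative example using the general Exception type.)

throw new Exception("Unable to save the changes.");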

Final Tweaks

Once all of the compile errors were fixed we were nearly there. In order to enable the site to be run directly on Kestrel (rather than behind IIS Express) I updated the profiles in launchSettings.json.

Before:

After:
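(A sketch of the kind of profile added; the profile name and URL shown are illustrative.)

"AllReady": {
  "commandName": "Project",
  "launchBrowser": true,
  "launchUrl": "http://localhost:5000",
  "environmentVariables": {
    "ASPNETCORE_ENVIRONMENT": "Development"
  }
}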

This new profile basically tells VS to run the project directly (i.e. via dotnet.exe), launching it on port 5000. Within Visual Studio you can choose whether to start via IIS Express or directly into the project via dotnet.exe. New projects targeting .NET Core include these two profiles automatically, so updating it in our project felt sensible, to make the developer experience familiar.

Conclusion

With the above steps completed the project was now compiling, tests were running and the site worked as expected. Quite a few hours of work went into the process. The longest time was spent building the new service wrappers around the 3rd party REST APIs so that I could get rid of incompatible dependencies. The actual project and code changes weren’t too painful and it was just a case of working out the API changes so I knew how to modify our code.

It was an interesting experience and once we had it tested on a Mac and Linux during the NDC London code-a-thon it was very exciting to see it working. This has made it much easier for non-Windows developers to contribute to the allReady project and is just one of the great new benefits of working with .NET Core and ASP.NET Core.

Updating an ASP.NET Core Site to the December 2016 Release How to upgrade a site on the LTS 1.0.3 version of ASP.NET Core

I ran into an issue this week during what should have been a simple ASP.NET Core application update. I wanted to share my experience in case others run into similar problems. Also, I’m sure to be back here myself to remember this in the future!

On December 13th Microsoft released their second minor patch release for the LTS (Long Term Support) track of .NET Core. ASP.NET Core releases on two tracks depending on how cutting edge you want to be. LTS is the “safer” track, which will be supported and bug fixed during the support lifespan. The other track is FTS (Fast Track Support) which will be where new features appear. You can read more about this on the Microsoft Blog.

As you may be aware from reading my other posts, I’m contributing to an open source charity project called allReady. We’re currently using the LTS track packages and at the time of writing are still targeting the full .NET framework (as opposed to .NET Core). We had applied the last patch release 1.0.1 packages in September without any major problems so I was hoping for the same experience with this patch release.

The details for the release were made available in this Microsoft blog post. If you follow the links to the release notes you will see that the ASP.NET Core updates are considered version 1.0.3. This is where the versioning starts to get a little murky in my opinion. ASP.NET Core itself has a version number (now 1.0.3) which tracks general “releases” of the framework. However, the individual packages that actually make up .NET Core and ASP.NET Core also have version numbers and revisions. Those numbers don’t track with the main release version, so it starts to get a bit confusing. You won’t for example find a package for Microsoft.AspNetCore.Mvc at version 1.0.3. The latest for that package is 1.0.2.

I’ll now step through how I upgraded our project and then discuss the issue I experienced with the EF commands for entity framework. Before starting to update the project I made sure to install the latest version of the 1.0.3 SDK from the Microsoft website.

Update project.json

This is where the first pain point came for me. The blog posts and release notes didn’t specifically list all of the packages which had been updated, nor what the latest package versions were. So my initial solution was to turn to the VS Nuget Package Manager where I was hoping I could simply update all of the Microsoft packages to the latest versions. However, since the package manager lists the latest (non pre-release) versions, it was offering me the FTS 1.1.x versions. So a simple upgrade-all option was out of the question.

Next I went into the project.json manually, planning to update each package by hand and allowing autocomplete to give me the latest versions. However autocomplete didn’t always seem to pick up the latest version number for me automatically and I was worried about missing something. So I reverted back to the Nuget Package Manager and went one by one through the Microsoft packages. I used the install dropdown to select the newest LTS version 1.0.x for each one. This was slow and manual but at least meant I knew what options I had and could be explicit in choosing the latest version I wanted.

Here’s a rundown of the packages from our project.json that I needed to update and the version numbers they are now on (which should be the latest LTS releases). Note that our project.json may well differ from newly generated projects, so you may not have all of these packages and you may even have dependencies listed that we do not.

"Microsoft.EntityFrameworkCore.SqlServer": "1.0.2",
"Microsoft.EntityFrameworkCore": "1.0.2",
"Microsoft.ApplicationInsights.AspNetCore": "1.0.2",
"Microsoft.AspNetCore.Mvc": "1.0.2",
"Microsoft.AspNetCore.Mvc.TagHelpers": "1.0.2",
"Microsoft.AspNetCore.Authentication.Cookies": "1.0.1",
"Microsoft.AspNetCore.Authentication.Facebook": "1.0.1",
"Microsoft.AspNetCore.Authentication.Google": "1.0.1",
"Microsoft.AspNetCore.Authentication.MicrosoftAccount": "1.0.1",
"Microsoft.AspNetCore.Authentication.Twitter": "1.0.1",
"Microsoft.AspNetCore.Diagnostics": "1.0.1",
"Microsoft.AspNetCore.Diagnostics.EntityFrameworkCore": "1.0.1",
"Microsoft.AspNetCore.Identity.EntityFrameworkCore": "1.0.1",
"Microsoft.AspNetCore.Server.IISIntegration": "1.0.1",
"Microsoft.AspNetCore.Server.Kestrel": "1.0.2",
"Microsoft.AspNetCore.StaticFiles": "1.0.1",
"Microsoft.AspNetCore.Cors": "1.0.1",
"Microsoft.Extensions.Configuration.Abstractions": "1.0.1",
"Microsoft.Extensions.Configuration.Json": "1.0.1",
"Microsoft.Extensions.Configuration.UserSecrets": "1.0.1",
"Microsoft.Extensions.Logging": "1.0.1",
"Microsoft.Extensions.Logging.Console": "1.0.1",
"Microsoft.Extensions.DependencyInjection.Abstractions": "1.0.1",
"Microsoft.Extensions.Options.ConfigurationExtensions": "1.0.1",
"Microsoft.VisualStudio.Web.BrowserLink.Loader": "14.0.1",
"Microsoft.AspNetCore.Mvc.WebApiCompatShim": "1.0.2",
"Microsoft.Extensions.Logging.Debug": "1.0.1",

I also had to update the dependencies for some of these in our test library, so remember to check there too.

To complete the upgrade I also adjusted the SDK version in our solution’s global.json as follows…

"sdk": {
  "version": "1.0.0-preview2-003156",
  "runtime": "clr",
  "architecture": "x86"
},

With the above changes made I was able to build and run our site via Visual Studio. Great!

It wasn’t until a couple of days later that I hit an issue. While working on an issue, I’d updated our Entity Framework model classes and needed to build my next Entity Framework migration. I did this by running the usual command…

dotnet ef migrations add AddNotifications

After building the project I was faced with the following error:

Could not load file or assembly 'Microsoft.EntityFrameworkCore, Version=1.0.1.0, Culture=neutral, PublicKeyToken=adb9793829ddae60' or one of its dependencies. The located assembly's manifest definition does not match the assembly reference. (Exception from HRESULT: 0x80131040)

At this stage I tried a few things, none of which fixed the problem outright, although they may have contributed to the overall solution. Firstly I cleaned my solution and rebuilt, no joy. Then I wondered if Nuget had cached any incorrect versions of the package, so I cleared my local Nuget cache and tried restoring my project dependencies again. Still no joy! Finally I hopped onto the ASP.NET Core Slack channel and sought help there. Huge thanks to Chad Tolkien, who then suggested a manual deletion of my bin and obj folders within the project. I did that and rebuilt the solution. Success! Finally I was able to generate a migration using the EF CLI tooling. So it seems the clean and restore steps previously hadn’t cleaned everything they needed to.

I’d love to know if there’s a better way to manage these updates currently. I’m hoping that with the final tooling release and VS 2017 things will get easier. It would be useful, for example, to be able to choose which track you want to use within Nuget Package Manager. I’m not sure how that would be achieved exactly, but it would help distinguish the packages you really want, keeping you on the latest within your chosen support track. It would also be handy if the Microsoft blog posts about each release included specific details of each updated package and its latest version number. Having a quick reference when updating dependencies would have made my life a little easier. There are some release notes which hint at the main packages and their new version numbers, but they didn’t include all of the components that I ended up changing.


Running a Humanitarian Toolbox Code-A-Thon

I’ve been involved with the Humanitarian Toolbox allReady project for about one year now. I’ve really enjoyed being able to contribute to the cause, while learning plenty along the way. Contributing has made me a better programmer and exposed me to new libraries and techniques which I am already benefiting from in my day job and in other projects.

During my time as a contributor I’ve been aware of the code-a-thons which have taken place in the US, Canada and more recently Europe. They’ve always seemed like great events and I’m excited to be signed up for the 2-day code-a-thon in London next month. A few months ago I discussed my experience with allReady and my willingness to attend the London event with the management at my employer, Madgex Ltd. They were very receptive and were keen to support the cause. In fact, they are now funding 4 developers (including myself) to attend the full two days in London. Madgex are covering the transportation costs and hotel accommodation as well as losing 4 developers for 2 days, so this is a generous contribution.

In addition to the NDC event, we had discussions about arranging a code-a-thon at our office in Brighton, one evening after work. I was really keen to get something planned in and allow other developers to be able to contribute and experience allReady. With the concept green lighted by senior management, we set about making the idea a reality.

Planning

The starting point was to arrange a 30 minute presentation about Humanitarian Toolbox and allReady to gauge interest and demonstrate the application. I prepared a set of slides and a short demo which I presented in our board room one lunchtime. We had a good turnout of about 16 people in the room and it planted the seed with a few of the attendees, who talked to me afterwards about getting involved.

With the awareness raised, I followed up with some emails to the Madgex staff to further determine if people would be willing to join an event locally. I got some positive indications from enough people, so we picked a date and the invites went out to the staff. We planned our initial event for one evening after work. We agreed that 3 hours would be most practical given that people had already done a full day of work. Over the next few weeks I continued to promote the idea and started to confirm attendees for the evening. We had 7 or 8 people showing good interest as we closed in on the planned date. This was all supported by our developer lead at Madgex, Steve (great name!)

1 week to go

With a week to go we started to ramp up our planning activities. Steve (developer lead), helped with the physical space we needed and secured budget for some pizzas on the night (always a great lure for developers!).

I started combing through issues on GitHub that would be suitable for new contributors. We have a lot of work on-going with the project, pushing towards our v1 release but many of the issues that are left are quite complex and require a reasonable knowledge of the codebase. The balance was finding interesting and diverse issues, whilst ensuring that they wouldn’t require too much time up-front having to learn about the entire application. I had decided to try the new project feature inside GitHub to plan and organize the event. I created a few statuses and dropped issues into the “Not Started” category. The project feature is a basic Kanban board which allows cards to be dragged between statuses as the work progresses.

I also put together a pre-requisites list for the confirmed and tentative attendees, guiding them to ensure they had the latest tooling for .NET core installed, had forked and cloned the repository and were able to get it building. This is an important activity since it’s a little time consuming and we didn’t want to spend most of the time at the code-a-thon performing the setup work. As the big day approached we started to firm up numbers so we knew what cabling infrastructure we would need.

On the day

On the morning of the event, I did a final round up of the staff to confirm final attendees. After completing this we had about 6 locally able to attend as well as one of our developers from our North American office in Toronto, Canada. I started collecting GitHub usernames so that we could add people as contributors on the repository. This isn’t absolutely required, but aids in assigning issues to specific people. We also arranged to get people added to the Humanitarian Toolbox Slack channel so they could ask questions from our project experts and the Humanitarian Toolbox founders during the event.

At 4:30pm (with 30 mins to go) I moved into the meeting room we would be using for the evening. I wanted to get the webcam setup and ensure we had the cables for power and networking. Our fantastic systems technician Ricky had beaten me to it and already sorted the cabling requirements. We tested out the remote Skype link to Luke in Canada and made sure we had everything ready. A big thanks to Ricky for volunteering his time to provide some tech support and make sure we got up and running so smoothly.

Humanitarian Toolbox Code-A-Thon Sign
You must have a nice sign when running a code-a-thon!

At 5pm our team of developers migrated like birds in the winter, to the meeting room. We had 5 developers joining me locally, plus Luke video conferencing with us from across the pond. We’d decided to relocate our desktop PCs into this common area so that it would be easier to support each other during the event. While this required a little time up-front to disconnect and reconnect the PCs, it proved a good move as I was able to answer questions, share information and demo things very easily for everyone.

Preparing for the event
Developers getting setup for the event

By about 5:20pm we were in good shape, the computers were moved, developers were settled, pizza orders had been taken (priorities people). We started with a brief Google Hangouts standup with the Humanitarian Toolbox team. James and Tony introduced the project and its goals, and thanked everyone for taking the time at the end of their working day to stay on and code for the greater good. We also had one of the most regular project contributors, Mike, on the call, showing his support for the event. Being able to speak to the founders and project team made for a great start and really set the tone for a fun evening, supporting code to save lives.

It’s worth pointing out here a couple of things that James and Tony highlighted during the standup. Firstly, the initial use case for the application will be to aid the American Red Cross with the effort to install free smoke alarms in people’s homes. Already this initiative has helped save lives when disaster struck and a family’s home caught fire. Thanks to a smoke alarm installed by the Red Cross, the family were alerted to the fire and all able to evacuate safely. Also important to note is that for each hour of coding time spent on the application we can expect about 40 hours of volunteer time saved. That’s a huge return and really shows that even sparing a few hours of personal time can have a massive impact.

With the project introduced and the devs eager to get going we started the team off by finding issues people were keen to work on. Sarah, Roberto and Luke took on some unit tests, Patrick picked up a new feature requirement, while Chris dove in with some EF and migrations code for the first time. Mark one of our front end developers started looking at the homepage UI improvements. I floated around the room to answer questions, demonstrate the site functionality and to help with the GitHub flow. I really enjoyed watching new contributors getting up to speed with the code and being able to assist their learning as they progressed. It was good to be able to witness some of the common questions new contributors have so we can focus on lowering the barrier to entry in the future.

Developers working hard during the code-a-thon

There were a lot of new things for everyone to learn and they did a fantastic job of absorbing the information and becoming productive quickly. All of the developers were new to GitHub and open source, so there was learning to be done around the processes to ensure an up-to-date master branch, to manage rebasing and to prepare pull requests. Most of the team were also experiencing ASP.NET Core for the first time, which in itself has a lot of new concepts to learn. It was also the first exposure to Entity Framework for everyone, so that had its own learning curve too. This really highlights another benefit of contributing to the project for developers. It’s a great learning experience that puts a real-world, production-ready code base in the hands of developers.

During the 3 hours everyone knuckled down and other than a brief break to load up our plates with some pizza, we worked solidly. I had hoped to use the GitHub project feature to keep track of things, but we hit some issues with people not being able to move the cards themselves. While I was able to do this, it proved more of a hindrance as my time was better spent helping people around the room. In the end we abandoned that feature and in hindsight a post-it-note board might have been easier to manage. I still like the concept of using GitHub so that others not physically at the event can monitor progress, but we need to find a way to allow contributors to manage the cards as they work on issues. I suspect it might just be a permissions thing, so I’ll investigate it soon.

By the end of the evening we had submitted two pull requests which were reviewed and merged into the project before we left. We also had three other issues very close to being completed which will be finished off in the coming days and hopefully submitted soon. Given the setup time, huge learning curve and relatively short coding period of three hours, I’m very pleased with this achievement. Everyone was amazed when we realised that we had hit the 8pm finish already. Time flew, which is a great sign and, from feedback so far, people would have liked to have even longer with the code. Perhaps an all day event is on the cards for the future.

I really hope that everyone left feeling as positive and happy as I did. Certainly the sense I got was that everyone enjoyed learning some new things, getting to grips with the code and contributing to a good cause. I feel proud to be part of such a generous team of people who were able to join this code-a-thon after a full day of development at work first. Everyone should be very proud of what they managed to contribute. I’d love to run another session to continue the great start we’ve made and if people are willing, perhaps we can make it a regular thing or even look at a longer full day event.

From a personal perspective I enjoyed sharing what I’d learned during my time with the project and seeing other developers pick up the concepts for themselves. Prior to this, I’ve never been too keen on the idea of being a “teacher” and even presenting is not in my normal comfort zone, but I found a bit of a passion for instructing people. I’m a firm believer that sharing information helps our own understanding, as you are challenged to know enough to be able to articulate the concepts.

Code-a-Thon Team Photo
Code-a-Thon Team Photo (Sarah had escaped just prior to this being taken!)

Feedback

The feedback the day after has been extremely positive. I’m very happy to hear that people enjoyed themselves and had a positive and fun experience. It’s nice to spend time coding for fun, outside of the normal day-to-day work. Being able to put your skills to use towards such a positive concept is also very rewarding. From a quick survey afterwards, the team are keen to continue to contribute to the project and would like to take part in another code-a-thon in the future.

Thanks again to everyone who took part:

Our developers: Sarah, Patrick, Chris, Mark, Roberto and Luke
Tech support: Ricky
Planning and management: Steve K
Humanitarian Toolbox Support: James, Tony and Mike