Humanitarian Toolbox Codeathon .NET South East - Coding for the Greater Good

On Saturday (20th January) we held a special .NET South East event, spending the day ‘coding for the greater good’ on the Humanitarian Toolbox allReady project. We were very excited to be joined by Richard Campbell, one of the co-founders of Humanitarian Toolbox and co-host of the popular .NET Rocks podcast. A team of 19 volunteers joined us for the day, all of them first-time contributors to the project. For many, this was also their first time working on an open source project and with GitHub.

Richard Campbell - Humanitarian Toolbox co-founder
Richard Campbell Introducing the Humanitarian Toolbox to our Team.

Planning

The possibility of running a codeathon came together quite recently and once I was able to arrange with Richard for him to join us, I kicked the planning into high gear. The first problem when trying to host an event like this is usually finding a suitable venue. In my case this was not a problem as Madgex, my employer, are very supportive of these events and were immediately open to the idea of hosting it. We have three neighbouring meeting rooms which can be opened up into one larger space. We do this for our monthly .NET South East meetups too and it creates a very reasonable working area.

We picked the date around Richard’s travel plans for NDC London. Richard and Carl were recording their shows at the conference and Richard had a day spare following it, which was ideal for hosting the codeathon. I created a meetup event to allow people to begin reserving their place at the codeathon. Initially I set a limit of 14 spaces until I’d had a chance to fully assess the logistics of running the event. In the end we raised this to 18 spaces.

So we now had a date and a venue planned; the final thing I started to put into place was food. I wanted to ensure that our contributors didn’t run out of steam too early in the day and, as we all know, the best fuel for developers is pizza! Madgex kindly agreed to pay for the pizzas and also for some post-event catering to re-fuel before our regular meetup began.

Organising GitHub Issues

After the Christmas holidays I started to organise the issues in the project. allReady has evolved over the last few years into quite a large application. As a result, the complexity level for those starting out with the project has increased. When running a codeathon I am conscious that for some people everything may be new, so I wanted to make the barrier to entry as low as possible. Part of this includes having a good range of smaller issues that can be easily tackled by first-timers.

Fortunately, one of the things we had recently been focusing on is the need to improve the user experience (UX) of the application. The application is very much functional, but not always intuitive for a new user. As a result of this I spoke to a friend of mine, Chris, a UX designer working in Brighton. We spent a couple of hours reviewing the application and during that session he found a number of small but necessary changes we could make to give us some quick-win improvements to the UX.

After that meeting I came away with a lot of subtle changes that I was able to create issues for; perfect for the codeathon. These may not seem very exciting on the surface but they will really help improve the usability of the application. They were also small enough that they could easily be tackled in one day as someone new to the project finds their feet. I hoped these could be good gateway issues before people started to tackle more complex feature requirements.

In the week leading up to the event I spent time organising the final logistics, such as ensuring we would have sufficient power connections for all of the laptops. I was also able to increase the RSVP limit, as we wanted to make the most of the opportunity and allow as many members of our community as possible to contribute. I also reached out to some local charities to see if they would be interested in attending the event to see how the application might be able to help with their own requirements.

I am very thankful to some of my colleagues at Madgex who were also helping to make sure we had everything in place ready for the weekend. This was particularly useful as I was at the NDC London conference the three days prior to the event so couldn’t be there in person to perform any last minute preparations.

Codeathon

On the day of the codeathon I set off by train to Brighton. Once there, my first stop was to pick up Richard from his hotel. Dan, the organiser of .NET Oxford, was also staying at the same hotel and would be joining us for the event. It was great to finally meet Dan in person after many months of chatting via Twitter about running our user groups.

Together, the three of us set off for the Madgex office. One of my colleagues, Chris, was already there and had been helping out by letting the early arrivals into the building.

The next 45 minutes to an hour were spent organising the meeting room space. Again, my colleagues had been very helpful, having set up a few things the day before. We just needed to ensure we had suitable power points for the numerous devices we’d be running on the day. Ricky, our IT support technician at Madgex, had kindly come in on his weekend to help with the setup and to be on hand for any technical challenges.

The setup went very smoothly and soon we had most of our volunteers for the day settled in. Nearly everyone had been able to get the project cloned onto their laptops and tested prior to the event. This was extremely useful and meant that we were ready to start the event and begin coding with very little delay. At codeathons this really does help with productivity as we can focus on code, rather than machine preparation.

At about 9:20am we were all ready to begin. We started with a short introduction from Richard, who shared the history behind Humanitarian Toolbox and the goals of the allReady project we would be working on. Our volunteers listened with rapt attention and it was fantastic to have Richard with us to share his passion for the charity he co-founded.

Humanitarian Toolbox Codeathon introduction

After Richard finished his introduction, I spent a few minutes speaking about the technical stack and the basic flow for working on issues and submitting pull requests. I find it useful to review this flow before starting, especially when we have some first time open source contributors in the room. In hindsight I probably needed to mention a few other things to make it easier for people to get going which I’ll include in my slide deck for future events.

With the introductions complete, we commenced coding. Everyone was heads down very quickly, choosing issues to work on and producing code to address them. As expected, there were quite a few questions along the way and I hope I was able to help everyone get started without too much of a delay. There’s a lot to take on and learn at an event like this and I thank everyone for being very patient as I worked through the various questions.

Within an hour we had our first pull request to the project, and after that the floodgates opened and more streamed in. I did my best to keep up with the requests, performing code reviews before merging them into the project. Our AppVeyor account was struggling a little under the load. Each pull request to the project triggers a build on the AppVeyor system, which is very useful for verifying that the code included in the PR builds and that the tests all pass. As we had many PRs coming in concurrently, it did start to creak at the seams a little. It’s worth checking before future events whether we can increase the capacity of the account.

We had a brief lunchtime break for pizza at around 12:45pm, which was a good chance for people to break from their screens and chat about the progress so far. The morning had flown past at a great rate and already the team had achieved a great deal. The team were eager to get going again and before long were back at their laptops, working on the next set of pull requests!

Codeathon contributors in action

During the afternoon we had organised a series of User Experience (UX) user testing sessions. We are conscious that the project has been developed mostly by backend developers and as a result, while functional, the User Interface (UI) and UX of the project leave something to be desired. This is now a focus for the project: to see what we can change and improve to make it as easy to use as possible. A big thanks to my friend Chris for joining us to run the user testing sessions and to our willing test subjects, Jenny, Zen and my wife Rhiannon. It proved really useful and we now have a number of good suggestions from Chris for changes that we can make to resolve some difficulties identified during the testing.

The afternoon seemed to go even faster than the morning, with more PRs being made as people became more familiar with the project and the workflow. By 5pm we had made significant progress and it was time to wrap up the event. We concluded with some sandwiches provided from a local caterer which were very welcome after our busy afternoon.

Achievements

Including myself and Richard we had 17 people working on the project all day, plus a further 4 contributors who joined in the afternoon to help with the UX testing. This was a really great turnout and it meant we could get through a lot of work in a relatively short space of time. I can’t thank everyone enough for coming along and helping to make the day such a success. On reflection, the number we had was just about right. Any more would have been harder for me to support without delaying people.

In total we had 30 pull requests opened during the event. That’s thirty issues within the project being addressed and fixed which is pretty incredible. That may be close to a record for a single day Humanitarian Toolbox event! I was able to review and merge 18 of those during the day as well, which means that code is already active in the project. I will endeavour to get through the reviews of the remaining 12 PRs as soon as possible.

How can you help?

If you like the sound of this project and this style of event, we’d love for people to join us in contributing to allReady. For those that took part in the codeathon, we hope many will continue contributing to the project too. During the next couple of months there is a global Microsoft MVP (Most Valuable Professional) event running which is a virtual codeathon to get Microsoft MVPs from around the world contributing to the project. I am helping to organise and run that event and hope to see lots of activity on the project leading up to the Global MVP summit event in March.

This is a great time for newcomers to join the project as we have lots of experts on hand to support you and help you get started. The best place to start is to visit the GitHub project repository. From there you can view the open issues and jump in wherever you feel comfortable. If you need support, just let us know and we can help out.

I have started a series of videos explaining how you can get started with open source contributions and showing the technical steps. You can view these on my YouTube channel.

Summary

I’d like to wrap up with another huge thank you to everyone who helped in some way with organising this event and especially to those contributors who took part during the day. It was a great showing from the community and I hope everyone enjoyed the day as much as I did. The aim of the event was to introduce people to the project and get them past the learning curve for contributing to an open source project. A big thanks to Madgex for supporting the event with the use of their meeting rooms, as well as for providing some food to keep the troops fuelled up. In my opinion it was a huge success and I hope that we can arrange future events to continue the good work from everyone who contributed.

Working on Your First GitHub Issue Contributing to open source projects (Part 2)

In this post we’ll look at the steps you can take as a new contributor to open source in order to find and work on your first issue on GitHub. As with the first post (Forking and Cloning from GitHub), we’ll use the Humanitarian Toolbox allReady project as our example.

Choosing An Issue

The first step when contributing to a project is to visit the project site and find an issue you would like to work on and which you think is suitable for your skill set. From the project homepage on GitHub you can click the Issues tab to navigate to a list of the open issues.

GitHub navigating to the issues

As a first time contributor you will ideally want to find something small and relatively straightforward to use as a nice entry into the project, before trying to tackle larger more complex issues. Don’t try to dive in too deep on your first few contributions!

Many projects will label their issues and this is often a good way to filter the issues list down to ones that you might want to work on. On the allReady project we have a “good first issue” label and we also use the “up-for-grabs” label convention. “up-for-grabs” is a label projects can use to highlight available issues, usually ones which are good for new contributors. There is a site, up-for-grabs.net, which aggregates these and provides a way to find issues that you can contribute to across many projects on GitHub.

To view a list of the available labels you can click on the “Labels” button in the UI.

GitHub label filter button

From the labels view, you can scroll and find a label to work on.

From this list for allReady, “good first issue” sounds like a reasonable candidate for newcomers so we’ll click on that. This will result in a filtered view of issues, showing only those which have this label applied to them.

GitHub issues filtered by label

In this example, there is one issue now showing. We can click on that issue to view more detail and to determine if it’s something we’d like to work on.

GitHub Issue Details

The issue details page provides the full information about the issue. Usually the top comment will include details of the bug or the feature that is needed. Issues can be raised by anyone and as a result, the level of detail may not always be sufficient to understand the problem or requirement. On allReady, the project owners and core contributors try to view new issues and triage them. This involves verifying that the issue being reported is valid and, where necessary, providing some further details or guidance. If it’s not clear what is needed from an issue, you can leave a comment to ask questions about it. If you have an idea for a solution, but want to run it past the project team before starting work, you can leave a comment for that too. Issues are a good place for open discussions like this.

In this example, the requirement is quite clear and the solution should be very simple, so we’d like to have a go at working on this issue. It’s good practice and etiquette to leave a comment on any issue you plan to work on so that other people know it’s no longer available. Having two people spending their time on the same issue can be frustrating. It’s also worth pointing out that you should check for any comments indicating that someone else is already working on an issue before you pick it up yourself!

Leaving a comment on GitHub

Working on an Issue

When beginning work on an issue locally, the first thing you’ll need to do is create a branch for that piece of work. There are many Git UI tools that allow you to create a branch; for this demo we’ll use the command line. To create and checkout a branch you can use a single command.

git checkout -b <branchname>

This command allows us to specify a name for our new branch and immediately check it out so we can work on it. The naming convention you use for your branches is up to you. They will live in your clone and fork of the project. Bear in mind that once pushed to your public fork, and when you later submit a pull request, they will be public. I tend to use the issue number for my branch names. Personally I find this works quite well since the branch names are short and I can easily look up the issue details on GitHub to see the requirements. In this case, the issue we’ve selected is issue #2204 so I’ll use that for my new branch name.

git checkout -b 2204

Checkout a Git Branch

Once we are on our new branch we can make changes to the code which address the issue. I won’t show that here, but for this issue I opened up the markdown file and made the appropriate fix to remove the duplicated text using VS Code. You can use any tools you like at this stage to work on the code and files.

Once we have made the required changes that address a particular issue, we need to commit that code to our branch. We can use the “git status” command to view the changes since our last commit.

Git status modified

In our example above, only one file has changed. We then use the “git add” command to stage the changes for the next commit. As we have one modified file we can use the following command:

git add .

This stages any new or modified files from our working tree.
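If your working tree contains other, unrelated changes, you can instead stage just the file you modified by naming it explicitly (the path here is purely illustrative):

git add docs/example.md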

Next we will commit our staged changes using the “git commit” command. In this case we can use the following example:

git commit -m "Fixed duplicate text"

The -m option allows us to specify a message for our commit. It’s good practice to try and provide a succinct, but descriptive message for your commits. This helps a reviewer understand at a high level what was addressed in each commit.

At this point we have made and committed our changes locally on our development machine. Our final step is to push the changes up to our fork of the allReady repository on GitHub. We can do that using the “git push” command. We need to specify the name of the remote that we want to push to and the name of the branch we want to push up. In our example, the command looks like this:

git push origin 2204

This pushes our local 2204 branch to the origin remote, which is the fork of the allReady project which we created on GitHub.

Result of git push to GitHub

Summary

At this stage we have selected an issue to work on and begun that work locally on a new branch of the code. Once we completed our changes we were able to commit them and push them up to our fork of the code on GitHub. In the next post we’ll look at how we create a pull request in order to submit our change to the project for inclusion.

If you are a visual learner, then I have a video which covers the topics in this post available up on my YouTube channel.

Other posts in this series

Part 1 – Forking and Cloning from GitHub
Part 2 – This post

Forking and Cloning from GitHub Contributing to open source projects (Part 1)

In this post I’m going to share the initial steps that you will need to take in order to begin contributing to an open source project. I’ve covered this more generally in the past and wanted to provide a more focused post covering these steps for a new open source contributor. For this post we’ll use the Humanitarian Toolbox allReady project as our example.

Forking a Repository

When beginning to contribute to a project on GitHub, your first step is to fork the repository. This creates a copy of that repository under your own account, enabling you to begin working with the code. On a public repository you can view the code, but you cannot commit directly to it or create branches within it. This allows the project owners to control changes within their codebase. Changes are made via pull requests, which we’ll cover in a future post.

The forking step creates a copy to which you do have permission to commit and branch on and you can consider this your working copy of the project. You can make changes and commits here, safe in the knowledge that you will not affect the main repository.

The process of forking on GitHub is very simple. Make sure you are logged into your account on GitHub and then open the project you are interested in contributing to. In this example I’ll navigate to https://github.com/htbox/allready. Once inside the repository you will see a “Fork” button in the upper right-hand corner of the UI. Click on this button to begin the automatic forking process.

GitHub Fork Button

GitHub forking progress

Within a few seconds you’ll have your completed fork.

Cloning your Fork

Now that you have your fork, the next step is to clone the code down to your local development machine. Again, GitHub make this quite simple in their UI. To clone a repository you will need its URL. Clicking on the “Clone or download” button will open a UI showing the Git URL. A button to the right hand side of the URL allows you to copy it into your clipboard.

GitHub Clone or Download button

To perform the clone operation I’m going to demonstrate using the command line. There are various graphical tools you can use to work with Git repositories but for simple procedures, the command line is often fastest.

Open a command window and navigate to the path where you would like to clone the repository.

Use the following command to begin a clone:

git clone https://github.com/stevejgordon-demo/allReady.git

Here we’ve pasted in the URL of the fork that we just copied as the argument to the “git clone” command. You will see the output of the clone command as it clones the contents of your repository onto your local device.

Git clone progress

Once the command completes you will have a new folder containing the cloned repository. We can validate this by running the “dir” command.

Directory after cloning

Next we’ll need to navigate into the newly cloned folder. We’ll do that with the following command:

cd allReady

Registering an Upstream Remote

The final step is to set up a remote which points to the main repository. Remotes simply represent paths or URLs to other versions of your repository. In our case, as we cloned from our fork on GitHub, a default remote called origin will have been set up for us. This origin allows us to push and pull code from our forked repository hosted on GitHub. We can list the currently configured remotes on our machine using the “git remote” command.

Default git remote for origin

Pushing and pulling from your own fork is very useful and this will be how you will work with the project most often. However, when working on that code, you’ll want to be starting from the most recent version of the code from the main allReady repository. That code may have been updated and changed since you first made your fork. In order to get access to that latest code, we’ll setup a second remote which points to the main allReady repository. We will not have commit rights there, so we cannot push changes, however, we will be able to fetch the latest commits that have occurred.

To create a new remote we use the “git remote add” command, passing in a name for the new remote and the URL as arguments. First we need the git clone URL for the remote we want to add. We can get this by heading back to GitHub in our browser. From our fork we can use the convenience link to take us back to the main repository from which we forked it.

Switch to fork parent repository on GitHub

Once back in the allReady main project we can use the same steps as we previously used to access the clone URL via the “Clone or download” button and copy it to our clipboard.

Back in our command window; to add our remote for our allReady example we’ll use:

git remote add upstream https://github.com/HTBox/allReady.git

We could name the remote anything we like, but the convention is to use upstream, so we’ll follow that.

If we run the “git remote” command again we can verify that we now have two remotes.

After adding upstream remote
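As a quick illustration of how the upstream remote gets used (assuming the project’s default branch is master, as it was for allReady at the time), a typical sequence to bring your local copy up to date with the latest commits from the main repository is:

git fetch upstream
git checkout master
git merge upstream/master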

Summary

That’s as far as we’ll take it in this post. We have forked a copy of a repository, in this case allReady, and then cloned the code down to our local machine. We’ve done the work needed to set up an extra remote and we are now in a position to begin working on our first issue. We’ll cover that in a future post.

If you are a visual learner, then I have a video which covers the topics in this post available up on my YouTube channel.

Other posts in this series

Part 1 – This post
Part 2 – Working on Your First GitHub Issue

Upgrading to ASP.NET Core 2.0 My experience of upgrading a real-world solution from ASP.NET Core 1.0 to 2.0

On the 14th of August, Microsoft announced the release of .NET Core 2.0, ASP.NET Core 2.0 and EF Core 2.0, the next major releases of their open source, cross platform frameworks and libraries. This is a very exciting release and one which I hope marks the stabilisation of the framework and enables more developers and businesses to begin really looking at using .NET Core 2.0.

One of the big changes with .NET Core 2.0 is support for the new .NET Standard 2.0 specification (also part of the release announcements) which defines the API surface that platforms should conform to. This brings back around 20,000 APIs that were not originally included in .NET Core 1.x. This should mean that porting existing full .NET Framework applications over to Core may now be a more realistic prospect with much greater parity between the frameworks.

As I have discussed a few times on this blog, I contribute to a fantastic project called allReady, managed by the charity, Humanitarian Toolbox. This project started originally in the early beta days of .NET Core and ASP.NET Core and has evolved along with the framework through the various changes and refinements. With the release of 2.0 we were keen to upgrade the application to use .NET Core 2.0 and ASP.NET Core 2.0. I took it upon myself to attempt to upgrade allReady and to document the experience as I went. Hopefully I’ve found the right balance of detail to readability for this one which has been a bit of an epic!

Installing .NET Core 2.0

The first step you will need to complete is to install the new 2.0 SDK and if you use Visual Studio as your IDE of choice, you will also need to install the latest version of Visual Studio 15.3.x in order to work with .NET Core 2.0. These steps are well documented and pretty easy.

Upgrading the MVC Project

Upon loading the allReady web solution in Visual Studio 15.3 (aka 2017 update 3), my first focus was on upgrading the web project and getting it to run. I therefore unloaded the test project so that I wasn’t distracted by errors from that.

Many of the main steps that I followed as I upgraded the solution can be found outlined in the Microsoft .NET Core 1.0 to 2.0 migration guide.

Upgrading the Project File and Dependencies

The first job was to upgrade the project to target .NET Core 2.0 and to upgrade its dependencies to the ASP.NET Core 2.0 packages. To do this I right clicked my project and chose to edit the csproj file directly. With .NET Core projects we can now do this without having to unload the project first. .NET Core projects have a TargetFramework node, which in our case was set to netcoreapp1.0. To upgrade to the latest Target Framework Moniker (TFM) for .NET Core 2.0 I simply changed this to netcoreapp2.0.

Our project file also included a RuntimeFrameworkVersion property set to 1.0.4, which I removed to ensure that the project would use the latest available runtime. The migration guide also specifies that the PackageTargetFallback node and variable should be renamed to AssetTargetFallback, so I made that change.

The next big change was to begin using a new ASP.NET Core metapackage to define our dependencies. One of the drawbacks people have experienced when depending on the many individual NuGet packages which make up the ASP.NET Core platform is that managing the package versions can be a bit painful. Each package can have slightly different minor version numbers as they version independently. During a patch release of ASP.NET Core, for example, it can be hard to know which exact versions represent the latest of each of the packages as they don’t necessarily all update together.

The ASP.NET team are hoping to solve this with the availability of a new Microsoft.AspNetCore.All metapackage. This package contains dependencies to all of the common Microsoft.AspNetCore, Microsoft.EntityFrameworkCore and Microsoft.Extensions packages. You can now reference just this package to enable you to work with all of the ASP.NET Core and EF Core components.

One of the changes that enables this is the inclusion of a .NET Core runtime store which contains all of the required runtime packages. Since the packages are part of the runtime, your app won’t need to download many tens of dependencies from NuGet. The runtime store assets are also precompiled which helps with performance.

To make use of the new metapackage I first removed all existing ASP.NET Core related dependencies from my explicit project package references. I could then add in the following reference: <PackageReference Include="Microsoft.AspNetCore.All" Version="2.0.0" />

The final change in the project file was to update the versions for the .NET Core CLI tools specified in the DotNetCliToolReference nodes of our project file. In each case I moved them to the 2.0.0 version. With this completed I was able to save and close the project file, which triggered a package restore.

The overall effect was dramatic: the project file went from a long list of individual package references down to just a handful of lines.
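A simplified before-and-after sketch (trimmed to just the properties and packages discussed above, with illustrative versions rather than our exact file):

<Project Sdk="Microsoft.NET.Sdk.Web">
  <PropertyGroup>
    <TargetFramework>netcoreapp1.0</TargetFramework>
    <RuntimeFrameworkVersion>1.0.4</RuntimeFrameworkVersion>
    <PackageTargetFallback>$(PackageTargetFallback);portable-net45+win8+wp8+wpa81;</PackageTargetFallback>
  </PropertyGroup>
  <ItemGroup>
    <!-- many individual Microsoft.AspNetCore.* and Microsoft.EntityFrameworkCore.* package references -->
  </ItemGroup>
</Project>

and after the upgrade:

<Project Sdk="Microsoft.NET.Sdk.Web">
  <PropertyGroup>
    <TargetFramework>netcoreapp2.0</TargetFramework>
    <AssetTargetFallback>$(AssetTargetFallback);portable-net45+win8+wp8+wpa81;</AssetTargetFallback>
  </PropertyGroup>
  <ItemGroup>
    <PackageReference Include="Microsoft.AspNetCore.All" Version="2.0.0" />
  </ItemGroup>
</Project>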

The next thing I needed to do was to remove a global.json file that we had in our solution which was forcing the use of a specific SDK version; in our case 1.0.1. We want our project to use the latest SDK so I removed this file entirely. At this point I was in a position to attempt to compile the web project. As expected the build failed, listing a number of errors that I needed to work through fixing.

Identity / Authentication Changes

With ASP.NET Core 2.0, some of the biggest breaking changes occur in the Identity namespace. Microsoft have adjusted quite a few things regarding the Identity models and authentication. These changes did require some fixes and restructuring of our code to comply with the new model. Microsoft put together a specific migration document which is worth reviewing if you need to migrate Identity code.

The first change was to temporarily comment out some code we have as an extension to the IApplicationBuilder. I would use this code to ensure I had fully replicated the required setup before removing it. We used this code to conditionally “use” the various 3rd party login providers within our project; for example – UseFacebookAuthentication. One of the changes made with Identity in ASP.NET Core 2.0 is that third party login providers are now configured when registering the Authentication services and are no longer added as individual middleware components.

To account for this change I updated our ConfigureServices method to use the new AddAuthentication extension method on the IServiceCollection. This also includes extension methods on the returned AuthenticationBuilder which we can use to add and configure the additional authentication providers. We conditionally register our providers only if the application configuration includes the required App / Client Id for each provider. We do this with multiple, optional calls to the AddAuthentication method. I’ve checked and this is a safe approach to meet this requirement. At this point I could replicate the 3rd party authentication configuration that we had previously setup using the UseXYZAuthentication IApplicationBuilder extensions.
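As a sketch of the shape this took (the providers and configuration keys shown here are illustrative rather than the exact allReady code):

if (!string.IsNullOrEmpty(Configuration["Authentication:Facebook:AppId"]))
{
    services.AddAuthentication().AddFacebook(options =>
    {
        options.AppId = Configuration["Authentication:Facebook:AppId"];
        options.AppSecret = Configuration["Authentication:Facebook:AppSecret"];
    });
}

if (!string.IsNullOrEmpty(Configuration["Authentication:Twitter:ConsumerKey"]))
{
    services.AddAuthentication().AddTwitter(options =>
    {
        options.ConsumerKey = Configuration["Authentication:Twitter:ConsumerKey"];
        options.ConsumerSecret = Configuration["Authentication:Twitter:ConsumerSecret"];
    });
}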

With this complete, our Configure method could be updated to include the call to UseAuthentication which adds the authentication middleware. The commented code could now be removed.
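In Configure this amounts to a single line, added before the MVC middleware registration:

app.UseAuthentication();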

IdentityCookieOptions

Our account controller (based on the original ASP.NET Core MVC template) had a dependency on IOptions<IdentityCookieOptions> to get the ExternalCookieAuthenticationScheme name. This is now redundant in 2.0 as these are now available via constants and we can use that constant directly in our login action as per the authentication migration guide.

In 1.0 we set our AccessDeniedPath for the cookie as one of the options on the AddIdentity extension for the IServiceCollection. In 2.0 there is a specific extension for configuring the application cookie, so I moved this value into a ConfigureApplicationCookie call in ConfigureServices.
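A sketch of the before and after (the path shown is illustrative):

// 1.x - set via the Identity cookie options:
services.AddIdentity<ApplicationUser, IdentityRole>(options =>
{
    options.Cookies.ApplicationCookie.AccessDeniedPath = "/Home/AccessDenied";
});

// 2.0 - set via the dedicated application cookie extension:
services.ConfigureApplicationCookie(options =>
{
    options.AccessDeniedPath = "/Home/AccessDenied";
});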

The next change is that IdentityUser and IdentityRole have been moved from the Microsoft.AspNetCore.Identity.EntityFrameworkCore namespace to Microsoft.AspNetCore.Identity; so our using statements needed to be updated to reflect this change in any classes referencing either of these.

Next on my build error hit list was an error caused by Microsoft.AspNetCore.Authentication.FailureContext no longer being found. This has been renamed to RemoteFailureContext in ASP.NET Core 2.0 so I updated the affected code.

Another change as part of Identity 2.0 is that the Claims, Roles and Login navigation properties which we made use of have been removed from the base IdentityUser class. As a result I needed to add these back into our derived ApplicationUser class directly and update the OnModelCreating method inside our DbContext to define the correct foreign key relationships. This was as described in the migration guide for Authentication and Identity.

A small change I had to take care of is that GetExternalAuthenticationSchemes has been made async (and renamed accordingly), so I updated our code to call and await the GetExternalAuthenticationSchemesAsync method. The return type has also changed, so I needed to update one of our view models to take the resulting list of AuthenticationSchemes rather than AuthenticationDescriptions.
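The change in our account controller code was along these lines:

// 1.x:
// var schemes = _signInManager.GetExternalAuthenticationSchemes();

// 2.0 - async, and returning AuthenticationScheme instances:
var schemes = await _signInManager.GetExternalAuthenticationSchemesAsync();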

The final authentication change was the result of a new set of extension methods being added to HttpContext in Microsoft.AspNetCore.Authentication. These are intended to be used for calling SignOutAsync and similar methods which were previously available via the IAuthenticationManager.

In places where we called these I changed from

await httpContext.Authentication.ChallengeAsync();

to

await httpContext.ChallengeAsync();

Other Changes / Build Errors

With the authentication and Identity related changes completed I still had a few build errors to take care of before the application would compile.

In 1.1.0 Microsoft added an additional result type of AcceptedResult and a helper method on ControllerBase to easily return this result. Since we had been targeting 1.0.x we had not faced this change before. Our SmsResponseController was exposing a constant string called “Accepted” which then hid the new inherited member on ControllerBase. I renamed our member to avoid this naming conflict.

We also found that Microsoft.Net.Http.Headers.ContentDispositionHeaderValue.FileName had changed from being defined as a string to a StringSegment instead. This meant we had to update code which was calling Trim on it to first call ToString on the StringSegment value.

In one place we were using the previously available TaskCache.CompletedTask to get a cached instance of a completed Task. This has been removed now that Task.CompletedTask is available when targeting .NET Standard 2.0, so our code could switch to using Task.CompletedTask instead.

Other Migration Changes

There are some other structural changes we can and should make to an existing ASP.NET Core 1.x project to take advantage of the ASP.NET Core 2.0 conventions. The first of these was to update program.cs to use the newer CreateDefaultBuilder functionality. This method is designed to simplify the setup of an ASP.NET Core WebHost by defining some common defaults which we previously had to setup manually in the Startup class. It adds in Kestrel and IISIntegration for example. The IWebHost in 2.0 now also sets up configuration and logging, registering them with DI earlier in the application lifecycle. The defaults work for basic applications but depending on your requirements you may need to use the ConfigureLogging and ConfigureAppConfiguration methods to apply additional setup of these components.

Our program.cs changed from the manual WebHostBuilder setup to the new CreateDefaultBuilder pattern.
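Roughly speaking (simplified from the standard project templates rather than our exact code), the 1.x version looked like:

public static void Main(string[] args)
{
    var host = new WebHostBuilder()
        .UseKestrel()
        .UseContentRoot(Directory.GetCurrentDirectory())
        .UseIISIntegration()
        .UseStartup<Startup>()
        .Build();

    host.Run();
}

and the 2.0 version like:

public static void Main(string[] args)
{
    BuildWebHost(args).Run();
}

public static IWebHost BuildWebHost(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .UseStartup<Startup>()
        .Build();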

Now that Configuration and Logging are set up on the IWebHost, we no longer need to define the setup for those components in the Startup.cs file, so I was able to strip out some code from Startup.cs. In 1.x we used the constructor of Startup to build Configuration with a ConfigurationBuilder. This could be taken out entirely. Instead we can ask for an IConfiguration object in the constructor parameters, which will be satisfied by DI as it is now registered by default.
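A sketch of the slimmed-down Startup (the 1.x constructor shown is the template-style version, not necessarily our exact code):

// 1.x - configuration built manually in the constructor:
// public Startup(IHostingEnvironment env)
// {
//     var builder = new ConfigurationBuilder()
//         .SetBasePath(env.ContentRootPath)
//         .AddJsonFile("appsettings.json", optional: true, reloadOnChange: true)
//         .AddEnvironmentVariables();
//     Configuration = builder.Build();
// }

// 2.0 - configuration built by CreateDefaultBuilder and injected:
public Startup(IConfiguration configuration)
{
    Configuration = configuration;
}

public IConfiguration Configuration { get; }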

I was also able to remove the logging setup which used an ILoggerFactory in the Configure method in 1.x. This is now also set up earlier by the IWebHost, which feels like a better place for it. It also means we get more logging during the application bootstrapping. One change I made as a result of relying on the defaults for the logging setup was to rename our config.json file to appsettings.json, since that file is included by default by the new CreateDefaultBuilder and it’s better that our config file matches this convention.

Finally, ApplicationInsights is now injected into our application by Visual Studio and Azure using a hook that lets them place code into the header and body tags, so we no longer need to manually wire up the ApplicationInsights functionality. This meant I could strip the registration of the service and also remove some code in our razor layout which was adding the JavaScript for ApplicationInsights.

From our ConfigureServices method I removed:

services.AddApplicationInsightsTelemetry(Configuration);

From our _ViewImports.cshtml file I removed:

@inject Microsoft.ApplicationInsights.AspNetCore.JavaScriptSnippet JavaScriptSnippet

From the head section of our _Layout.cshtml file I removed:

@Html.Raw(JavaScriptSnippet.FullScript)

Partial Success!

At this point the code was able to compile but I hit some runtime errors when calling context.Database.Migrate in our Configure method:

“Both relationships between ‘CampaignContact.Contact’ and ‘Contact’ and between ‘CampaignContact’ and ‘Contact.CampaignContacts’ could use {‘ContactId’} as the foreign key. To resolve this configure the foreign key properties explicitly on at least one of the relationships.”

And

“Both relationships between ‘OrganizationContact.Contact’ and ‘Contact’ and between ‘OrganizationContact’ and ‘Contact.OrganizationContacts’ could use {‘ContactId’} as the foreign key. To resolve this configure the foreign key properties explicitly on at least one of the relationships.”

To solve these issues I updated our DbContext fluent configuration in OnModelCreating to explicitly define the relationships and foreign keys.
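Something along these lines, with the entity and navigation names taken from the error messages above:

modelBuilder.Entity<CampaignContact>()
    .HasOne(cc => cc.Contact)
    .WithMany(c => c.CampaignContacts)
    .HasForeignKey(cc => cc.ContactId);

modelBuilder.Entity<OrganizationContact>()
    .HasOne(oc => oc.Contact)
    .WithMany(c => c.OrganizationContacts)
    .HasForeignKey(oc => oc.ContactId);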

This got me a step further but I then hit the following error:

System.Data.SqlClient.SqlException: ‘The name “Unknown” is not permitted in this context. Valid expressions are constants, constant expressions, and (in some contexts) variables. Column names are not permitted.’

I tracked this down to a migration which sets a default value on an integer column using an enum. I found that I needed to explicitly cast the enum to int to make this migration work as expected.
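For illustration (the column, table and enum names here are hypothetical), the fix was of this form:

migrationBuilder.AddColumn<int>(
    name: "Status",
    table: "Requests",
    nullable: false,
    defaultValue: (int)RequestStatus.Unknown); // explicit cast, rather than passing the enum value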

Another step forward, but I still ran into issues. The next error I received was System.ObjectDisposedException: ‘Cannot access a disposed object.’ from Startup.cs when calling await SampleData.CreateAdminUser();

This was caused by a naughty use of async void for the Configure method. I removed the async keyword and used GetAwaiter().GetResult() instead since async void is not a good idea!
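In sketch form:

// Before - async void, so exceptions and timing were uncontrolled:
// public async void Configure(IApplicationBuilder app /* , ... */)
// {
//     ...
//     await SampleData.CreateAdminUser();
// }

// After - synchronous signature, blocking on the task instead:
public void Configure(IApplicationBuilder app /* , ... */)
{
    // ...
    SampleData.CreateAdminUser().GetAwaiter().GetResult();
}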

By this point I was really hoping I was getting somewhere. However, next I had some odd issues with our TagHelpers. We have two tag helpers used to aid some datetime functionality. The errors I was seeing seemed to be due to the TagHelpers being invoked for the head and body elements of the page. I’ve yet to spend enough time to track down what causes this, so I have applied workarounds for now.

On our TimeZoneNameTagHelper we were getting a null reference error when it was applied to the head tag. We expect a TimeZoneId to be supplied via an attribute, which was not present on the head tag, and so this resulted in a null TimeZoneId when we tried to use it to look up the time zone with FindSystemTimeZoneById. The temporary fix in this case was to check TimeZoneId for null and simply return if so.

With our TimeTagHelper I had to do an explicit check within the Process method to ensure the TagName matched “time”. This avoided it being applied for the head and body tags. I have created follow-up issues to try to understand this behaviour.
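The two workarounds, in sketch form (surrounding class code omitted):

// TimeZoneNameTagHelper - bail out when no TimeZoneId attribute was supplied:
public override void Process(TagHelperContext context, TagHelperOutput output)
{
    if (TimeZoneId == null)
    {
        return;
    }

    var timeZone = TimeZoneInfo.FindSystemTimeZoneById(TimeZoneId);
    // ...
}

// TimeTagHelper - only process actual <time> elements:
public override void Process(TagHelperContext context, TagHelperOutput output)
{
    if (!string.Equals(output.TagName, "time", StringComparison.OrdinalIgnoreCase))
    {
        return;
    }
    // ...
}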

With these changes in place, the code was finally compiling and running. Yay!

Upgrading the Test Project

With the main web application working I was ready to focus on upgrading the test project and making it compile (and seeing if the tests would pass!). The first step here was updating the project file to target netcoreapp2.0, as I had done with the web project. I also updated some of the dependencies to the latest stable versions. This was partially required in order to restore packages, and it also made sense to do it at this point since I already had a lot of changes to include. Some of our dependencies were still old pre-RTM packages. I also took the chance to clean out some unnecessary nodes in the project file.

With the packages restoring, attempting a build at this stage left me with 134 build errors! Some as a result of changes to Identity, some due to upgrading the dependencies and some due to code fixes made to the main project as a result of the migration.

The first broken tests I focused on were any that had broken due to the Identity changes. These were relatively quick to update, mostly fixing changed namespaces in our using statements.
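Based on the namespace move described earlier, in each affected file this was simply a case of swapping:

using Microsoft.AspNetCore.Identity.EntityFrameworkCore;

for

using Microsoft.AspNetCore.Identity;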

I then had a number of tests which were broken due to a change in Moq, the library we use for mocking objects in our tests. When setting up methods of mocked objects we could previously return a null quite simply by passing null as the parameter to ReturnsAsync. However, there is now another extension method also accepting a single parameter and the compiler cannot tell which one we intend to use. This now requires that we explicitly cast the null to the correct type, to indicate we are passing the expected value and not a delegate which returns the value. This resulted in me having to update 46 tests.
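For example (the mock and entity types here are illustrative):

// Previously this compiled fine:
// userManager.Setup(u => u.FindByIdAsync(It.IsAny<string>())).ReturnsAsync(null);

// Now the null must be cast so the correct overload is selected:
userManager.Setup(u => u.FindByIdAsync(It.IsAny<string>()))
    .ReturnsAsync((ApplicationUser)null);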

The remainder of the build failures were mostly caused by a change to the number of parameters for the AccountController constructor, so our tests which were creating one as the subject under test also needed updating to pass the correct number of parameters.

At this point I had compiling test code and I was then able to run my tests! Oh, 101 failed tests!

When I looked a little deeper I noticed these were nearly all tests which used our InMemoryContextTest abstract base class, which includes a registered instance of an InMemory DbContext on an IServiceProvider. With a bit of trial and error I realised that my queries were not returning any results, where previously they had in 1.0. When I experimented I found that it was in cases where our query called Include to eager load some of the related entities. However, our seed data which populated the InMemory database for each test had not set those related entities. The InMemory provider does not enforce referential integrity and so there are no errors thrown when saving objects with missing required navigational properties.

In 1.x the query behaviour worked under this scenario but in 2.0 something had changed. I raised an issue about this one and the EF team responded quickly with… “The reason for the behaviour change is that now include is using navigation rewrite logic to construct the queries (whereas before we manually crafted include statements). Navigation rewrite produces INNER JOIN pattern for required relationships and LEFT JOIN pattern for optional. Before we would always hand-craft LEFT JOIN pattern, regardless of the relationship requiredness between child and parent.”

To correct for this I needed to ensure our test setups added the required related entities so that they would be returned from the queries as expected. In our actual code, running against the SQL Server provider, this is not an issue since the database enforces referential integrity on save.

With the tests fixed up I was finally at a point where everything compiled, ran and the tests passed. I considered this a good stopping point and submitted my PR to get the allReady project to 2.0, which was promptly merged in.

Summary

For the most part the migration documentation provided by Microsoft was very good and covered many of the things I actually experienced. In a few cases I found little extra things I needed to solve. The issues around the tests and the EF changes probably took longest to isolate and then fix up. It’s great to have been able to help move the project forward and get it to 2.0 very soon after release. It’s a great reference project for developers wanting to view (and hopefully work on) a real-world ASP.NET Core 2.0 solution. Hopefully my experience will help others during their migrations.

Migrating from project.json to csproj using Visual Studio 2017 Moving a real world ASP.NET Core application using VS2015 project.json to VS2017 and csproj

This past weekend I spent a few hours working on migrating the Humanitarian Toolbox allReady project over to the new csproj based Visual Studio 2017 solution format. I’ve written a few times about the plans to retire the project.json file introduced by ASP.NET Core. If you want some background you can read those posts first (don’t worry, I’ll wait)…

The summary is that in Visual Studio 2017 and the final ASP.NET Core SDK tooling, the project.json file was removed in favour of a revised csproj file. This has left some people confused and wondering why project.json is missing in Visual Studio 2017. The decision was mostly made to support customers reliant on using MsBuild and to support migrating larger projects to .NET Core.

Moving from project.json to csproj

In this post I wanted to cover the steps I took to migrate a real ASP.NET Core application to Visual Studio 2017. I’ve recorded the steps that I took and the issues I faced, along with any solutions I was able to find.

TL;DR: It wasn’t as painful as I’d expected it might be, but it also wasn’t entirely automated. There were a few issues along the way which required some intervention on my part to finally get everything building correctly.

My first trial attempt at this process was actually about 2 weeks ago. At that time I was using the first RTM version of VS 2017 and I started by simply opening our allReady solution file with Visual Studio 2017. 

Visual Studio presents a dialog showing that any project.json projects will be migrated.

Visual Studio 2017 project.json migration dialog

After accepting that message VS will do its thing. Under the covers it’s calling the dotnet migrate command, which handles the conversion for you. I decided to use Visual Studio 2017 for the migration, rather than the command line, as I expect that’s the approach most people will take.
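If you’d rather run the conversion yourself, the same tool can be invoked directly from a command prompt in the folder containing your solution or project:

dotnet migrate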

During migration, which took only a few seconds for me, you’ll see a progress window from Visual Studio.

ASP.NET Core project.json migration progress

Once the migration is completed you are shown a migration report.

project.json to csproj migration report

This initially led me to believe that things had gone pretty well. In fact, when I tried a restore and build things were not quite so positive. I was shown a staggering 982 errors and 3 warnings. It was late and I was tired, so I decided to abandon attempt one at that point! That was a problem for my future self!

Attempt Two

This weekend I was finally in a position to have another try at the migration. I took the latest version of our code and before starting, updated Visual Studio 2017 because I’d seen that it had some updates pending.

I repeated the initial steps as before and this time, although the result was still not looking great, it was a fair improvement on attempt one. Lesson 1: always do migrations with the most up-to-date version of the IDE! This time I was at 199 errors and 2 warnings.

project.json Migration Errors

The errors were mostly relating to dependencies so it looked to be a restore issue. I tried cleaning the solution, restarting Visual Studio and even manually deleting the bin and obj folders. Each time I was hit with a wall of red errors when trying to build.

At this point I jumped onto Google and found a suggestion to clear the local nuget cache. I navigated to %USERPROFILE%/.nuget and deleted the contents.
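As an alternative to deleting the folder contents by hand, the .NET CLI can clear the NuGet caches for you:

dotnet nuget locals all --clear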

Upon doing that, things were starting to look a little better. The errors cleared but I now had some warnings to deal with and the site would not run.

The second warning was a little strange.

The referenced project ‘..\..\Backup\Web-App\AllReady\AllReady.csproj’ does not exist.

The unit test project was referencing the web project but using the backup folder path. The backup folder is created during migration to hold the original project.json and xproj files in case you need to get back to a prior state. That folder of course didn’t include a csproj file. I removed the reference and re-added it, pointing it to the correct folder.

I also took the chance to move the backup folder so that it was outside of my main solution structure, just in case it was causing or masking any other issues.

Attempting to build at this stage reduced my errors but I still had 29 warnings. To be sure that Visual Studio 2017 wasn’t partly to blame, I also tried using the command line dotnet restore commands directly. However, I was getting warnings there also.

Inside Visual Studio the warning looked like this:

Detected package downgrade: Microsoft.Extensions.Configuration.Abstractions from 1.1.0 to 1.0.2

Trying to run the application at this stage resulted in the following exception:

System.IO.FileLoadException: ‘Could not load file or assembly ‘Microsoft.Extensions.Configuration.Abstractions, Version 1.1.0.0 … The located assembly’s manifest definition does not match the assembly reference.

For some reason the migration had chosen version 1.1.0 for the Microsoft.VisualStudio.Web.BrowserLink dependency when it created the csproj file.

<PackageReference Include="Microsoft.VisualStudio.Web.BrowserLink" Version="1.1.0" />

The solution was to change it to 1.0.1 since allReady is currently targeting the LTS stream of ASP.NET Core. All of the other dependencies were registered with their LTS versions. This rogue 1.1.x dependency was therefore causing version conflicts.

<PackageReference Include="Microsoft.VisualStudio.Web.BrowserLink" Version="1.0.1" />

At this point, building the solution was getting a bit further. I now had a single warning left for the web project. This warning stated that AddUserSecrets(IConfigurationBuilder) is now an obsolete method.

‘ConfigurationExtensions.AddUserSecrets(IConfigurationBuilder)’ is obsolete. ‘This method is obsolete and will be removed in a future version. The recommended alternative is .AddUserSecrets(string userSecretsId) or .AddUserSecrets<TStartup>().’

The warning explains that we are now expected to pass the user secrets ID as a string to the method, or to use .AddUserSecrets<TStartup>, which is syntactic sugar over AddUserSecrets(this IConfigurationBuilder configuration, Assembly assembly). Previously the storage location for the user secrets ID was the project.json file; it has now moved to an assembly attribute. The previous overload is not guaranteed to return the correct Assembly, so it could produce issues. You can read a fuller explanation from the team on the GitHub repo.

In our case I changed the appropriate line in Startup.cs from

builder.AddUserSecrets();

to

builder.AddUserSecrets("aspnet5-AllReady-468aac76-4430-43e6-848e-f4a3b90d61d0");
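The ID itself now typically lives in the csproj file as a UserSecretsId property, which the tooling turns into the assembly-level attribute at build time:

<PropertyGroup>
  <UserSecretsId>aspnet5-AllReady-468aac76-4430-43e6-848e-f4a3b90d61d0</UserSecretsId>
</PropertyGroup>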

Test Project Changes

At this point the web project was building but there were issues inside the test project:

Test project warning after project.json migration

Found conflicts between different versions of the same dependent assembly that could not be resolved. These reference conflicts are listed in the build log when log verbosity is set to detailed.

Some of the dependencies we had been using had newer versions so I took a guess that perhaps I needed to update the Xunit / test related ones. I changed these three dependencies:

<PackageReference Include="Microsoft.NET.Test.Sdk" Version="15.0.0-preview-20170106-08" />
<PackageReference Include="xunit.runner.visualstudio" Version="2.2.0-beta5-build1225" />
<PackageReference Include="xunit" Version="2.2.0-beta5-build3474" />

to

<PackageReference Include="Microsoft.NET.Test.Sdk" Version="15.0.0" />
<PackageReference Include="xunit.runner.visualstudio" Version="2.2.0" />
<PackageReference Include="xunit" Version="2.2.0" />

Finally the entire solution was building, tests were running and I could start the website locally.

Initial thoughts on csproj and Visual Studio 2017

Generally, everything seems to work once inside Visual Studio 2017. The migration process was a little more effort than I would have liked though.

No auto-complete

In Visual Studio 2015 we had version auto-complete for dependencies from NuGet when editing project.json. We’ve lost this in Visual Studio 2017, which is a shame. It now pretty much forces me into the NuGet Package Manager, which I find slower. I understand that auto-complete might come back at some point, but it’s a shame we’re lacking it at release.

Less easy to scan

Personally, I still prefer the project.json format for readability. There’s still a little more noise in the XML than we had inside the JSON version. That said, the work done to the revised csproj format is great and it’s a vast improvement on the csproj of the past.

Summing Up

Overall the process wasn’t too harrowing. Visual Studio and dotnet migrate handled most of the work for me. It was a shame that I had to resort to Google on a few occasions for manual solutions to problems that I’m sure should have been addressed by the tooling. The restore of the dependencies not working without clearing the cache was an unexpected problem.

But with the work of migration behind me I’m ready to move on. I’ve reached the acceptance stage of my project.json grieving process!

Anyone new to ASP.NET Core, using it for the first time in Visual Studio 2017 will probably look at older blog posts and wonder what we were all moaning about. The csproj is a great improvement on what we had in the past, so if you’ve never seen project.json, there’s nothing to miss and only improvements to enjoy.