Things I’ve Learnt This Week (19th February)

Week 4 of my series, sharing things I’ve learnt, read, watched and listened to, in the pursuit of expanding my knowledge about software development. A slightly shorter set of links this week; things have been fairly busy, so I’ve had less time to keep up with all of the fantastic content.

Things I’ve Learned

That I enjoy technical speaking! This week, I delivered a talk to a room of developers about ASP.NET Core. It’s a popular topic, which I think explains the interest and high attendance. I’m quite shy and an introvert by nature, and I generally hate any form of public speaking, so presenting a talk is way out of my comfort zone. However, it’s something I’ve been keen to work on; it’s a great way to share information and it helps me learn as well.

I had the usual nerves leading up to the talk, but I’d practised and refined it over a number of rehearsals, to the point that I was confident in what I was delivering. The talk also included a 30 minute live demo, which thankfully worked perfectly thanks to many practice runs. As soon as I got going I started to feel better, and by the end I was on a bit of a natural high. I’ve had some very nice feedback from some of the attendees, and I’m encouraged to work on future talks as well as hopefully sharing this one with a wider audience in the future.

If I were asked for advice by other developers looking to prepare a technical talk, it would be practice, practice, practice! Having gone through my slides, rehearsing the full talk at least 10 times, I knew what I was going to say, and that allowed me to present confidently.

Things I’ve Read

Things I’ve Listened To

Things I’ve Watched

Read More

Things I’ve Learnt This Week (12th February)

Week 3 of my series, sharing things I’ve learnt, read, watched and listened to, in the pursuit of expanding my knowledge about software development.

Things I’ve Learned

Dependency Injection in Middleware

An important thing to consider when building custom middleware in ASP.NET Core: if you are injecting a dependency into the constructor of your middleware, it may have unwanted effects if that dependency is expected to be a scoped/transient resource. The middleware is constructed once for the app lifetime during startup and is not recreated for every request. If you want a transient/scoped dependency, inject it as a parameter of the Invoke method, which is called once per request and will therefore give you a correctly scoped instance of your dependency every time.
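As a minimal sketch of the pattern (the middleware and the IEmailSender dependency here are hypothetical, purely for illustration):

public class EmailAuditMiddleware
{
    private readonly RequestDelegate _next;

    // The constructor runs once, at application startup,
    // so only singleton-safe dependencies belong here.
    public EmailAuditMiddleware(RequestDelegate next)
    {
        _next = next;
    }

    // Invoke runs once per request; its parameters are resolved from the
    // request's service scope, so scoped/transient registrations behave correctly.
    public async Task Invoke(HttpContext context, IEmailSender emailSender)
    {
        await emailSender.SendAsync("audit@example.com", context.Request.Path);
        await _next(context);
    }
}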

Dependency Injection Lifetime Fun

A little long for this weekly post, so I wrote up a dedicated post about some “fun” I had tracking down a range of errors we were seeing in some code this week. Spoiler alert… scoped lifetime dependencies inside a singleton lifetime registration = not good!

I also published a post this week entitled “Migrating from .NET Framework to .NET Core” which describes the process I followed to move an ASP.NET Core application from the full .NET framework onto .NET Core, enabling cross-platform development.

Things I’ve Read

In no particular order here’s some of the blogs and posts that I’ve read this week.

Things I’ve Listened To

Things I’ve Watched

Tools

  • ChangemakerStudios – Papercut – A useful local SMTP receiver for testing email in your code on localhost. This week I was adding some email functionality to one of our services and I wanted to test the flow locally in development. This tool was really easy to use and worked perfectly for my requirements.

Read More

Migrating from .NET Framework to .NET Core The journey of re-targeting an ASP.NET Core application onto .NET Core

What better way to start the new year than with some coding? In my case I’d found a bit of time during the end of my Christmas vacation to tackle an issue on allReady I’d been wanting to work on for a while. Before getting into the issue, if you want to read more about what allReady is you can read my earlier blog post where I cover contributing to it in greater detail.

allReady was first created during the early betas of ASP.NET Core (at which point it was known as ASP.NET 5). At that time, due to some dependencies which did not yet support .NET Core, the application was built on top of the full .NET Framework 4.6.

The first thing to discuss briefly is why and how we can run ASP.NET Core on the full traditional .NET framework. While it might be reasonable to assume that ASP.NET Core naturally needs to run on the new .NET Core framework, remember that .NET Core ultimately includes a subset of the larger full .NET framework API. Because of this, it is perfectly possible to run ASP.NET Core on the original 4.x version of the .NET framework. This is an advantage for those who want to make use of the new features and optimisations within ASP.NET Core, while still utilising the libraries they know and love. It is a perfectly reasonable way to run ASP.NET Core, but with an important limitation: by targeting the .NET framework you are not able to develop or host the ASP.NET Core application outside of Windows.

For the allReady project this had started to become a bit of a limitation. We had Mac users who were keen to contribute to the project, but were unable to do so without installing and running a Windows VM on their device. As we were getting to the end of some complex requirements for allReady’s v1 milestone, I took the opportunity to spend some time looking at retargeting the application to run on .NET Core. This turned out to be a bit more involved than I had first anticipated. However, we have succeeded, and the code base has been proven to build and run on both Mac and Linux devices.

Updating the Solution

The first part of the process was to update the project.json file inside the web solution. The main update here was changing the framework moniker from net46 to netcoreapp1.0.

project.json before:
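(A representative sketch of the relevant section only; the real file contained far more than this.)

"frameworks": {
  "net46": { }
}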

project.json after:
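(Again a sketch; the imports list shown is the typical default for netcoreapp1.0 projects of that era.)

"frameworks": {
  "netcoreapp1.0": {
    "imports": [
      "dotnet5.6",
      "portable-net45+win8"
    ]
  }
}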

We had decided as a team to continue to track the .NET Core 1.0.x LTS support stream. In short, this is a stable version of the framework, under full Microsoft support. New minor patch releases appear to fix major bugs or security issues, but otherwise the changes are expected to be quite limited. Our project was already targeting the latest 1.0.3 SDK and the latest 1.0.x libraries for each of the components.

You’ll see above that as well as changing the framework moniker I’ve added an imports section. This was required in our case as we have dependencies on some libraries which don’t yet specify a dotnet Target Framework Moniker or .NET Standard version. It’s essentially there for backwards compatibility until the ecosystem settles down.

Handling Dependencies

Upon making the required changes to project.json, it became apparent that some of the packages we depend on could no longer be restored. The first thing I did was to use the NuGet Package Manager to update all of the dependencies to their latest versions to see if any of those had been updated to support .NET Core. This helped in some cases but still left me with a few that were not compatible. For the time being I commented those out, as well as commenting out any code using those dependencies. In our case the main libraries we had issues with were the following…

  • ZXing, which was being used in one place to generate a QR code image for use within the allReady phone application. This had its own dependency on System.Drawing, which is not part of .NET Core.
  • LinqToTwitter, which was being used in one area of the application to query for a user’s information after a sign-in via Twitter.
  • GeoCoding.net, which was used in a few places to take an address and geocode it to latitude and longitude coordinates.

With those taken care of for now (if only commenting out code was considered “fixing”), the last remaining issue for the web project was a dependency on a separate library project in our solution which contains some shared models. This library is used by both our web application and also in some Azure web jobs we have defined. As a result it needed to support both .NET Core and the .NET Framework. The solution here was to convert it to a .NET standard library. In my case I chose the lowest standard version I could find which gave me the API surface I needed, while supporting my dependent projects. This was the .NET Standard 1.3 version. The project.json for this library was changed as follows.

Before:
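(A sketch of the frameworks section as it stood, targeting the full framework.)

"frameworks": {
  "net46": { }
}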

After:
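(A sketch; a netstandard1.3 library also references the NETStandard.Library metapackage.)

"dependencies": {
  "NETStandard.Library": "1.6.0"
},
"frameworks": {
  "netstandard1.3": { }
}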

With this done, I also had to update our test library project.json in a similar way in order to support .NET Core by changing the frameworks section and updating to the latest versions of our dependencies as with the main web project. The changes in this file were very similar to what I’ve already shown, so I won’t repeat the code here.

Wrapping Missing Dependencies

With the package restore now succeeding for the web project, test project and our .NET standard project my next goal was to find solutions to the libraries we could no longer use.

ZXing was an easy decision: the phone application project is currently a bit stale, and it was decided that we could simply remove the QRCode endpoint and this dependency as a result.

When I reviewed the use of LinqToTwitter, it was only really abstracting one API call to Twitter behind a more fluent LINQ syntax. We were using it to request a Twitter user’s profile information. The option I chose here was to write our own small service to wrap the call to the Twitter REST API using HttpClient. There are some hoops to jump through in order to establish the authentication with Twitter, but this ultimately didn’t prove too difficult. I won’t go into the exact details here, since this post is intended to cover the general “migration” process.
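To give a flavour of the shape of such a wrapper (a sketch only, not our actual service; the class name is hypothetical and acquiring the application-only bearer token from Twitter’s OAuth2 token endpoint is omitted):

public class TwitterProfileService
{
    private readonly HttpClient _httpClient;
    private readonly string _bearerToken; // obtained separately via Twitter's OAuth2 token endpoint

    public TwitterProfileService(HttpClient httpClient, string bearerToken)
    {
        _httpClient = httpClient;
        _bearerToken = bearerToken;
    }

    // Calls the Twitter REST API directly and returns the raw profile JSON
    public async Task<string> GetUserProfileJsonAsync(string screenName)
    {
        var request = new HttpRequestMessage(HttpMethod.Get,
            "https://api.twitter.com/1.1/users/show.json?screen_name=" + Uri.EscapeDataString(screenName));
        request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", _bearerToken);

        var response = await _httpClient.SendAsync(request);
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsStringAsync();
    }
}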

The final dependent library that wasn’t able to support .NET Core was GeoCoding.net. We were using this library to perform address to coordinate lookups via Google. On review of the usages, it was again clear that we had a whole dependency for what amounted to a single call to a Google endpoint. We were already calling another part of the Google REST API directly from a custom service class, so again, the decision was to wrap up the geocoding functionality in our own code, calling the API directly. Again, the exact details are beyond my scope here, but I may cover this in a future blog post if there’s interest.

Handling API Differences

With the problem packages removed I tried to compile the solution. I was immediately hit by a wall of errors. This was initially a little daunting, but after looking through them I could see that many were quite similar and due to changes to the .NET Core API surface.

Reflection

The first of these was where we’d been using reflection. We have a couple of places in the web app that rely on reflection for wiring up our IoC container, and also in some of the test classes. In .NET Core, the Type returned by Object.GetType() no longer exposes the full type details; much of that reflection surface has moved to the new TypeInfo type, which you access via the GetTypeInfo() extension method. TypeInfo is similar to what Type used to give us.

Before:
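(A sketch rather than the project’s exact code; Startup is just a convenient type to reflect over.)

var assembly = typeof(Startup).Assembly;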

After:
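(The same line, with the GetTypeInfo call added.)

var assembly = typeof(Startup).GetTypeInfo().Assembly;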

In this example you can see that the only change is adding in the GetTypeInfo call.

Date Formatting

We also had some code calling ToLongDateString on dates. This is no longer available, so I switched to formatting the date using the ToString method instead.

Before:
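(Illustrative; the date property is hypothetical.)

var displayDate = campaign.EndDate.ToLongDateString();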

After:
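(The equivalent using a standard format string; "D" is the long date pattern.)

var displayDate = campaign.EndDate.ToString("D");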

Exceptions

In a couple of places we were throwing ApplicationException which also seems to have been removed. I switched to a general Exception in this case.

Before:
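(Illustrative; the message text is hypothetical.)

throw new ApplicationException("Unable to save the campaign.");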

After:
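(The same line, using the base Exception type.)

throw new Exception("Unable to save the campaign.");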

Final Tweaks

Once all of the compile errors were fixed we were nearly there. To enable the site to run directly on Kestrel (rather than behind IIS), I updated the profiles in launchSettings.json.

Before:
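(A sketch of the sort of profiles section we had, with only an IIS Express profile.)

"profiles": {
  "IIS Express": {
    "commandName": "IISExpress",
    "launchBrowser": true,
    "environmentVariables": {
      "ASPNETCORE_ENVIRONMENT": "Development"
    }
  }
}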

After:
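(Again a sketch; the project profile name is illustrative.)

"profiles": {
  "IIS Express": {
    "commandName": "IISExpress",
    "launchBrowser": true,
    "environmentVariables": {
      "ASPNETCORE_ENVIRONMENT": "Development"
    }
  },
  "AllReady.Web": {
    "commandName": "Project",
    "launchBrowser": true,
    "launchUrl": "http://localhost:5000",
    "environmentVariables": {
      "ASPNETCORE_ENVIRONMENT": "Development"
    }
  }
}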

This new profile basically tells VS to run the project (i.e. via dotnet.exe), launching it on port 5000. Within Visual Studio you can choose whether to start via IIS Express or directly into the project via dotnet.exe. New projects targeting .NET Core include these two profiles automatically, so updating it in our project felt sensible, to keep the developer experience familiar.

Conclusion

With the above steps completed the project was now compiling, tests were running and the site worked as expected. Quite a few hours of work went into the process. The longest time was spent building the new service wrappers around the 3rd party REST APIs so that I could get rid of incompatible dependencies. The actual project and code changes weren’t too painful and it was just a case of working out the API changes so I knew how to modify our code.

It was an interesting experience, and once we had tested it on Mac and Linux during the NDC London code-a-thon it was very exciting to see it working. This has made it much easier for non-Windows developers to contribute to the allReady project and is just one of the great new benefits of working with .NET Core and ASP.NET Core.

Read More

A Reminder to Take Care when Registering Dependencies

I looked into a “fun” little problem yesterday where we were seeing occasional errors in some of our ASP.NET Core code which calls down into the ASP.NET Core Identity UserManager. We were getting a range of NullReferenceExceptions and ObjectDisposedExceptions, as well as various exceptions from Npgsql.NpgsqlConnection (we use Postgres rather than SQL Server in this project) stating things such as “Connection already open”.

The issue was presenting itself within some authentication and authorisation code we have which wraps and extends the ASP.NET Core Identity UserManager and SignInManager functionality. We use ASP.NET Identity over a Postgres database for our user store, but include some application-specific functionality with our own code. We have our own AuthenticationManager and UserManager classes, both of which take dependencies on the underlying Microsoft.AspNetCore.Identity classes.

These originally got registered in the ConfigureServices method of the Startup class as follows:

services.AddSingleton<IAuthenticationManager, AuthenticationManager>();
services.AddSingleton<IUserManager, UserManager>();

The constructor for our AuthenticationManager looks a bit like this:

public AuthenticationManager(UserManager<ApplicationUser> userManager, SignInManager<ApplicationUser> signInManager)
{
   // setup our authentication manager here
}

Do you see the problem?

The issue here is the singleton registration. While our classes themselves have no state and could be shared between requests, the dependencies on Microsoft.AspNetCore.Identity.UserManager<T> and Microsoft.AspNetCore.Identity.SignInManager<T> have to be considered.

If we take a look at where these are registered within the Microsoft.AspNetCore.Identity source we can see the following:

services.TryAddScoped<UserManager<TUser>, UserManager<TUser>>();
services.TryAddScoped<SignInManager<TUser>, SignInManager<TUser>>();

They are added using the scoped lifetime which means they expect to be created once per request. They themselves depend on an IUserStore which is registered with the scoped lifetime as well.

As a result, the singleton registration of our AuthenticationManager was holding onto its dependencies for the entire application lifetime, when those dependencies were only expected to live for the request scope. Sometimes they seemed to get a different database context, hence the “Connection already open” errors we saw. Sometimes the dependencies had been disposed of by the time they got called by our code, and as such we saw the various exceptions being thrown. In some cases we seemed to still have access to the UserManager, but the context underneath was null. I won’t dive into this too deeply, but it was apparent that we should really make the registration of our classes scoped as well. This way, they are created once per request, the same as their dependencies. We changed the registration code to the following:

services.AddScoped<IAuthenticationManager, AuthenticationManager>();
services.AddScoped<IUserManager, UserManager>();

This resolved the errors we were seeing immediately. It was an annoying oversight, although fortunately it was fairly easy to guess at the cause. The various errors all suggested that we were having some issues with the lifetimes of the dependencies. This case has been a reminder to carefully consider the lifetimes of service registrations, as getting them wrong can produce some unexpected errors and behaviour.

The Microsoft documentation even includes a large warning about scoped services – “The main danger to be wary of is resolving a Scoped service from a singleton. It’s likely in such a case that the service will have incorrect state when processing subsequent requests.” – Oh how true this is!

Happy dependency injecting!

Read More

Things I’ve Learnt This Week (5th February)

I’m keeping up my plan (week 2 yay!) to record and share things I’ve learned during the last week. Less from me this week as it’s been fairly hectic and I’ve been feeling a bit unwell.

Things I’ve Learned

Identity Server 4

As part of my work for Humanitarian Toolbox I’ve been actively investigating Identity Server 4 as an option to handle our authentication. The allReady application currently uses ASP.NET Core Identity within the application to support login and user management. As the product nears its v1 release, discussions have begun around the use cases for the application, including the potential for multi-tenancy and considerations around storage of user accounts. This led Richard Campbell to suggest we look into Identity Server 4 to help with this identity flow.

I was lucky enough to have an audience with Brock Allen and Dominick Baier this week to chat through some of the basics of Identity Server. It was really useful to chat with them both, and I want to thank them for offering their time and support to the project. One of the key takeaways for me from the call was getting a better understanding of where Identity Server fits into the puzzle. It’s about authentication and helping with the OAuth2 / OpenID Connect protocols. It’s not a user management / user store product, although it does sit nicely on top of ASP.NET Core Identity 3 as an option.

As I continue investigating and testing Identity Server 4, I hope to put together some more detailed posts about how we’re using it and what I learn along the way.

Things I’ve Read

In no particular order here’s some of the blogs and posts that I’ve read this week.

Things I’ve Listened To

Things I’ve Watched

Read More

Things I’ve Learnt This Week (29th January)

Whether this will become a regular thing, I’m not sure. But starting this week I’ve been keeping notes on the things I’ve learned, problems I’ve faced and resources that I’ve read, watched or listened to.

I try to consume as much information as possible about ASP.NET and development in a continual drive to learn more and get better at what I do. This includes listening to a regular set of podcasts on my daily commute, reading any blog posts that I can find that relate to things I do or may be doing in the future, and watching videos online. My current focus is around ASP.NET Core so a bulk of the materials I am reading tend to be focused in that area.

I won’t go into explicit details in these posts, as realistically I won’t have time. But I hope to highlight key points of information I have found useful and to share links to things I’ve learned from, hopefully so that others sharing my passion can save some time. It’s also a shameless way to help me remember things as my brain will only hold information for so long!

Things I’ve Learned

Not an exhaustive list (as we’re always learning and that’s one of the things I love about development) but here are a few key things which came to mind after the week has ended. I’ll contain this section to small snippets of information that do not generally warrant a longer, dedicated post.

ASP.NET Core RTM SDK Tooling

I picked up on a point that Damian Edwards mentioned on the weekly ASP.NET community standup this week around the final SDK tooling: I thought I heard him say that for RTM tooling we would have to be using VS 2017 once it’s released. I must admit I hadn’t realised this, or considered the implications of the move to a refined csproj (from project.json) for ASP.NET Core.

I tweeted Damian to clarify this and he was kind enough to answer my questions. The outcome, as I’ve interpreted it, is that there will indeed be no supported RTM tooling for ASP.NET Core on Visual Studio 2015. The tooling we have now, which is preview tooling, will remain available but unsupported. To get a supported ASP.NET Core tooling experience, developers will need to move to VS 2017 or use VS Code.

The nature of the all-new csproj format is that it cannot/will not be implemented in VS 2015. Any projects which are opened in VS 2017 will be auto-migrated to the newer csproj format and, after that, can no longer be developed in VS 2015. It also seems that when ASP.NET Core 2.x lands, it will be supported only on csproj and VS 2017.

I was a bit shocked to learn (and perhaps I was just slow on the uptake) that this was the case. I had assumed we might get RTM tooling for ASP.NET Core on VS 2015, since people have adopted this new platform and will now be forced into an upgrade if they want to continue with the full IDE and proper support. Working on an open source ASP.NET Core project as I do, this means we have to think carefully about when / if we bite the bullet and force a VS 2017 / VS Code only experience by upgrading the project.

Setting cache expiry for static files using OWIN

This week I needed to ensure that javascript and css files served via our site had a proper long cache expiry to help optimise page load times. I’ve used middleware in ASP.NET Core a fair bit, but not touched the ASP.NET 4.x OWIN code much. Our project was using the Microsoft.Owin.StaticFiles.StaticFileMiddleware for serving the static files and it turned out that the change to add the Expiry header was very similar to code I’d used in ASP.NET Core for a different, but similar requirement.

In our case all I had to do was ensure that the StaticFileOptions passed into the StaticFileMiddleware included an OnPrepareResponse action to handle setting the expires header like so.

return new StaticFileOptions
{
	OnPrepareResponse = (ctx) =>
	{
		ctx.OwinContext.Response.Expires = DateTimeOffset.UtcNow.AddYears(1);
	}
};

Misc

  • In VS 2017 we will be able to remote debug ASP.NET Core over SSH.

Things I’ve Read

In no particular order here’s some of the blogs and posts I read this week.

Things I’ve Listened To

Things I’ve Watched

  • Domain-Driven Design: The Good Parts – Jimmy Bogard – I’m keen to understand DDD better and this was a nice talk on some of the concepts. I’m still keen to find some more code based examples but I felt this provided some good foundations.
  • ASP.NET Community Standup – I listen to this weekly to catch up on what’s happening with ASP.NET Core.
  • Public speaking with Scott Hanselman, Kendra Havens, Maria Naggaga Nakanwagi, Kasey Uhlenhuth, and Donovan Brown – This really came at a great time as I start to work on my public speaking. I want to be able to share my passion for ASP.NET Core with others, and while I’ve done a few smaller talks at work I want to build up the level, quality and quantity of speaking I do. I will continue building my confidence in smaller groups at work, but I would like to get to the point where I can present to wider audiences of strangers. I have my next talk on ASP.NET Core nearly finalised for a group of developers at work.

Read More

Updating an ASP.NET Core Site to the December 2016 Release How to upgrade a site on the LTS 1.0.3 version of ASP.NET Core

I ran into an issue this week during what should have been a simple ASP.NET Core application update. I wanted to share my experience in case others run into similar problems. Also, I’m sure to be back here myself to remember this in the future!

On December 13th Microsoft released their second minor patch release for the LTS (Long Term Support) track of .NET Core. ASP.NET Core releases on two tracks depending on how cutting edge you want to be. LTS is the “safer” track, which will be supported and bug fixed during the support lifespan. The other track is FTS (Fast Track Support) which will be where new features appear. You can read more about this on the Microsoft Blog.

As you may be aware from reading my other posts, I’m contributing to an opensource charity project called allReady. We’re currently using the LTS track packages and at the time of writing still targeting the full .NET framework (as opposed to .NET Core). We had applied the last patch release 1.0.1 packages in September without any major problems so I was hoping for the same experience with this patch release.

The details for the release were made available in this Microsoft blog post. If you follow the links to the release notes, you will see that the ASP.NET Core updates are considered version 1.0.3. This is where the versioning starts to get a little murky, in my opinion. ASP.NET Core itself has a version number (now 1.0.3) which tracks general “releases” of the framework. However, the individual packages that actually make up .NET Core and ASP.NET Core also have version numbers and revisions. Those numbers don’t track with the main release version, so it starts to get a bit confusing. You won’t, for example, find a package for Microsoft.AspNetCore.Mvc at version 1.0.3. The latest for that package is 1.0.2.

I’ll now step through how I upgraded our project and then discuss the issue I experienced with the EF commands for entity framework. Before starting to update the project I made sure to install the latest version of the 1.0.3 SDK from the Microsoft website.

Update project.json

This is where the first pain point came for me. The blog posts and release notes didn’t specifically list all of the packages which had been updated, or their latest version numbers. My initial plan was to turn to the VS NuGet Package Manager, where I was hoping I could simply update all of the Microsoft packages to their latest versions. However, since the package manager lists the latest (non pre-release) versions, it was offering me the FTS 1.1.x versions. So a simple “update all” was out of the question.

Next I went into the project.json manually, planning to update each package by hand and allow autocomplete to give me the latest versions. However, autocomplete didn’t always seem to pick up the latest version number automatically, and I was worried about missing something. So I reverted back to the NuGet Package Manager and went one by one through the Microsoft packages, using the install dropdown to select the newest LTS 1.0.x version for each one. This was slow and manual, but at least it meant I knew what options I had and could be explicit in choosing the latest version I wanted.

Here’s a rundown of the packages from our project.json that I needed to update, and the version numbers they are now on (which should be the latest LTS release). Note that our project.json may well differ from newly generated projects, so you may not have all of these packages, and you may have dependencies listed that we do not.

"Microsoft.EntityFrameworkCore.SqlServer": "1.0.2",
"Microsoft.EntityFrameworkCore": "1.0.2",
"Microsoft.ApplicationInsights.AspNetCore": "1.0.2",
"Microsoft.AspNetCore.Mvc": "1.0.2",
"Microsoft.AspNetCore.Mvc.TagHelpers": "1.0.2",
"Microsoft.AspNetCore.Authentication.Cookies": "1.0.1",
"Microsoft.AspNetCore.Authentication.Facebook": "1.0.1",
"Microsoft.AspNetCore.Authentication.Google": "1.0.1",
"Microsoft.AspNetCore.Authentication.MicrosoftAccount": "1.0.1",
"Microsoft.AspNetCore.Authentication.Twitter": "1.0.1",
"Microsoft.AspNetCore.Diagnostics": "1.0.1",
"Microsoft.AspNetCore.Diagnostics.EntityFrameworkCore": "1.0.1",
"Microsoft.AspNetCore.Identity.EntityFrameworkCore": "1.0.1",
"Microsoft.AspNetCore.Server.IISIntegration": "1.0.1",
"Microsoft.AspNetCore.Server.Kestrel": "1.0.2",
"Microsoft.AspNetCore.StaticFiles": "1.0.1",
"Microsoft.AspNetCore.Cors": "1.0.1",
"Microsoft.Extensions.Configuration.Abstractions": "1.0.1",
"Microsoft.Extensions.Configuration.Json": "1.0.1",
"Microsoft.Extensions.Configuration.UserSecrets": "1.0.1",
"Microsoft.Extensions.Logging": "1.0.1",
"Microsoft.Extensions.Logging.Console": "1.0.1",
"Microsoft.Extensions.DependencyInjection.Abstractions": "1.0.1",
"Microsoft.Extensions.Options.ConfigurationExtensions": "1.0.1",
"Microsoft.VisualStudio.Web.BrowserLink.Loader": "14.0.1",
"Microsoft.AspNetCore.Mvc.WebApiCompatShim": "1.0.2",
"Microsoft.Extensions.Logging.Debug": "1.0.1",

I also had to update the dependencies for some of these in our test library, so remember to check there too.

To complete the upgrade I also adjusted the SDK version in our solution’s global.json as follows…

"sdk": {
  "version": "1.0.0-preview2-003156",
  "runtime": "clr",
  "architecture": "x86"
},

With the above changes made I was able to build and run our site via Visual Studio. Great!

It wasn’t until a couple of days later that I hit an issue. While working on an issue, I’d updated our Entity Framework model classes and needed to create my next Entity Framework migration. I did this by running the usual command…

dotnet ef migrations add AddNotifications

After building the project I was faced with the following error:

Could not load file or assembly 'Microsoft.EntityFrameworkCore, Version=1.0.1.0, Culture=neutral, PublicKeyToken=adb9793829ddae60' or one of its dependencies. The located assembly's manifest definition does not match the assembly reference. (Exception from HRESULT: 0x80131040)

At this stage I tried a few things, none of which fixed the problem outright, although they may have contributed to the overall solution. First I cleaned my solution and rebuilt; no joy. Then I wondered if NuGet had cached any incorrect versions of the package, so I cleared my local NuGet cache and tried restoring my project dependencies again. Still no joy! Finally I hopped onto the ASP.NET Core Slack channel and sought help there. Huge thanks to Chad Tolkien, who suggested a manual deletion of my bin and obj folders within the project. I did that and rebuilt the solution. Success! Finally I was able to generate a migration using the EF CLI tooling. So it seems the clean and restore steps previously hadn’t cleaned everything they needed to.

I’d love to know if there’s a better way to manage these updates currently. I’m hoping that with the final tooling release and VS 2017, things will get easier. It would be useful, for example, to be able to choose which track you want to use within the NuGet Package Manager. I’m not sure exactly how that would be achieved, but it would help distinguish the latest packages within your chosen support track. It would also be handy if the Microsoft blog posts about each release included specific details of each updated package and its latest version number. Having a quick reference when updating dependencies would have made my life a little easier. There are some release notes which hint at the main packages and their new version numbers, but they didn’t include all of the components I ended up changing.

Read More

Running a Humanitarian Toolbox Code-A-Thon

I’ve been involved with the Humanitarian Toolbox allReady project for about one year now. I’ve really enjoyed being able to contribute to the cause, while learning plenty along the way. Contributing has made me a better programmer and exposed me to new libraries and techniques which I am already benefiting from in my day job and in other projects.

During my time as a contributor I’ve been aware of the code-a-thons which have taken place in the US, Canada and, more recently, Europe. They’ve always seemed like great events, and I’m excited to be signed up for the 2-day code-a-thon in London next month. A few months ago I discussed my experience with allReady, and my willingness to attend the London event, with the management at my employer, Madgex Ltd. They were very receptive and keen to support the cause. In fact, they are now funding 4 developers (including myself) to attend the full two days in London. Madgex are covering the transportation costs and hotel accommodation, as well as losing 4 developers for 2 days, so this is a generous contribution.

In addition to the NDC event, we had discussions about arranging a code-a-thon at our office in Brighton, one evening after work. I was really keen to get something planned in and to allow other developers to contribute to and experience allReady. With the concept green-lit by senior management, we set about making the idea a reality.

Planning

The starting point was to arrange a 30 minute presentation about Humanitarian Toolbox and allReady to gauge interest and demonstrate the application. I prepared a set of slides and a short demo which I presented in our board room one lunchtime. We had a good turnout of about 16 people in the room and it planted the seed with a few of the attendees, who talked to me afterwards about getting involved.

With the awareness raised, I followed up with some emails to the Madgex staff to further determine whether people would be willing to join an event locally. I got positive indications from enough people, so we picked a date and the invites went out to the staff. We planned our initial event for one evening after work, agreeing that 3 hours would be most practical given that people had already done a full day of work. Over the next few weeks I continued to promote the idea and started to confirm attendees for the evening. We had 7 or 8 people showing good interest as we closed in on the planned date. This was all supported by our developer lead at Madgex, Steve (great name!).

1 week to go

With a week to go we started to ramp up our planning activities. Steve (developer lead), helped with the physical space we needed and secured budget for some pizzas on the night (always a great lure for developers!).

I started combing through issues on GitHub that would be suitable for new contributors. We have a lot of ongoing work with the project, pushing towards our v1 release, but many of the issues that are left are quite complex and require a reasonable knowledge of the codebase. The balance was finding interesting and diverse issues, whilst ensuring that they wouldn’t require too much up-front time spent learning about the entire application. I had decided to try the new project feature inside GitHub to plan and organise the event. I created a few statuses and dropped issues into the “Not Started” category. The project feature is a basic Kanban board which allows cards to be dragged between statuses as the work progresses.

I also put together a pre-requisites list for the confirmed and tentative attendees, guiding them to ensure they had the latest tooling for .NET core installed, had forked and cloned the repository and were able to get it building. This is an important activity since it’s a little time consuming and we didn’t want to spend most of the time at the code-a-thon performing the setup work. As the big day approached we started to firm up numbers so we knew what cabling infrastructure we would need.

On the day

On the morning of the event, I did a final round up of the staff to confirm final attendees. After completing this we had about 6 locally able to attend as well as one of our developers from our North American office in Toronto, Canada. I started collecting GitHub usernames so that we could add people as contributors on the repository. This isn’t absolutely required, but aids in assigning issues to specific people. We also arranged to get people added to the Humanitarian Toolbox Slack channel so they could ask questions from our project experts and the Humanitarian Toolbox founders during the event.

At 4:30pm (with 30 mins to go) I moved into the meeting room we would be using for the evening. I wanted to get the webcam set up and ensure we had the cables for power and networking. Our fantastic systems technician Ricky had beaten me to it and already sorted the cabling requirements. We tested out the remote Skype link to Luke in Canada and made sure we had everything ready. A big thanks to Ricky for volunteering his time to provide some tech support and make sure we got up and running so smoothly.

Humanitarian Toolbox Code-A-Thon Sign
You must have a nice sign when running a code-a-thon!

At 5pm our team of developers migrated like birds in the winter, to the meeting room. We had 5 developers joining me locally, plus Luke video conferencing with us from across the pond. We’d decided to relocate our desktop PCs into this common area so that it would be easier to support each other during the event. While this required a little time up-front to disconnect and reconnect the PCs, it proved a good move as I was able to answer questions, share information and demo things very easily for everyone.

Preparing for the event
Developers getting setup for the event

By about 5:20pm we were in good shape: the computers were moved, developers were settled, and pizza orders had been taken (priorities, people). We started with a brief Google Hangouts standup with the Humanitarian Toolbox team. James and Tony introduced the project and its goals, and thanked everyone for taking the time at the end of their working day to stay on and code for the greater good. We also had one of the most regular project contributors, Mike, on the call, showing his support for the event. Being able to speak to the founders and project team made for a great start and really set the tone for a fun evening, supporting code to save lives.

It’s worth pointing out here a couple of things that James and Tony highlighted during the standup. Firstly, the initial use case for the application will be to aid the American Red Cross with the effort to install free smoke alarms in people’s homes. Already this initiative has helped save lives when disaster struck and a family’s home caught fire. Thanks to a smoke alarm installed by the Red Cross, the family were alerted to the fire and all able to evacuate safely. Also important to note is that for each hour of coding time spent on the application we can expect about 40 hours of volunteer time saved. That’s a huge return and really shows that even sparing a few hours of personal time can have a massive impact.

With the project introduced and the devs eager to get going we started the team off by finding issues people were keen to work on. Sarah, Roberto and Luke took on some unit tests, Patrick picked up a new feature requirement, while Chris dove in with some EF and migrations code for the first time. Mark one of our front end developers started looking at the homepage UI improvements. I floated around the room to answer questions, demonstrate the site functionality and to help with the GitHub flow. I really enjoyed watching new contributors getting up to speed with the code and being able to assist their learning as they progressed. It was good to be able to witness some of the common questions new contributors have so we can focus on lowering the barrier to entry in the future.

Developers working hard during the code-a-thon
Developers working hard during the code-a-thon

There were a lot of new things for everyone to learn, and they did a fantastic job of absorbing the information and becoming productive quickly. All of the developers were new to GitHub and open source, so there was learning to be done around the processes to ensure an up-to-date master branch, to manage rebasing and to prepare pull requests. Most of the team were also experiencing ASP.NET Core for the first time, which in itself has a lot of new concepts to learn. It was also the first exposure to Entity Framework for everyone, so that had its own learning curve too. This really highlights another benefit of contributing to the project for developers: it’s a great learning experience that puts a real-world, production-ready code base in the hands of developers.

During the 3 hours everyone knuckled down and other than a brief break to load up our plates with some pizza, we worked solidly. I had hoped to use the GitHub project feature to keep track of things, but we hit some issues with people not being able to move the cards themselves. While I was able to do this, it proved more of a hindrance as my time was better spent helping people around the room. In the end we abandoned that feature and in hindsight a post-it-note board might have been easier to manage. I still like the concept of using GitHub so that others not physically at the event can monitor progress, but we need to find a way to allow contributors to manage the cards as they work on issues. I suspect it might just be a permissions thing, so I’ll investigate it soon.

By the end of the evening we had submitted two pull requests, which were reviewed and merged into the project before we left. We also had three other issues very close to being completed, which will be finished off in the coming days and hopefully submitted soon. Given the setup time, huge learning curve and relatively short coding period of three hours, I’m very pleased with this achievement. Everyone was amazed when we realised that we had hit the 8pm finish already. Time flew, which is a great sign, and from feedback so far people would have liked even longer with the code. Perhaps an all-day event is on the cards for the future.

I really hope that everyone left feeling as positive and happy as I did. Certainly the sense I got was that everyone enjoyed learning some new things, getting to grips with the code and contributing to a good cause. I feel proud to be part of such a generous team of people who were able to join this code-a-thon after a full day of development at work first. Everyone should be very proud of what they managed to contribute. I’d love to run another session to continue the great start we’ve made and if people are willing, perhaps we can make it a regular thing or even look at a longer full day event.

From a personal perspective, I enjoyed sharing what I’d learned during my time with the project and seeing other developers pick up the concepts for themselves. Prior to this, I’ve never been too keen on the idea of being a “teacher”, and even presenting is outside my normal comfort zone, but I found a bit of a passion for instructing people. I’m a firm believer that sharing information improves our own understanding, as we are challenged to know the material well enough to articulate it.

Code-a-Thon Team Photo
Code-a-Thon Team Photo (Sarah had escaped just prior to this being taken!)

Feedback

The feedback the day after has been extremely positive. I’m very happy to hear that people enjoyed themselves and had a positive and fun experience. It’s nice to spend time coding for fun, outside of the normal day-to-day work. Being able to put your skills to use towards such a positive concept is also very rewarding. From a quick survey afterwards, the team are keen to continue to contribute to the project and would like to take part in another code-a-thon in the future.

Thanks again to everyone who took part:

Our developers: Sarah, Patrick, Chris, Mark, Roberto and Luke
Tech support: Ricky
Planning and management: Steve K
Humanitarian Toolbox Support: James, Tony and Mike

Read More

Custom ModelBinding in ASP.NET MVC Core How to HTML Encode Strings

In a previous post I showed how we could automatically HTML encode data when deserialising it from a JSON request body. In my case this was to meet some specific security requirements we had for an ASP.NET Core API we were building. This time around I will discuss a similar requirement which stated that we should also ensure we HTML encode any strings bound from the route, querystring and form data. We’ll do this by creating a custom ModelBinder.

Much like our earlier requirement, we needed to ensure that we are not storing un-encoded data containing HTML or script tags in our database. While our application escapes the data on the way out, we want to prevent anyone accidentally rendering un-escaped HTML in future applications. We have no requirement to accept HTML code on our API so we decided to ensure each value was encoded by default during binding. As before we wanted to enable this globally so that no developer would have to take specific steps to enable this per controller or action.

Model Binding Flow

Before I jump into the solution, I’ll firstly explain at a high level how the model binding flow works in ASP.NET Core MVC. To understand this better I ended up following steps from this blog post where I discuss how we can add the MVC source solution to our code, allowing us to debug into it. With this in place I was able to step through the model binding code to watch and understand what was happening.

Model binding in principle is quite straightforward. It attempts to match up values coming in on the request with the parameters expected by the controller and its actions. Each value is run through the model binding flow, which looks for a suitable binder to handle the value. To find a suitable binder, ASP.NET Core MVC uses binder providers. These providers are registered when MVC is initialised and by default include 14 different providers. Each of these implements the IModelBinderProvider interface.

There is a provider to handle key value pairs, for example, another for complex types and another for simple types. These binders are registered in a specific order, and MVC checks each provider in that order during the binding process until it finds the first provider which can supply a suitable binder for the object being bound. Each binding provider has conditions that check the binding provider context; once those conditions are met, the provider returns a binder that can handle the binding.

Once we have a model binder available, its BindModelAsync method is called. This method handles the value of the object being bound and returns a ModelBindingResult. If the binding succeeds as expected, a Success result is returned, including the final value to be bound to the property. I hope to spend more time exploring the binding process in a future post; it’s a little beyond the scope here to explain the deeper details of how everything is hooked up. For now, let’s look at how we meet the requirement above to HTML encode values during the binding process.

Let’s assume we have the following action.
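(A sketch; the action and parameter names are illustrative.)

[HttpPost]
public IActionResult UpdateTitle(string newTitle)
{
    // with our custom binder in place, newTitle arrives here already HTML encoded
    return Ok();
}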

We want to ensure that the newTitle property is HTML encoded by the time we can access it within the action.

Creating a ModelBinder

The starting point for our solution was to create a model binder based on the IModelBinder interface.
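(A sketch reconstructed from the description below; Task.CompletedTask stands in for the internal TaskCache.CompletedTask helper mentioned shortly.)

public class HtmlEncodeModelBinder : IModelBinder
{
    private readonly IModelBinder _fallbackBinder;

    public HtmlEncodeModelBinder(IModelBinder fallbackBinder)
    {
        if (fallbackBinder == null)
            throw new ArgumentNullException(nameof(fallbackBinder));

        _fallbackBinder = fallbackBinder;
    }

    public Task BindModelAsync(ModelBindingContext bindingContext)
    {
        var valueProviderResult = bindingContext.ValueProvider.GetValue(bindingContext.ModelName);

        if (valueProviderResult == ValueProviderResult.None)
            return Task.CompletedTask; // no value supplied for this model

        var valueAsString = valueProviderResult.FirstValue;

        if (string.IsNullOrEmpty(valueAsString))
            return _fallbackBinder.BindModelAsync(bindingContext); // let the fallback handle null/empty

        // HTML encode the incoming value before completing the binding
        var encodedValue = HtmlEncoder.Default.Encode(valueAsString);
        bindingContext.Result = ModelBindingResult.Success(encodedValue);

        return Task.CompletedTask;
    }
}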

We initialize the binder by passing in an IModelBinder as a fallback. In my case I specifically expect to handle string values, but I don’t want to worry about handling null or empty strings. That is already covered by the default SimpleTypeModelBinder provided in MVC, so I chose to pass in a binder onto which I can hand off responsibility in those cases.

Within the BindModelAsync method we first call the ValueProvider.GetValue method on the value provider in the bindingContext, passing in the model name, which returns a ValueProviderResult. As long as there is a value in the result, we check whether it’s null or empty. If so, this is where we call the fallback binder’s BindModelAsync method. If we do have a suitable value, we HTML encode it before creating a success ModelBindingResult. Finally we return a completed task using the internal helper TaskCache.CompletedTask.

Creating a ModelBinderProvider

Now that we have a model binder defined, we need to create a provider which will determine whether the model binder is suited to the object being bound. Here is the code…
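(Again a sketch matching the description that follows.)

public class HtmlEncodeModelBinderProvider : IModelBinderProvider
{
    public IModelBinder GetBinder(ModelBinderProviderContext context)
    {
        if (context == null)
            throw new ArgumentNullException(nameof(context));

        // only handle simple string models; everything else falls
        // through to the remaining providers
        if (!context.Metadata.IsComplexType && context.Metadata.ModelType == typeof(string))
        {
            return new HtmlEncodeModelBinder(new SimpleTypeModelBinder(context.Metadata.ModelType));
        }

        return null;
    }
}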

This provider is pretty simple. We must implement the GetBinder method on the IModelBinderProvider interface. We use the Metadata on the ModelBinderProviderContext to determine whether we can provide a suitable binder. This metadata includes some helper properties, such as the IsComplexType flag, to help us determine if we can provide binding for the object. In this case we are looking only for strings. If the object being bound is a string, we return our custom HtmlEncodeModelBinder; otherwise we return null. When we return null, the next binding provider is given the chance to provide a binder, and this continues until a suitable binder is found. You’ll notice that we pass in a new SimpleTypeModelBinder, which acts as our fallback for any null or empty string cases we encounter during binding.

Now that we have a binder and a provider, the final step is to add this provider to the list of binding providers. Since these are executed in a set order we also need to place our provider in the right place. I’ve achieved this using the following extension to the MVC options.
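(A sketch of the options extension described below.)

public static class MvcOptionsExtensions
{
    public static void UseHtmlEncodeModelBinding(this MvcOptions opts)
    {
        // find the SimpleTypeModelBinderProvider so we can insert our
        // provider immediately before it in the ordered list
        var binderToFind = opts.ModelBinderProviders
            .FirstOrDefault(x => x.GetType() == typeof(SimpleTypeModelBinderProvider));

        if (binderToFind == null)
            return;

        var index = opts.ModelBinderProviders.IndexOf(binderToFind);
        opts.ModelBinderProviders.Insert(index, new HtmlEncodeModelBinderProvider());
    }
}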

What we are doing here is using LINQ to find the SimpleTypeModelBinderProvider in the list of ModelBinderProviders. Our binder provider needs to run just before the simple type binder, to ensure we have the opportunity to handle strings with our HTML encoding logic. If we placed it after the SimpleTypeModelBinderProvider, we would find that the binding flow never reached our code, as the binding would already have been handled. We then get the index of the SimpleTypeModelBinderProvider and insert our binder provider into the list at that position. Now, when MVC binding occurs, our binder provider is part of the binding process. If we inspect the ModelBinderProviders during debug, it should now look like this:

ASP.NET Core ModelBinders List

You can see our new custom HtmlEncodeModelBinderProvider listed before the SimpleTypeModelBinderProvider.

With everything complete, we can do the final wiring up. We call our options extension when adding MVC within the ConfigureServices method in Startup.cs:
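(Assuming the extension method shown above.)

services.AddMvc(options =>
{
    options.UseHtmlEncodeModelBinding();
});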

Now when the newTitle parameter is bound on our example action, it will ensure that any HTML tags are safely encoded. As with all of my posts, I’ve taken the best approach I could think of to implement this. I welcome any comments and suggestions to improve this code or correct any mistakes!

Read More

R.I.P project.json – Out with the new, in with the old An early look at the VS 2017 csproj changes

Okay, the title of the post is a bit tongue-in-cheek! Since it was first announced that the new project.json format (developed by the ASP.NET Core team) was going to be retired in favour of the more traditional csproj file, the .NET community have had some very strong opinions. These have been shared on Twitter, in blog posts, on GitHub and via any other channels people could find. I will admit that when I heard about the change, I initially had some of the same views and concerns. Why would Microsoft take away my beloved project.json? However, I decided to wait it out and see what Microsoft actually produced before passing judgement.

A little bit of history

Let me take a step back here and try to explain a little bit of the history as I have understood it. In the early beta timeframe for ASP.NET Core (then called ASP.NET 5) the ASP.NET and .NET teams were working independently of one another. The .NET team were working heavily on the API surface for .NET Core, whilst the ASP.NET team were working on the ASP.NET Core application platform on top of the API the .NET team were making. The tooling to work with ASP.NET Core was in an early preview and at the time we had DNVM and DNU commands. These have since been consolidated and morphed to form the dotnet CLI.

In order to support development of ASP.NET projects, the team decided to take the opportunity to develop a brand new project file format. One which would be simpler and address many of the issues the community had experienced when working with csproj based projects. At the time, nothing existed specifically for .NET Core projects and it was felt that since ASP.NET Core represented a new beginning for the platform, it was as good a time as any to make changes. During the betas and release candidates people started to get their hands on the new project.json and xproj based solutions when building their web applications. The response was indeed very positive. project.json provided autocomplete of dependencies, ease of reading and a much simpler overall structure.

Personally, I was very pleased when project.json was born. I could easily understand it and it became a regular reference point to review my dependencies and manage my projects.

However, it was during the release candidate period that decisions started to be made about a wider .NET project format for the other application types. Customers were starting to share their concerns around project.json and the fact that it was not supported within MSBuild. What would this mean for large monolith projects? How would they even begin to migrate them to .NET Core? And, to be fair, these were valid concerns. Starting with greenfield ASP.NET Core projects was fantastic; there was no need to concern ourselves with the past, and the new project.json was easy to get to grips with. But working with an existing project would not necessarily be a simple conversion without some degree of work. So the announcements started to be made that project.json would be retired in favour of a more backwards-compatible, MSBuild-ready csproj format. It was too late in the day for this work to be completed before the RTM of ASP.NET Core 1.0, but we were warned that it would be worked on for release alongside Visual Studio 2017.

And that brings us to today, with the announcement of the release candidate for Visual Studio 2017 we can now get our hands on projects using the new (or should I say old) csproj project format. I’ve taken an early peek at the file to see what has changed.

Comparing project.json (VS 2015) with csproj (VS 2017 RC)

While working on the change, Microsoft assured us that they expected to port many of the improvements the project.json file gave developers over to the improved csproj format.

Comparing project.json and the new csproj

In the screenshot above (hard to see due to the size) I’ve opened a default new Web Application project in VS 2015 (left) and VS 2017 (right). Side by side, the obvious difference is that project.json is JSON and csproj is XML. You will notice that the line length of the files is not too different: project.json comes in at 65 lines and csproj comes in at 77 lines. That is already a significant reduction in lines from a traditional csproj file.

Here are the complete files:

The other thing which stands out to me is the readability. Despite the work Microsoft have done to reduce noise in the csproj file, I still find it much harder to scan than the project.json. The XML tags draw my eye away from the detail that I actually want to consume.

At the time of writing I’m still very new to working inside VS 2017, and indeed on my computer it’s proving quite buggy. For example, a solution that I created inside VS 2017, and which had been working fine (1 web application and 1 test class library), no longer re-opens after I saved and closed it. Suffice to say, I won’t be using Visual Studio 2017 for production work at this time, and I may even remove it entirely as it seems to be affecting my experience inside VS 2015 too.

One nice advantage of the csproj file over project.json is that we can now include references to full (non-Core) .NET projects as well. Previously that integration was not available.

Something I do miss, however (unless it’s just not working for me), is proper autocomplete of the packages and versions. With project.json I could start typing the name of a dependency and Visual Studio would show me potential autocomplete options. I could also choose from the available versions. This seems to be absent in the csproj file. Personally I only saw basic IntelliSense for some, but not all, of the XML tags.

Migrating from project.json

Migration from project.json can be achieved in one of two ways. The easiest is to open the existing xproj/project.json project inside Visual Studio 2017. The IDE will detect the project format and prompt for a migration. The other option is to run the dotnet migrate command directly on the command line. This should result in the same conversion.

Migration from project.json

In a quick test for me, the VS migration seemed to work quite well, although that was a single default web application project with basic dependencies. In theory it should work just as well for larger projects. Perhaps when we come to migrate the allReady project which I contribute to, we’ll see if there’s more to it.

Having done this with the project.json solution created in VS 2015 (as appeared above) we get the following csproj output.
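(The full generated file isn’t reproduced here; a heavily trimmed sketch of the RC-era shape, with illustrative package versions, since the exact schema was still evolving at the time:)

<Project ToolsVersion="15.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <TargetFramework>netcoreapp1.0</TargetFramework>
  </PropertyGroup>
  <ItemGroup>
    <Compile Include="**\*.cs" />
    <Content Include="wwwroot\**\*" />
  </ItemGroup>
  <ItemGroup>
    <PackageReference Include="Microsoft.AspNetCore.Mvc" Version="1.0.1" />
    <PackageReference Include="Microsoft.AspNetCore.Server.Kestrel" Version="1.0.1" />
  </ItemGroup>
</Project>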

Inside the project.json we have our main dependencies section and inside that we define the package and version. This is pretty clear and easy to read. Inside the csproj each package reference is added inside an ItemGroup element. It’s more verbose than the project.json.

Much of the content is similar and when you compare the files you can generally find where each part has been migrated. However you’ll need to remember the more complex XML schema in order to edit the file manually. Indeed, Microsoft seem to be pushing us away from this in favour of IDE tooling and the command line. I fear though, that those are never going to be as quick to work with as we’ve been used to with a quick edit of the project.json file.

Differences between the original csproj and the new csproj

There are some other notable changes between what we see in a traditional csproj file when compared to the newer csproj file. Firstly, they have gotten rid of the use of GUIDs within the file, so that it is much more human readable. That’s a good change and one I’m happy to see.

We are no longer required to include all of the files within the csproj file in order for them to be considered part of the project. project.json gave us a much more favourable, include by default, behaviour and this has now been made available inside csproj. It’s ever so slightly different as we must use a wildcard style include line to tell the project to behave this way. New projects created inside VS 2017 have this by default. When working with teams of developers, the csproj file was a common area of merge conflicts. We should no longer have so many issues since the project file will not change simply because we’ve added new files to the project.

Package references are now integrated into csproj so this single file also includes any libraries our project is dependent on and any projects we reference. It’s nice to have everything in one place as we’ve become used to with the project.json format.

Another important change is that we can now edit the csproj file whilst the project is loaded. This was not previously possible and meant that any manual changes were slow to make as we had to first unload the project. Granted, it was rare that I wanted to manually edit the file in the full .NET framework days. To edit the csproj file we can simply right click on the project without unloading it first.

Edit csproj inside VS 2017

One gripe at this point that I experienced is a warning for inconsistent line endings every time I edit the csproj file.

Inconsistent line endings

Other project differences in VS 2017

As well as the direct differences inside the file, there’s a few initial tooling differences I wanted to highlight between VS 2015 and VS 2017 that I’ve noticed.

The main one I’ve seen so far is the consolidated dependencies folder. Before, we had a “References” folder which included any dependent libraries we added via Nuget and any project references. We then had a “Dependencies” folder which included Bower dependencies. Inside Visual Studio 2017 these have been placed together inside a single “Dependencies” container. Project references have their own sub-container to separate them from the Nuget dependencies.

The consolidated Dependencies folder in VS 2017

Initial Impressions

I definitely don’t find the XML as readable as the JSON for reviewing the configuration of my project. This will improve with time as I get used to it, but there’s still too much noise in the file for my liking. It’s certainly a big improvement on the traditional csproj format though and the wildcard include of files is an important enhancement.

Personally, I still much prefer the project.json format, and whilst I understand the business case that led Microsoft to their change of course, I feel it’s a shame the MSBuild system couldn’t have moved with the times rather than dropping back to the XML-based project file.

The lack of autocomplete for dependencies bothers me too. This was really handy and I could quickly build up my dependencies without having to open up the NuGet Package Manager each time. It feels like we’re now being forced down that route, which proves to be a bit slower and less efficient.

It’s still very early days and I need to spend more time with the new version of csproj in a final stable version of VS 2017 before passing a final judgement. We also have to remember that the .NET SDK tooling is not quite fully baked yet either. At this early point I’m running into quite a few issues running VS 2017 so it’s hard to fully appreciate the final experience.

The Microsoft statements suggest that in reality the tooling and command line interface should negate the need to spend much, if any, time editing the csproj manually. Once everything is complete we will be in a better place to judge for ourselves. I can’t help but assume things will end up being slower overall and that I’ll end up missing the ease of the project.json file.

Update 23rd Nov 2016

After reviewing https://blogs.msdn.microsoft.com/dotnet/2016/10/19/net-core-tooling-in-visual-studio-15/ I can see that the format for the new csproj that I have experienced in VS 2017 RC doesn’t match the example of what I presume is the end goal. So it’s entirely possible that the structure will evolve even more before release. The example in the MSDN blog does look cleaner and much less cluttered so I will watch this space with interest.

Read More