Migrating from .NET Framework to .NET Core The journey of re-targeting an ASP.NET Core application onto .NET Core

What better way to start the new year than with some coding? In my case I’d found a bit of time during the end of my Christmas vacation to tackle an issue on allReady I’d been wanting to work on for a while. Before getting into the issue, if you want to read more about what allReady is you can read my earlier blog post where I cover contributing to it in greater detail.

allReady was first created during the early betas of ASP.NET Core (at which point it was known as ASP.NET 5). At that time, due to some dependencies which did not yet support .NET Core, the application was built on top of the full .NET Framework 4.6.

The first thing to discuss briefly is why and how we can run ASP.NET Core on the full, traditional .NET Framework. While it might be reasonable to assume that ASP.NET Core naturally needs to run on the new .NET Core framework, remember that .NET Core ultimately exposes a subset of the larger full .NET Framework API. Because of this, it is perfectly possible to run ASP.NET Core on the original 4.x versions of the .NET Framework. This is an advantage for those who want to make use of the new features and optimisations within ASP.NET Core, while still utilising the libraries they know and love. It is a perfectly reasonable way to run ASP.NET Core, but with an important limitation: by targeting the .NET Framework you are not able to develop or host the ASP.NET Core application outside of Windows.

For the allReady project this had started to become a bit of a limitation. We had Mac users who were keen to contribute to the project, but were unable to do so without installing and running a Windows VM on their device. As we were getting to the end of some complex requirements for allReady’s v1 milestone, I took the opportunity to spend some time looking at retargeting the application to run on .NET Core. This turned out to be a bit more involved than I had first anticipated. However, we have succeeded and the code base has been proven to build and run on both Mac and Linux devices.

Updating the Solution

The first part of the process was to update the project.json file inside the web solution. The main update here was changing the framework specification from net46 to netcoreapp1.0.

project.json before:
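(The original file is not reproduced here; this is a simplified sketch of the relevant frameworks section, with other sections omitted.)

"frameworks": {
  "net46": { }
}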

project.json after:
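(And afterwards, again as a simplified sketch:)

"frameworks": {
  "netcoreapp1.0": {
    "imports": [
      "dotnet5.6",
      "portable-net45+win8"
    ]
  }
}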

We had decided as a team to continue to track the .NET Core 1.0.x LTS support stream. In short, this is a stable version of the framework, under full Microsoft support. New minor patch releases are appearing to fix major bugs or security issues, but otherwise the changes are expected to be quite limited. Our project was already targeting the latest 1.0.3 SDK and latest 1.0.x libraries for each of the components.

You’ll see above that as well as changing the framework moniker I’ve added an imports section. This was required in our case as we have dependencies on some libraries which don’t yet specify a .NET Core-compatible Target Framework Moniker or .NET Standard version. It’s essentially there for backwards compatibility until the ecosystem settles down.

Handling Dependencies

Upon making the required changes to project.json, it became apparent that some of the packages we depend on could no longer be restored. The first thing I did was to use the NuGet Package Manager to update all of the dependencies to their latest versions to see if any of them had been updated to support .NET Core. This helped in some cases but still left me with a few that were not compatible. For the time being I commented those out, as well as commenting out any code using those dependencies. In our case the main libraries we had issues with were…

  • ZXing, which was being used in one place to generate a QR code image for use within the allReady phone application. This had its own dependency on System.Drawing, which is not part of .NET Core.
  • LinqToTwitter, which was being used in one area of the application to query for a user’s information after a sign in via Twitter.
  • GeoCoding.net, which was used in a few places to take an address and geocode it to latitude and longitude coordinates.

With those taken care of for now (if only commenting out code was considered “fixing”), the last remaining issue for the web project was a dependency on a separate library project in our solution which contains some shared models. This library is used by both our web application and some Azure web jobs we have defined. As a result it needed to support both .NET Core and the .NET Framework. The solution here was to convert it to a .NET Standard library. In my case I chose the lowest standard version which gave me the API surface I needed while supporting my dependent projects, which was .NET Standard 1.3. The project.json for this library was changed as follows.

Before:
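(A simplified sketch of the relevant section; the library originally targeted only the full framework:)

"frameworks": {
  "net46": { }
}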

After:
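(And following the conversion to .NET Standard 1.3; the NETStandard.Library version shown is illustrative of the packages current at the time:)

"frameworks": {
  "netstandard1.3": { }
},
"dependencies": {
  "NETStandard.Library": "1.6.0"
}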

With this done, I also had to update our test library project.json in a similar way in order to support .NET Core by changing the frameworks section and updating to the latest versions of our dependencies as with the main web project. The changes in this file were very similar to what I’ve already shown, so I won’t repeat the code here.

Wrapping missing dependencies

With the package restore now succeeding for the web project, test project and our .NET standard project my next goal was to find solutions to the libraries we could no longer use.

ZXing was an easy decision: the phone application project is currently a bit stale and it was decided that we could simply remove the QR code endpoint and this dependency along with it.

When I reviewed the use of LinqToTwitter it was only really abstracting one API call to Twitter behind a more fluent LINQ syntax. We were using it to request a Twitter user’s profile information. The option I chose here was to write our own small service to wrap the call to the Twitter REST API using HttpClient. There are some hoops to jump through in order to establish the authentication with Twitter, but this ultimately didn’t prove too difficult. I won’t go into the exact details here since this post is intended to cover the general “migration” process.

The final dependent library that wasn’t able to support .NET Core was GeoCoding.net. We were using this library to perform address to coordinate lookups via Google. On review of the usages, it was again clear that we had a whole dependency for what amounted to a single call to a Google endpoint. We were already calling another part of the Google REST API directly from a custom service class, so again, the decision was to wrap up the geocoding functionality in our own code, calling the API directly. Again, the exact details are beyond my scope here, but I may cover this in a future blog post if there’s interest.

Handling API Differences

With the problem packages removed I tried to compile the solution. I was immediately hit by a wall of errors. This was initially a little daunting, but after looking through them I could see that many were quite similar and due to changes to the .NET Core API surface.

Reflection

The first of these was where we’d been using reflection. We have a couple of places in the web app that rely on reflection for wiring up our IoC container, and also in some of the test classes. In .NET Core the Type returned by Object.GetType() no longer exposes the full reflection API surface; much of that detail has moved to a separate TypeInfo class. In order to access the full type information there is a new extension method called GetTypeInfo(), which returns the TypeInfo for the type.

Before:
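(Our exact code is not shown here; a representative example of the kind of call affected, getting hold of a type’s containing assembly, would be:)

var assembly = typeof(Startup).Assembly;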

After:
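(The same call once the GetTypeInfo step is added:)

var assembly = typeof(Startup).GetTypeInfo().Assembly;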

In this example you can see that the only change is adding in the GetTypeInfo call.

Date Formatting

We also had some code calling ToLongDateString on dates. This is no longer available, so I switched to formatting the date using the ToString method instead.

Before:
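(Again a representative example rather than our exact code; the campaign.EndDate property here is illustrative.)

var displayDate = campaign.EndDate.ToLongDateString();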

After:
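(The equivalent using the "D" long date format specifier:)

var displayDate = campaign.EndDate.ToString("D");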

Exceptions

In a couple of places we were throwing ApplicationException which also seems to have been removed. I switched to a general Exception in this case.

Before:
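(A representative example; the message text is illustrative.)

throw new ApplicationException("Unable to send the confirmation email.");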

After:
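(And the equivalent with the general Exception type:)

throw new Exception("Unable to send the confirmation email.");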

Final Tweaks

Once all of the compile errors were fixed we were nearly there. In order to enable the site to run directly on Kestrel (rather than via IIS Express) I updated the profiles in launchSettings.json.

Before:
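(A trimmed sketch showing only the profiles section, with values illustrative of our setup rather than copied from the real file:)

{
  "profiles": {
    "IIS Express": {
      "commandName": "IISExpress",
      "launchBrowser": true,
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      }
    }
  }
}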

After:
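(And with an additional profile added for running the project directly via dotnet.exe; the profile name here is illustrative:)

{
  "profiles": {
    "IIS Express": {
      "commandName": "IISExpress",
      "launchBrowser": true,
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      }
    },
    "AllReady.Web": {
      "commandName": "Project",
      "launchBrowser": true,
      "launchUrl": "http://localhost:5000",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      }
    }
  }
}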

This new profile basically tells VS to run the project directly (i.e. via dotnet.exe), launching it on port 5000. Within Visual Studio you can choose whether to start via IIS Express or directly via dotnet.exe. New projects targeting .NET Core include these two profiles automatically, so updating our project to match felt sensible and keeps the developer experience familiar.

Conclusion

With the above steps completed the project was now compiling, tests were running and the site worked as expected. Quite a few hours of work went into the process. The longest time was spent building the new service wrappers around the 3rd party REST APIs so that I could get rid of incompatible dependencies. The actual project and code changes weren’t too painful and it was just a case of working out the API changes so I knew how to modify our code.

It was an interesting experience and once we had it tested on a Mac and Linux during the NDC London code-a-thon it was very exciting to see it working. This has made it much easier for non-Windows developers to contribute to the allReady project and is just one of the great new benefits of working with .NET Core and ASP.NET Core.

Read More

A Reminder to Take Care when Registering Dependencies

I looked into a “fun” little problem yesterday where we were seeing occasional errors in some of our ASP.NET Core code which calls down into the ASP.NET Core Identity UserManager. We were getting a range of NullReferenceExceptions and ObjectDisposedExceptions, as well as various exceptions from Npgsql.NpgsqlConnection (we use Postgres rather than SQL Server in this project) stating things such as “Connection already open”.

The issue was presenting itself within some authentication and authorisation code we have which wraps and extends the ASP.NET Core Identity UserManager and SignInManager functionality. We use ASP.NET Identity over a Postgres database for our user store but include some application-specific functionality with our own code. We have our own AuthenticationManager and UserManager classes, both of which take dependencies on the underlying Microsoft.AspNetCore.Identity classes.

These originally got registered in the ConfigureServices method of the Startup.cs class as follows:

services.AddSingleton<IAuthenticationManager, AuthenticationManager>();
services.AddSingleton<IUserManager, UserManager>();

The constructor for our AuthenticationManager looks a bit like this:

public AuthenticationManager(UserManager<ApplicationUser> userManager, SignInManager<ApplicationUser> signInManager)
{
   // setup our authentication manager here
}

Do you see the problem?

The issue here is the singleton registration. While our classes themselves have no state and could be shared between requests, the dependencies on Microsoft.AspNetCore.Identity.UserManager<T> and Microsoft.AspNetCore.Identity.SignInManager<T> have to be considered.

If we take a look at where these are registered within the Microsoft.AspNetCore.Identity source we can see the following:

services.TryAddScoped<UserManager<TUser>, UserManager<TUser>>();
services.TryAddScoped<SignInManager<TUser>, SignInManager<TUser>>();

They are added using the scoped lifetime which means they expect to be created once per request. They themselves depend on an IUserStore which is registered with the scoped lifetime as well.

As a result, the singleton registration of our AuthenticationManager was trying to hang onto its dependencies for the entire application lifetime, where those dependencies only expected to live for the request scope. Sometimes they seemed to get a different database context, hence the “Connection already open” errors we saw. Sometimes the dependencies had been disposed of by the time they got called by our code, and as such we saw the various exceptions being thrown. In some cases we seemed to still have access to the UserManager but the context underneath was null. I won’t dive into this too deeply, but it was apparent that we should really make the registration of our classes scoped as well. This way, they are created once per request, the same as their dependencies. We changed the registration code to the following:

services.AddScoped<IAuthenticationManager, AuthenticationManager>();
services.AddScoped<IUserManager, UserManager>();

This resolved the errors we were seeing immediately. It was an annoying oversight, although fortunately it was fairly easy to guess at the cause. The various errors all suggested that we were having some issues with the lifetime of the dependencies. This case has been a reminder to carefully consider the lifetimes of service registrations, as getting them wrong can produce some unexpected errors and behaviour.

The Microsoft documentation even includes a large warning about scoped services – “The main danger to be wary of is resolving a Scoped service from a singleton. It’s likely in such a case that the service will have incorrect state when processing subsequent requests.” – Oh how true this is!

Happy dependency injecting!

Read More

Updating an ASP.NET Core Site to the December 2016 Release How to upgrade a site on the LTS 1.0.3 version of ASP.NET Core

I ran into an issue this week during what should have been a simple ASP.NET Core application update. I wanted to share my experience in case others run into similar problems. Also, I’m sure to be back here myself to remember this in the future!

On December 13th Microsoft released their second minor patch release for the LTS (Long Term Support) track of .NET Core. ASP.NET Core releases on two tracks depending on how cutting edge you want to be. LTS is the “safer” track, which will be supported and bug fixed during the support lifespan. The other track is FTS (Fast Track Support) which will be where new features appear. You can read more about this on the Microsoft Blog.

As you may be aware from reading my other posts, I’m contributing to an open source charity project called allReady. We’re currently using the LTS track packages and at the time of writing are still targeting the full .NET Framework (as opposed to .NET Core). We had applied the last patch release 1.0.1 packages in September without any major problems, so I was hoping for the same experience with this patch release.

The details for the release were made available in this Microsoft blog post. If you follow the links to the release notes you will see that the ASP.NET Core updates are considered version 1.0.3. This is where the versioning starts to get a little murky in my opinion. ASP.NET Core itself has a version number (now 1.0.3) which tracks general “releases” of the framework. However, the individual packages that actually make up .NET Core and ASP.NET Core also have version numbers and revisions. Those numbers don’t track with the main release version, so it starts to get a bit confusing. You won’t for example find a package for Microsoft.AspNetCore.Mvc at version 1.0.3. The latest for that package is 1.0.2.

I’ll now step through how I upgraded our project and then discuss the issue I experienced with the Entity Framework CLI commands. Before starting to update the project I made sure to install the latest version of the 1.0.3 SDK from the Microsoft website.

Update project.json

This is where the first pain point came for me. The blog posts and release notes didn’t specifically list all of the packages which had been updated, nor their latest version numbers. So my initial plan was to turn to the VS NuGet Package Manager where I was hoping I could simply update all of the Microsoft packages to the latest versions. However, since the package manager lists the latest (non pre-release) versions, it was offering me the FTS 1.1.x versions. So a simple “upgrade all” option was out of the question.

Next I went into the project.json manually, planning to update each package by hand and allowing autocomplete to give me the latest versions. However, autocomplete didn’t always seem to pick up the latest version number for me and I was worried about missing something. So I reverted back to the NuGet Package Manager and went one by one through the Microsoft packages, using the install dropdown to select the newest LTS 1.0.x version for each one. This was slow and manual but at least meant I knew what options I had and could be explicit in choosing the latest version I wanted.

Here’s a rundown of the packages from our project.json that I needed to update and the version numbers they are now on (which should be the latest LTS releases). Note that our project.json may well differ from newly generated projects, so you may not have all of these packages and you may even have dependencies listed that we do not.

"Microsoft.EntityFrameworkCore.SqlServer": "1.0.2",
"Microsoft.EntityFrameworkCore": "1.0.2",
"Microsoft.ApplicationInsights.AspNetCore": "1.0.2",
"Microsoft.AspNetCore.Mvc": "1.0.2",
"Microsoft.AspNetCore.Mvc.TagHelpers": "1.0.2",
"Microsoft.AspNetCore.Authentication.Cookies": "1.0.1",
"Microsoft.AspNetCore.Authentication.Facebook": "1.0.1",
"Microsoft.AspNetCore.Authentication.Google": "1.0.1",
"Microsoft.AspNetCore.Authentication.MicrosoftAccount": "1.0.1",
"Microsoft.AspNetCore.Authentication.Twitter": "1.0.1",
"Microsoft.AspNetCore.Diagnostics": "1.0.1",
"Microsoft.AspNetCore.Diagnostics.EntityFrameworkCore": "1.0.1",
"Microsoft.AspNetCore.Identity.EntityFrameworkCore": "1.0.1",
"Microsoft.AspNetCore.Server.IISIntegration": "1.0.1",
"Microsoft.AspNetCore.Server.Kestrel": "1.0.2",
"Microsoft.AspNetCore.StaticFiles": "1.0.1",
"Microsoft.AspNetCore.Cors": "1.0.1",
"Microsoft.Extensions.Configuration.Abstractions": "1.0.1",
"Microsoft.Extensions.Configuration.Json": "1.0.1",
"Microsoft.Extensions.Configuration.UserSecrets": "1.0.1",
"Microsoft.Extensions.Logging": "1.0.1",
"Microsoft.Extensions.Logging.Console": "1.0.1",
"Microsoft.Extensions.DependencyInjection.Abstractions": "1.0.1",
"Microsoft.Extensions.Options.ConfigurationExtensions": "1.0.1",
"Microsoft.VisualStudio.Web.BrowserLink.Loader": "14.0.1",
"Microsoft.AspNetCore.Mvc.WebApiCompatShim": "1.0.2",
"Microsoft.Extensions.Logging.Debug": "1.0.1",

I also had to update the dependencies for some of these in our test library, so remember to check there too.

To complete the upgrade I also adjusted the SDK version in our solution’s global.json as follows…

"sdk": {
  "version": "1.0.0-preview2-003156",
  "runtime": "clr",
  "architecture": "x86"
},

With the above changes made I was able to build and run our site via Visual Studio. Great!

It wasn’t until a couple of days later that I hit an issue. While working on an issue I’d updated our Entity Framework model classes and needed to create the next Entity Framework migration. I did this by running the usual command…

dotnet ef migrations add AddNotifications

After building the project I was faced with the following error:

Could not load file or assembly 'Microsoft.EntityFrameworkCore, Version=1.0.1.0, Culture=neutral, PublicKeyToken=adb9793829ddae60' or one of its dependencies. The located assembly's manifest definition does not match the assembly reference. (Exception from HRESULT: 0x80131040)

At this stage I tried a few things, none of which fixed the problem outright, although they may have contributed to the overall solution. Firstly I cleaned my solution and rebuilt; no joy. Then I wondered if NuGet had cached any incorrect versions of the package, so I cleared my local NuGet cache and tried restoring my project dependencies again. Still no joy! Finally I hopped onto the ASP.NET Core Slack channel and sought help there. It was with huge thanks to Chad Tolkien that he suggested a manual deletion of my bin and obj folders within the project. I did that and rebuilt the solution. Success! Finally I was able to generate a migration using the EF CLI tooling. So it seems the earlier clean and restore steps hadn’t cleaned everything they needed to.

I’d love to know if there’s a better way to manage these updates currently. I’m hoping that with the final tooling release and VS 2017 things will get easier. It would be useful, for example, to be able to choose which release track you want to use within the NuGet Package Manager. I’m not sure exactly how that would be achieved, but it would help to distinguish the packages within your chosen support track so you could take the latest of those. It would also be handy if the Microsoft blog posts about each release included specific details of each updated package and its latest version number. Having a quick reference when updating dependencies would have made my life a little easier. There are some release notes which hint at the main packages and their new version numbers, but they didn’t include all of the components that I ended up changing.

Read More

Custom ModelBinding in ASP.NET MVC Core How to HTML Encode Strings

In a previous post I showed how we could automatically HTML encode data when deserialising it from a JSON request body. In my case this was to meet some specific security requirements we had for an ASP.NET Core API we were building. This time around I will discuss a similar requirement which stated that we should also ensure we HTML encode any strings bound from the route, querystring and form data. We’ll do this by creating a custom ModelBinder.

Much like our earlier requirement, we needed to ensure that we are not storing un-encoded data containing HTML or script tags in our database. While our application escapes the data on the way out, we want to prevent anyone accidentally rendering un-escaped HTML in future applications. We have no requirement to accept HTML code on our API so we decided to ensure each value was encoded by default during binding. As before we wanted to enable this globally so that no developer would have to take specific steps to enable this per controller or action.

Model Binding Flow

Before I jump into the solution, I’ll firstly explain at a high level how the model binding flow works in ASP.NET Core MVC. To understand this better I ended up following the steps from this blog post, where I discuss how we can add the MVC source to our solution, allowing us to debug into it. With this in place I was able to step through the model binding code to watch and understand what was happening.

Model binding in principle is quite straightforward. It attempts to match up values coming in on the request to any properties expected by the parameters of the controller action. Each value is run through the model binding flow, which looks for a suitable binder to handle the value. To find a suitable binder, ASP.NET Core MVC uses binder providers. These providers are registered when MVC is initialised and by default include 14 different providers. Each of these implements the IModelBinderProvider interface.

There is a provider to handle key value pairs for example, another for complex types and another for simple types. These providers are registered in a specific order, and during binding MVC checks each provider in that order until it finds the first one which can supply a suitable binder for the object being bound. Each binder provider has some conditions that check the binding provider context. Once those conditions are met, the provider returns a binder that can handle the binding.

Once we have a model binder available, its BindModelAsync method is called. This method is expected to handle the value being bound and to produce a ModelBindingResult. If the binding succeeds as expected then a Success result is set, including the final value to be bound to the property. I hope to spend more time exploring the binding process in a future post; it’s a little beyond the scope here to explain the deeper details of how everything is hooked up. For now, let’s look at how we meet the requirement above to HTML encode values during the binding process.

Let’s assume we have the following action.
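The exact action is not shown here, but a simplified stand-in with a string parameter bound from the route, querystring or form data is enough to illustrate the point (the route and names are illustrative):

[HttpPost("UpdateTitle")]
public IActionResult UpdateTitle(string newTitle)
{
    // by the time we are inside the action, newTitle should already be HTML encoded
    return Ok(newTitle);
}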

We want to ensure that the newTitle property is HTML encoded by the time we can access it within the action.

Creating a ModelBinder

The starting point for our solution was to create a model binder based on the IModelBinder interface.
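The original listing is not included here, so the following is a minimal sketch of such a binder against the 1.0.x MVC API (I’ve used Task.CompletedTask in place of the internal TaskCache helper mentioned below):

using System;
using System.Text.Encodings.Web;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc.ModelBinding;

public class HtmlEncodeModelBinder : IModelBinder
{
    private readonly IModelBinder _fallbackBinder;

    public HtmlEncodeModelBinder(IModelBinder fallbackBinder)
    {
        if (fallbackBinder == null)
            throw new ArgumentNullException(nameof(fallbackBinder));

        _fallbackBinder = fallbackBinder;
    }

    public Task BindModelAsync(ModelBindingContext bindingContext)
    {
        if (bindingContext == null)
            throw new ArgumentNullException(nameof(bindingContext));

        var valueProviderResult = bindingContext.ValueProvider.GetValue(bindingContext.ModelName);

        if (valueProviderResult == ValueProviderResult.None)
        {
            return Task.CompletedTask; // nothing supplied for this model name
        }

        var value = valueProviderResult.FirstValue;

        if (string.IsNullOrEmpty(value))
        {
            // hand null/empty strings off to the fallback (the default simple type binder)
            return _fallbackBinder.BindModelAsync(bindingContext);
        }

        // encode the incoming string before it is bound onto the model
        var encodedValue = HtmlEncoder.Default.Encode(value);

        bindingContext.Result = ModelBindingResult.Success(encodedValue);
        return Task.CompletedTask;
    }
}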

We initialize the binder passing in an IModelBinder as a fallback. In my case I specifically expect to handle string values, but I don’t want to worry about handling null or empty strings. That is already covered by the default SimpleTypeModelBinder provided in MVC, so I chose to pass in a binder to which I can hand off responsibility in those cases.

Within the BindModelAsync method we first call the ValueProvider.GetValue method on the value provider in the bindingContext, passing in the model name, which returns a valueProviderResult. As long as there is a value in the result we first check if it’s null or empty. If so, this is where we call the fallback binder’s BindModelAsync method. If we do have a suitable value then we proceed to HTML encode it before creating a success ModelBindingResult. Finally we return a completed task using the internal helper TaskCache.CompletedTask.

Creating a ModelBinderProvider

Now that we have a model binder defined we need to create a provider which will determine if the model binder is suited to the object being bound. Here is the code…
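(Again a sketch along the lines described below rather than the original listing.)

using System;
using Microsoft.AspNetCore.Mvc.ModelBinding;
using Microsoft.AspNetCore.Mvc.ModelBinding.Binders;

public class HtmlEncodeModelBinderProvider : IModelBinderProvider
{
    public IModelBinder GetBinder(ModelBinderProviderContext context)
    {
        if (context == null)
            throw new ArgumentNullException(nameof(context));

        if (!context.Metadata.IsComplexType && context.Metadata.ModelType == typeof(string))
        {
            // wrap the default simple type binder so it can handle null/empty strings for us
            return new HtmlEncodeModelBinder(new SimpleTypeModelBinder(context.Metadata.ModelType));
        }

        return null;
    }
}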

This provider is pretty simple. We must implement the GetBinder method on the IModelBinderProvider interface. We use the Metadata on the ModelBinderProviderContext to determine if we can provide a suitable binder. This metadata includes some helper properties, such as the IsComplexType flag, to help us determine if we can provide binding for the object. In this case we are looking only for strings. If the object being bound is a string, then we return our custom HtmlEncodeModelBinder. Otherwise we return null. When we return null the next binding provider is given the chance to provide a binder, and this continues until a suitable binder is found. You’ll notice that we pass in a new SimpleTypeModelBinder which will act as our fallback for any null or empty string cases we encounter during binding.

Now that we have a binder and a provider, the final step is to add this provider to the list of binding providers. Since these are executed in a set order, we also need to place our provider in the right position. I’ve achieved this using the following extension to the MVC options.
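(The extension is not reproduced here, but the following sketch captures the approach; the extension method name UseHtmlEncodeModelBinding is my own.)

using System.Linq;
using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.Mvc.ModelBinding.Binders;

public static class MvcOptionsExtensions
{
    public static void UseHtmlEncodeModelBinding(this MvcOptions options)
    {
        // find the built-in simple type binder provider so we can slot in just before it
        var simpleTypeProvider = options.ModelBinderProviders
            .OfType<SimpleTypeModelBinderProvider>()
            .FirstOrDefault();

        if (simpleTypeProvider == null)
        {
            return; // provider list has been customised elsewhere; nothing to do
        }

        var index = options.ModelBinderProviders.IndexOf(simpleTypeProvider);
        options.ModelBinderProviders.Insert(index, new HtmlEncodeModelBinderProvider());
    }
}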

What we are doing here is using LINQ to find the SimpleTypeModelBinderProvider in the list of ModelBinderProviders. Our binder provider needs to run just before the simple type binder to ensure that we get the opportunity to handle string types with our HTML encoding logic. If we placed it after the SimpleTypeModelBinderProvider we would find that the binding flow never reached our code, as the binding would already have been handled. We then get the index of the SimpleTypeModelBinderProvider and use that index to insert our binder provider into the list at that position. Now, when MVC binding occurs, our binder provider will be part of the binding process. If we inspect the ModelBinderProviders while debugging it should now look like this:

ASP.NET Core ModelBinders List

You can see our new custom HtmlEncodeModelBinderProvider listed before the SimpleTypeModelBinderProvider.

Finally, with everything complete we can do the final wiring up. We call our options extension when adding MVC within the ConfigureServices method in Startup.cs.
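(Something along these lines, using the extension method name from the sketch above:)

public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc(options =>
    {
        options.UseHtmlEncodeModelBinding();
    });
}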

Now when the newTitle parameter is bound on our example action it will ensure that any HTML tags are safely encoded. As with all of my posts, I’ve taken the best approach I could think of to implement this. I welcome any comments and suggestions to improve this code or correct any mistakes!

Read More

R.I.P project.json – Out with the new, in with the old An early look at the VS 2017 csproj changes

Okay; the title of the post is a bit tongue in cheek! Since it was first announced that the new project.json format (developed by the ASP.NET Core team) was going to be retired in favour of the more traditional csproj file, the .NET community have had some very strong opinions. These have been shared on Twitter, in blog posts, on GitHub and via any other channels people could find. I will admit that when I heard about the change, I initially had some of the same views and concerns. Why would Microsoft take away my beloved project.json? However, I decided to wait it out and see what Microsoft actually produced before passing judgement.

A little bit of history

Let me take a step back here and try to explain a little bit of the history as I have understood it. In the early beta timeframe for ASP.NET Core (then called ASP.NET 5) the ASP.NET and .NET teams were working independently of one another. The .NET team were working heavily on the API surface for .NET Core, whilst the ASP.NET team were working on the ASP.NET Core application platform on top of the API the .NET team were making. The tooling to work with ASP.NET Core was in an early preview and at the time we had DNVM and DNU commands. These have since been consolidated and morphed to form the dotnet CLI.

In order to support development of ASP.NET projects, the team decided to take the opportunity to develop a brand new project file format. One which would be simpler and address many of the issues the community had experienced when working with csproj based projects. At the time, nothing existed specifically for .NET Core projects and it was felt that since ASP.NET Core represented a new beginning for the platform, it was as good a time as any to make changes. During the betas and release candidates people started to get their hands on the new project.json and xproj based solutions when building their web applications. The response was indeed very positive. project.json provided autocomplete of dependencies, ease of reading and a much simpler overall structure.

Personally, I was very pleased when project.json was born. I could easily understand it and it became a regular reference point to review my dependencies and manage my projects.

However, it was during the release candidate period that decisions started to get made around a wider .NET project format for the other application types. Customers were starting to share their concerns around project.json and the fact that it was not supported within MSBuild. What would this mean for large monolith projects, and how would they even begin to migrate them to .NET Core? And, being fair, these were valid concerns. Starting with greenfield ASP.NET Core projects was fantastic. There was no need to concern ourselves with the past and the new project.json was easy to get to grips with. But working with an existing project would not necessarily be a simple conversion and could involve some degree of work. So the announcements started to be made that project.json would be retired in favour of a more backwards compatible, MSBuild-ready csproj format. It was too late in the day for this work to be completed before the RTM of ASP.NET Core 1.0, but we were warned that it would be worked on for release alongside Visual Studio 2017.

And that brings us to today, with the announcement of the release candidate for Visual Studio 2017 we can now get our hands on projects using the new (or should I say old) csproj project format. I’ve taken an early peek at the file to see what has changed.

Comparing project.json (VS 2015) with csproj (VS 2017 RC)

Microsoft have assured us while working on the change that they expected to port many of the improvements that the project.json file gave developers over to the improved csproj format.

Comparing project.json and the new csproj

In the screenshot above (hard to see due to the size) I’ve opened a default new Web Application project in VS 2015 (left) and VS 2017 (right). Side by side, the obvious difference is that project.json is JSON and the csproj is XML. You will notice that the line length of the files is not too different: project.json comes in at 65 lines and the csproj comes in at 77 lines. That is already a significant reduction in lines from a traditional csproj file.

Here are the complete files:

The other thing which stands out to me is the readability. Despite the work Microsoft have done to reduce noise in the csproj file, I still find it much harder to scan than the project.json. The XML tags draw my eye away from the detail that I actually want to consume.

At the time of writing I’m still very new to working inside VS 2017 and indeed on my computer it’s proving quite buggy. For example, a solution that I created inside VS 2017 and which had been working fine (1 web application and 1 test class library) no longer re-opens after I saved and closed it. Suffice to say, I won’t be using Visual Studio 2017 for production work at this time and may even remove it entirely as it seems to be affecting my experience inside VS 2015 too.

One nice advantage of the csproj file over the project.json is that we can now include references to full (non-Core) .NET projects as well. Previously that integration was not available.

Something I do miss however (unless it’s just not working for me) is proper autocomplete of the packages and versions. With the project.json I could start typing the name of a dependency and Visual Studio would show me potential autocomplete options. I could also choose from the available versions. This seems to be absent in the csproj file. Personally I only saw basic IntelliSense for some, but not all, of the XML tags.

Migrating from project.json

Migration from project.json can be achieved in one of two ways. The easiest is to open the existing xproj/project.json project inside Visual Studio 2017. The IDE will detect the project format and prompt for a migration. The other option is to run the dotnet migrate command directly on the command line. This should result in the same conversion.

Migration from project.json

In a quick test for me, the VS migration seemed to work quite well, although that was a single default web application project with basic dependencies. In theory it should work just as well for larger projects. Perhaps when we come to migrate the allReady project which I contribute to, we’ll see if there’s more to it.

Having done this with the project.json solution created in VS 2015 (as appeared above) we get the following csproj output.

Inside the project.json we have our main dependencies section and inside that we define the package and version. This is pretty clear and easy to read. Inside the csproj each package reference is added inside an ItemGroup element. It’s more verbose than the project.json.
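As a rough illustration (the package name and version are just an example), the same dependency expressed in each format looks something like this. In project.json:

"dependencies": {
  "Microsoft.AspNetCore.Mvc": "1.0.1"
}

And in the csproj (depending on the tooling version, the version may be emitted as a child <Version> element rather than an attribute):

<ItemGroup>
  <PackageReference Include="Microsoft.AspNetCore.Mvc" Version="1.0.1" />
</ItemGroup>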

Much of the content is similar and when you compare the files you can generally find where each part has been migrated. However you’ll need to remember the more complex XML schema in order to edit the file manually. Indeed, Microsoft seem to be pushing us away from this in favour of IDE tooling and the command line. I fear though, that those are never going to be as quick to work with as we’ve been used to with a quick edit of the project.json file.

Differences between the original csproj and the new csproj

There are some other notable changes between a traditional csproj file and the newer csproj format. Firstly, they have gotten rid of the use of GUIDs within the file, so that it is much more human readable. That’s a good change and one I’m happy to see.

We are no longer required to include all of the files within the csproj file in order for them to be considered part of the project. project.json gave us a much more favourable, include by default, behaviour and this has now been made available inside csproj. It’s ever so slightly different as we must use a wildcard style include line to tell the project to behave this way. New projects created inside VS 2017 have this by default. When working with teams of developers, the csproj file was a common area of merge conflicts. We should no longer have so many issues since the project file will not change simply because we’ve added new files to the project.

Package references are now integrated into csproj so this single file also includes any libraries our project is dependent on and any projects we reference. It’s nice to have everything in one place as we’ve become used to with the project.json format.

Another important change is that we can now edit the csproj file whilst the project is loaded. This was not previously possible and meant that any manual changes were slow to make as we had to first unload the project. Granted, it was rare that I wanted to manually edit the file in the full .NET framework days. To edit the csproj file we can simply right click on the project without unloading it first.

Edit csproj inside VS 2017

One gripe at this point that I experienced is a warning for inconsistent line endings every time I edit the csproj file.

Inconsistent line endings

Other project differences in VS 2017

As well as the direct differences inside the file, there’s a few initial tooling differences I wanted to highlight between VS 2015 and VS 2017 that I’ve noticed.

The main one I’ve seen so far is the consolidated dependencies folder. Before, we had a “References” folder which included any dependent libraries we added via Nuget and any project references. We then had a “Dependencies” folder which included Bower dependencies. Inside Visual Studio 2017 these have been placed together inside a single “Dependencies” container. Project references have their own sub-container to separate them from the Nuget dependencies.

Dependencies folder in VS 2017

Initial Impressions

I definitely don’t find the XML as readable as the JSON for reviewing the configuration of my project. This will improve with time as I get used to it, but there’s still too much noise in the file for my liking. It’s certainly a big improvement on the traditional csproj format though and the wildcard include of files is an important enhancement.

Personally, I still much prefer the project.json format and whilst I understand the business case that led Microsoft to their change of course, I feel it’s a shame the MSBuild system couldn’t have moved with the times, rather than dropping back to the XML-based project file.

The lack of autocomplete for dependencies bothers me too. This was really handy and I could quickly build up my dependencies without having to open up the Nuget package manager each time. It feels like we’re now being forced down that route which proves to be a bit slower and less efficient.

It’s still very early days and I need to spend more time with the new version of csproj in a final stable version of VS 2017 before passing a final judgement. We also have to remember that the .NET SDK tooling is not quite fully baked yet either. At this early point I’m running into quite a few issues running VS 2017 so it’s hard to fully appreciate the final experience.

The Microsoft statements suggest that in reality the tooling and command line interface should negate the need to spend much, if any, time editing the csproj manually. Once everything is complete we will be in a better place to judge for ourselves. I can’t help but assume things will end up being slower overall and that I’ll end up missing the ease of the project.json file.

Update 23rd Nov 2016

After reviewing https://blogs.msdn.microsoft.com/dotnet/2016/10/19/net-core-tooling-in-visual-studio-15/ I can see that the format for the new csproj that I have experienced in VS 2017 RC doesn’t match the example of what I presume is the end goal. So it’s entirely possible that the structure will evolve even more before release. The example in the MSDN blog does look cleaner and much less cluttered so I will watch this space with interest.

Read More

Loading Pin on Bing Maps from ASP.NET Core MVC Data

As I’ve covered a few times previously in my blog I’m really enjoying working on the allReady project which is run by the Humanitarian Toolbox non-profit organisation. One of the great things about this project from a personal perspective is the chance to learn and develop my skills, whilst also contributing to a good cause.

Recently I picked up an issue which was not in my normal comfort zone, where the requirement was to load a Bing map showing a number of pins relating to request data coming from our MVC view model. I’m not very experienced with JavaScript and tend to avoid it whenever possible, but in this case I did need to use it to take data from my ASP.NET Core model to then populate the Bing maps SDK. As part of the requirement we needed to colour code the pins based on the status of the request.

My starting point was to have a look at the SDK documentation available at http://www.bing.com/api/maps/mapcontrol/isdk. After a bit of reading it looked possible to meet the requirement using the v8 SDK.

The first step was to update our Razor view to include a div where we wanted to display the map. In our case I had decided to include a full width map at the bottom of the page, so my containing div was as follows:

<div id="myMap" style="position:relative;width:100%;height:500px;"></div>

The next step was to include some JavaScript on the page to use the data from our view model to build up the pushpin locations to display on the map. Some of the existing map logic we have on the allReady project is stored inside a site.js file. This means we don’t need to include too much inline code on the page itself. In my case the final code was as follows:

@section scripts {
    <script type='text/javascript'
            src='https://www.bing.com/api/maps/mapcontrol?callback=GetMap'
            async defer></script>
    <script type='text/javascript'>
        function GetMap() {
            renderRequestsMap("myMap", getLocationsFromModelRequests());
        };

        function getLocationsFromModelRequests() {
            var requestData = [];
            @foreach (var request in Model.Requests){
                @:var reqData = {lat:@request.Latitude, long:@request.Longitude, name:'@request.Name', color:'blue'};

                if (request.Status == RequestStatus.Completed)
                {
                    @:reqData.color = 'green';
                            }

                if (request.Status == RequestStatus.Canceled)
                {
                    @:reqData.color = 'red';
                }

                @:requestData.push(reqData);
                        }
            return requestData;
        }
    </script>
}

This renders two script blocks inside our master _layout page’s scripts section which is rendered at the bottom of the page body. The first script block simply brings in the Bing map code. The second block builds up the data to pass to our renderRequestsMap function in our site.js code (which we’ll look at later).

The main function here is my getLocationsFromModelRequests code which creates an empty array to hold our requestData. I loop over the requests in our MVC view model and first load the latitude and longitude information into a JavaScript reqData object. We set a name which will be displayed under the pin using the name of the request from our Model. We also set the color on this object to blue which will be our default colour for pending requests when we draw our map.

I then update the color property for our two other possible request statuses. Green for completed requests and red for cancelled requests. With the object now complete we push it into the requestData array. This forms the data object we need to pass into our renderRequestsMap function in our site.js.

The relevant portions of the site.js that result in producing a final map are as follows:

var BingMapKey = "ENTER_YOUR_KEY_HERE";

var renderRequestsMap = function(divIdForMap, requestData) {
    if (requestData) {
        var bingMap = createBingMap(divIdForMap);
        addRequestPins(bingMap, requestData);
    }
}

function createBingMap(divIdForMap) {
    return new Microsoft.Maps.Map(
        document.getElementById(divIdForMap), {
        credentials: BingMapKey
    });
}

function addRequestPins(bingMap, requestData) {
    var locations = [];
    $.each(requestData, function (index, data) {
        var location = new Microsoft.Maps.Location(data.lat, data.long);
        locations.push(location);
        var order = index + 1;
        var pin = new Microsoft.Maps.Pushpin(location, { title: data.name, color: data.color, text: order.toString() });
        bingMap.entities.push(pin);        
    });
    var rect = Microsoft.Maps.LocationRect.fromLocations(locations);
    bingMap.setView({ bounds: rect, padding: 80 });
}

The renderRequestsMap function takes in the id of the containing div for the map and then our requestData array we built up in our Razor view. First it calls a small helper function which creates a bing map object targeting our supplied div id. We then pass the map object and the request data into addRequestPins.

addRequestPins creates an array to hold the location data which we build up by looping over each item in our request data. We create a Microsoft.Maps.Location object using the latitude and longitude and add that to the array (we’ll use this later). We then create a Microsoft.Maps.Pushpin which takes the location object and a pushpin options object. In the options we define the title for the pin and its colour. We also set the text for the pin to a numeric value which increments for each pin we’re adding. That way each pin has a number which corresponds to its position in our list of requests. With all of the pushpin data populated we can push the pin into the map’s entities array.

Once we’ve added all of the pins the final step is to define the view for the map so that it centers on and displays all of the pins we have added. I’ve achieved that here by defining a rectangle using the Microsoft.Maps.LocationRect.fromLocations helper function. We can then call setView on the map object, passing in that rectangle as the bounds value. We also include a padding value to ensure there is a little extra space around the outlying pins.

With these few sections of JavaScript code when we load our page the map is displayed with pins corresponding to the location of our requests. Here is what the final map looks like within our allReady application.

Resulting Bing Map

Read More

Debugging into ASP.NET Core Source Quick Tip - How to debug into the ASP.NET source code

Update 26-10-16: Thanks to eagle eyed reader Japawel who has commented below; it’s been pointed out that there is a release 1.0.1 tag that I’d missed when writing this post. Using that negated the two troubleshooting issues I had included at the end of this post. I’ve updated the tag name in this post but left the troubleshooting steps just in case.

As I spend more time with ASP.NET Core, reading the source code to learn about it, one thing I find myself doing quite often is debugging into the ASP.NET Core source. This makes it much easier to step through sections of the code to work out how they function internally. Now that Microsoft have gone open source with .NET Core and ASP.NET Core, the source is readily available on GitHub. In this short post I’m going to describe the steps that allow you to add ASP.NET Core source code to your projects.

When you work with an ASP.NET Core application project you will be adding references to ASP.NET components such as MVC in your project.json file. This will add those packages as dependencies to your project which get pulled down via Nuget. One cool feature of the current solution format is that we can easily provide a path to the full source code and VS will automatically add the relevant projects into your solution. Once this takes place it’s the code in those projects which will get executed, so you can now debug them as you would any other area of code in your application.

In this post we’ll cover bringing in the main MVC Core source into an ASP.NET Core application.

The first step is to go and get the source code from GitHub. Navigate to https://github.com/aspnet/Mvc and use your preferred method to pull down the source. I have GitHub Desktop installed so I click the “Open in Desktop” link and choose somewhere on my computer to clone the source into.

Clone MVC Core via GitHub Desktop

By default the master branch will be checked out, which will contain the most recent code added by the ASP.NET team. To be able to import the actual projects into our application we need to ensure the version of the code matches the version we are targeting in our project.json. At the time of writing, the most recent release version of Microsoft.AspNetCore.Mvc is 1.0.1. The second step therefore is to checkout the appropriate matching version. Fortunately this is quite simple as Microsoft tag the release versions in Git. Once the repository is cloned locally, open a terminal window at the location of the source. If we run the “git tag” command we will get a list of all available tags.

E:\Software Development\Projects\AspNet\Mvc [master ≡]> git tag
1.0.0
1.0.0-rc2
1.0.1
6.0.0-alpha2
6.0.0-alpha3
6.0.0-alpha4
6.0.0-beta1
6.0.0-beta2
6.0.0-beta3
6.0.0-beta4
6.0.0-beta5
6.0.0-beta6
6.0.0-beta7
6.0.0-beta8
6.0.0-rc1
rel/1.0.1
E:\Software Development\Projects\AspNet\Mvc [master ≡]>

We can see the tag we need listed, rel/1.0.1, so the next step is to check out that code using “git checkout rel/1.0.1”. You will get a warning about moving into a detached HEAD state from Git. This is because we are no longer directly on the end of a branch. This isn’t a problem since we are specifically interested in viewing the code from this point in the Git commit history.

Now that we have the source cloned locally and have the correct version checked out we can add the reference to the source into our project. Open up your ASP.NET Core application in Visual Studio. You should see a solution directory called Solution Items with a global.json file in it.

Standard MVC Core solution

The global.json defines some solution level tooling configuration. In a default ASP.NET Core application it should look like this:

{
  "projects": [ "src", "test" ],
  "sdk": {
    "version": "1.0.0-preview2-003131"
  }
}

What we will now do is add in the path to the MVC source we cloned down. Update the file by adding in the path to the array of projects. You will need to provide the path to the “src” folder and use double backslashes in your path.

{
  "projects": [ "src", "test", "E:\\Projects\\AspNet\\Mvc\\src" ],
  "sdk": {
    "version": "1.0.0-preview2-003131"
  }
}

Now save the global.json file and Visual Studio will pick up the change and begin a package restore. After a few moments you should see additional projects populate within the solution explorer.

Solution with MVC Core projects

You can now navigate the code in these files and, if you want, add breakpoints to debug into the code when you run your application in debug mode.

Troubleshooting Tips

As mentioned at the start of this post, these troubleshooting steps should no longer be necessary if you are using the rel/1.0.1 tag. In an earlier incorrect version of this post I had referenced pulling down the 1.0.1 tag which required the steps below to get things working.

While the above steps worked for me on one machine I had a couple of issues when following these steps on a fresh device.

Issue 1 – Package restore errors for the MVC solution

Despite MVC 1.0.1 being a released version I was surprised to find that I had trouble with restoring and building the MVC projects due to missing dependencies. When comparing my two computers I found that on one I had an additional source configured which seemed to be solving the problem for me on that device.

The error I was seeing was on various packages, but the most common was a dependency on Microsoft.Extensions.PropertyHelper.Sources 1.0.0. The exact error in my case was NU1001 The dependency Microsoft.Extensions.PropertyHelper.Sources >= 1.0.0-* could not be resolved.

To setup your machine with the source you can open the MVC solution and navigate to the NuGet Package Manager > Package Sources options. This can be found under the Tools > Nuget Package Manager > Package Manager Settings menu.

Add a new package source to the MyGet feed which contains the required libraries. In this case https://dotnet.myget.org/F/aspnetcore-master/api/v3/index.json worked for me.

Package Manager with MyGet feed

Issue 2 – Metadata file could not be found

I hadn’t had this issue in the past when bringing the ASP.NET Core Identity source into a project, but with the MVC code I was unable to build my application after adding in the source for MVC. I was getting the following error for each project included in my solution.

C:\Projects\WebApplication6\src\WebApplication6\error CS0006: Metadata file ‘C:\Projects\AspNetCore\Mvc\src\Microsoft.AspNetCore.Mvc\bin\Debug\netstandard1.6\Microsoft.AspNetCore.Mvc.dll’ could not be found

The reason for the issue is that the MVC projects are configured to build to the artifacts folder which sits next to the solution file. Our project is looking for them under the bin folder for each of the projects. This is a bit of a pain and the simplest solution I could find was to manually modify the output path for each of the projects being included into my solution.

Open up each .xproj file and modify the following line from

<OutputPath Condition="'$(OutputPath)'=='' ">..\..\artifacts\bin\</OutputPath>

to

<OutputPath Condition="'$(OutputPath)'=='' ">.\bin\</OutputPath>

This will ensure that when built, the DLLs will be placed in the path where our solution expects to find them.

Depending on what you’re importing this can mean changing quite a few files, 14 in the case of the MVC projects. If anyone knows a cleaner way to solve this problem, please let me know!

Read More

ASP.NET MVC Core: HTML Encoding a JSON Request Body How to HTML encode deserialized JSON content from a request body

On a recent REST API project built using ASP.NET Core 1.0 we wanted to add some extra security around the inputs we were accepting, specifically around JSON data being sent in the body of POST requests. The requirement was to ensure that we HTML encode any of the deserialized properties to prevent an API client sending in HTML and script tags which would then be stored in the database. Whilst we HTML escape all JSON we output as standard, we wanted to close off this extra vector since we never expect to accept HTML data.

Initially I assumed this would be something simple to configure, but the actual solution proved a little more fiddly than I first expected. A lot of Googling did not yield many examples of similar requirements. The final code may not be the best way to achieve the goal, but it seems to work and is the best we could come up with. Feel free to send any improved ideas through!

Defining the Requirement

As I said above, we wanted to ensure that any strings that JSON.NET deserializes from our request body into our model classes do not get bound with any un-encoded HTML or script tags in their values. The requirement was to prevent this by default and to enable it globally, so that other developers do not have to explicitly remember to set attributes, apply any code on the model or write any validators to encode the data. Any manual steps or rules like that can easily get forgotten, so we wanted a locked-down-by-default approach. The expected outcome was that the values of the properties on any deserialized model are sanitised and HTML encoded as soon as we get access to the objects in the controllers.

For example the following JSON body should result in the title property being encoded once bound onto our model.

{
	"Title" : "<script>Something Nasty</script>",
	"Description" : "A long description.",
	"Reference": "REF12345"
}

The resulting title once deserialized and bound should be “&lt;script&gt;Something Nasty&lt;/script&gt;”.

Here’s an example of a Controller and input model that might be bound up to such data which I’ll use during this blog post.

public class Thing
{
	public string Title { get; set; }
	public string Description { get; set; }
	public string Reference { get; set; }
}

[HttpPost("CreateThing")]
public IActionResult CreateThing([FromBody] Thing theThing)
{
	// Save the thing to the database here, then return a suitable result
	return Ok();
}

In the case of the default binding and JSON deserialization if we debug and break within the CreateThing method the Title property will have a value of “<script>Something Nasty</script>”. If we save this directly into our database we risk someone later consuming this and potentially rendering it out directly into a browser. While the risk is pretty edge case we wanted to cover it off.

Defining a Custom JSON ContractResolver

The first stage in the solution was to define a custom JSON.net contract resolver. This would allow us to override the CreateProperties method and apply HTML encoding to any string properties.

A simplified example of the final contract resolver looks like this:

using Newtonsoft.Json;
using Newtonsoft.Json.Serialization;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;
using System.Text.Encodings.Web;

namespace JsonEncodingExample
{    
    public class HtmlEncodeContractResolver : DefaultContractResolver
    {
        protected override IList<JsonProperty> CreateProperties(Type type, MemberSerialization memberSerialization)
        {
            var properties = base.CreateProperties(type, memberSerialization);

            foreach (var property in properties.Where(p => p.PropertyType == typeof(string)))
            {
                var propertyInfo = type.GetProperty(property.UnderlyingName);
                if (propertyInfo != null)
                {
                    property.ValueProvider = new HtmlEncodingValueProvider(propertyInfo);
                }
            }

            return properties;
        }

        protected class HtmlEncodingValueProvider : IValueProvider
        {
            private readonly PropertyInfo _targetProperty;

            public HtmlEncodingValueProvider(PropertyInfo targetProperty)
            {
                _targetProperty = targetProperty;
            }
            
            public void SetValue(object target, object value)
            {
                var s = value as string;
                if (s != null)
                {
                    var encodedString = HtmlEncoder.Default.Encode(s);
                    _targetProperty.SetValue(target, encodedString);
                }
                else
                {
                    // Shouldn't get here as we checked for string properties before setting this value provider
                    _targetProperty.SetValue(target, value);
                }
            }

            public object GetValue(object target)
            {
                return _targetProperty.GetValue(target);
            }
        }
    }
}

Stepping through what this does – in our override of CreateProperties we first call CreateProperties on the base DefaultContractResolver, which returns a list of the JsonProperties. Without going too deep into the internals of JSON.NET, this essentially checks all of the serializable members on the object being deserialized/serialized and adds a JsonProperty for each of them to a list.

We then iterate over the properties ourselves, looking for any where the type is a string. We use reflection to get the PropertyInfo for the property and then set a custom IValueProvider for it.

Our HtmlEncodingValueProvider implements IValueProvider and allows us to manipulate the value before it is set or retrieved. In this example we use the default HtmlEncoder to encode the value when it is set.

Wiring Up The Contract Resolver

With the above in place we need to set things up so that when our JSON request body is deserialized it is done using our HtmlEncodeContractResolver instead of the default one. This is where things get a little complicated and while the approach below works, I appreciate there may be a better/easier way to do this.

In our case we specifically wanted the HtmlEncodeContractResolver to be used only for deserialization of JSON from a request body, and not for any JSON deserialization that occurs elsewhere in the application. The way I found to do this was to replace the JsonInputFormatter which MVC uses to handle incoming JSON with a version using our new HtmlEncodeContractResolver. We can do this via the MvcOptions that are accessible when adding the MVC service in our ConfigureServices method.

Rather than heap all of the code into the startup class, we chose to create a small extension method for the MvcOptions class which would wrap the logic we needed. This is how our extension class ended up.

public static class MvcOptionsExtensions
{
    public static void UseHtmlEncodeJsonInputFormatter(this MvcOptions opts, ILogger<MvcOptions> logger, ObjectPoolProvider objectPoolProvider)
    {
        opts.InputFormatters.RemoveType<JsonInputFormatter>();

        var serializerSettings = new JsonSerializerSettings
        {
            ContractResolver = new HtmlEncodeContractResolver()
        };

        var jsonInputFormatter = new JsonInputFormatter(logger, serializerSettings, ArrayPool<char>.Shared, objectPoolProvider);

        opts.InputFormatters.Add(jsonInputFormatter);
    }
}

First we remove the input formatter of type JsonInputFormatter from the available InputFormatters. We then create new JSON.NET serializer settings, with the ContractResolver set to our HtmlEncodeContractResolver. We can then create a new JsonInputFormatter; among its constructor arguments are an ILogger and an ObjectPoolProvider, both of which are passed in when we use this extension method.

Finally with the new JsonInputFormatter created we can add it to the InputFormatters on the MvcOptions. MVC will then use this when it gets some JSON data on the body and we’ll see that our properties are nicely encoded.

To wire this up in the application we need to use our options extension from the ConfigureServices method as follows…

public void ConfigureServices(IServiceCollection services)
{
    var sp = services.BuildServiceProvider();
    var logger = sp.GetService<ILoggerFactory>();
    var objectPoolProvider = sp.GetService<ObjectPoolProvider>();

    services
        .AddMvc(options =>
            {
                options.UseHtmlEncodeJsonInputFormatter(logger.CreateLogger<MvcOptions>(), objectPoolProvider);
            });
}

As we now have an extension method it’s quite easy to call this from the AddMvc options. It does however require those two dependencies. As we are in the ConfigureServices method our DI is not fully wired up, so the best approach I could find was to create an intermediate ServiceProvider which allows us to resolve the types currently registered. With access to the ServiceProvider I can ask it for a suitable ILoggerFactory and ObjectPoolProvider.

Update: Andrew Lock has written a follow-up post which describes a cleaner way to configure these dependencies. I recommend you check it out.

I use the LoggerFactory to create a new ILogger and pass the ObjectPoolProvider in directly.

If we run our application and send in the demo JSON object I defined earlier, the Title property on the resulting Thing class will be “&lt;script&gt;Something Nasty&lt;/script&gt;”.
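If you want to sanity-check the contract resolver in isolation, without running the full MVC pipeline, you can exercise it directly through JSON.NET. This is a rough, purely illustrative snippet rather than part of our application code:

var settings = new JsonSerializerSettings
{
    ContractResolver = new HtmlEncodeContractResolver()
};

var json = "{ \"Title\": \"<script>Something Nasty</script>\", \"Description\": \"A long description.\", \"Reference\": \"REF12345\" }";

var thing = JsonConvert.DeserializeObject<Thing>(json, settings);

// thing.Title is now "&lt;script&gt;Something Nasty&lt;/script&gt;"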

Summary

Getting to the above code took a little trial and error since I couldn’t find any official documentation suggesting approaches for our requirement in the ASP.NET Core docs. I suspect in reality our requirement is fairly rare, but for a small amount of work it does seem reasonable to sanitise JSON input to encode HTML when you never expect to receive it. By escaping everything on the way out, the chances of our API returning un-encoded, un-escaped data were very low indeed. But we don’t know what another consumer of the data may do in the future. By storing the data HTML encoded in the database we have hopefully avoided that risk.

If there is a better way to modify the JsonInputFormatter and wire it up, I’d love to know. We took the best approach we could find at the time but accept there could be intended methods or flows we could have used instead. Hopefully this was useful and even if your requirement differs, perhaps you will have requirements where overriding an input or output formatter might be a solution.


Exploring Entity Framework Core 1.0.0 RTM Changes Understanding a breaking change in the update method behaviour between RC1 and RTM

It’s been a while since my last post but finally I’ve found some time to get this one together, albeit a shorter post this time around.

Outside of my day job, when time permits I like to code for the open source charity project, allReady. This ASP.NET Core web application has been developed during the betas of .NET Core through to RC1. Recently, with the help of the very knowledgeable Shawn Wildermuth, the project has been upgraded to run against the final 1.0.0 RTM version of .NET Core.

In this post I’m going to talk about one specific change in Entity Framework Core 1.0.0 between RC1 and RTM which caused some breaks in our code.

Before diving into the issue, I need to briefly explain the structure of our code. We have been working to move a lot of our database logic in allReady into Mediatr handlers. This has proven to be a great way to separate the concerns and split up the logic. Our controllers can send messages (commands or queries) via Mediatr to perform actions against the database. The controllers have no dependencies on the database layers and are therefore nice and slim. If you want to read more about how we’ve used this pattern, I covered Mediatr in my previous blog post. For this post, we’ll be looking at the code in a particular handler. The code we’re looking at is not handler specific; I point this out just in case the class seems a little confusing as to where it fits into our project. We’ll focus in on a few specific lines of code within the handler.

In a number of places within our code we need to handle the creation or update of a record stored in the database. For example, we have the concept of Itineraries. In allReady an itinerary represents a series of work items (requests) that are grouped together in order to be worked on by volunteers.

In our .NET Core RC1 code base we had the following handler:

public class EditItineraryCommandHandlerAsync : IAsyncRequestHandler<EditItineraryCommand, int>
{
	private readonly AllReadyContext _context;

	public EditItineraryCommandHandlerAsync(AllReadyContext context)
	{
		_context = context;
	}

	public async Task<int> Handle(EditItineraryCommand message)
	{
		try
		{
			var itinerary = await GetItinerary(message) ?? new Itinerary();

			itinerary.Name = message.Itinerary.Name;
			itinerary.Date = message.Itinerary.Date;
			itinerary.EventId = message.Itinerary.EventId;

			_context.Update(itinerary);
			await _context.SaveChangesAsync().ConfigureAwait(false);

			return itinerary.Id;
		}
		catch (Exception)
		{
			// There was an error somewhere
			return 0;
		}
	}

	private async Task<Itinerary> GetItinerary(EditItineraryCommand message)
	{
		return await _context.Itineraries
			.SingleOrDefaultAsync(c => c.Id == message.Itinerary.Id)
			.ConfigureAwait(false);
	}
}

This handler is called from both the create and edit POST actions on our itinerary controller and is intended to handle both scenarios. Within the Handle method we first try to retrieve an existing itinerary based on the Id of the itinerary object being passed in as part of our message. If this does not return an existing itinerary we null coalesce and create a new empty Itinerary object. We then set the properties of our itinerary object based on those coming in via the message (populated by the user in the front end admin page). Then we call Update on the EF context, passing in the Itinerary object and finally call SaveChangesAsync to apply the changes to the database.

This is where things broke for us after beginning to use the RTM version of the EF Core library. In RC1 and prior, the Update method would check the value of the key property on the model, and if it determined that this couldn’t be an existing record (i.e. an Id of zero in our case), it marked the object as Added in the DbContext change tracking; otherwise it was set as Modified.

Between RC1 and RTM, the Entity Framework team have tightened up on the behaviour of the Update method and made it perform a little more rigidly. It now only performs the action implied by its name. Any object passed in will be marked as modified, even those objects with an Id of zero. It’s up to the caller to call this method correctly.

The resulting exception thrown when calling SaveChangesAsync (after adding a new record) when running against RTM code is…

Database operation expected to affect 1 row(s) but actually affected 0 row(s). Data may have been modified or deleted since entities were loaded. See http://go.microsoft.com/fwlink/?LinkId=527962 for information on understanding and handling optimistic concurrency exceptions.

Essentially this tells us that we sent in an object marked as modified and EF therefore expected to get a count of 1 row being modified against the database. However, since the id on our new object is zero (this is a new record), it won’t match any existing records in the database and as such, no records are actually updated.

That explains the break we experienced. On reflection, the change makes sense as it avoids any assumptions being made by EF about our intentions. We’re calling update, so it marks the object as modified. We’re expected to use the Add method for new objects.

So, with the problem understood, what do we do about it? There were a number of possible options that I considered when putting in a fix for this issue. I won’t go into great detail here since ultimately I was directed to a very sensible and simple option which I’ll share in a few minutes, but at a high level we could have…

  1. Moved from a single shared handler to two separate handlers, one specifically for creating and one specifically for editing an Itinerary. In that case each handler would know whether to call either Add or Update explicitly. Note that Update in this case would not need to be called since the object is already tracked by the context after we get it from the database – more on that later!
  2. Added some logic within our own code to check if the Id is zero and if so, assume we want to Add the object to the context instead of Update (a rough sketch of this follows the list).
  3. Utilised a context extension we have in the project which tries to determine whether to call Add or whether to call Update based on the EntityState of the object. This is similar to option 2, but would allow shared use of similar logic.
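For illustration only, option 2 would have looked something along these lines inside the existing handler. This is a sketch of the idea rather than code we actually shipped:

var itinerary = await GetItinerary(message) ?? new Itinerary();

itinerary.Name = message.Itinerary.Name;
itinerary.Date = message.Itinerary.Date;
itinerary.EventId = message.Itinerary.EventId;

if (itinerary.Id == 0)
{
	_context.Add(itinerary);
}
else
{
	// The object is already tracked after the query above, so this call is redundant in practice
	_context.Update(itinerary);
}

await _context.SaveChangesAsync().ConfigureAwait(false);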

All three would have worked to some extent, although not without some possible further issues we’d have needed to address. However, after opening an issue on the EF Core GitHub repository to try to understand the change, Arthur Vickers suggested a much cleaner solution for our case.

Arthur proposed the following replacement code:

var itinerary = await GetItinerary(message) ?? _context.Add(new Itinerary()).Entity;

itinerary.Name = message.Itinerary.Name;
itinerary.Date = message.Itinerary.Date;
itinerary.EventId = message.Itinerary.EventId;

await _context.SaveChangesAsync().ConfigureAwait(false);

It’s a small but elegant change which actually only touches two lines.

First change:

var itinerary = await GetItinerary(message) ?? _context.Add(new Itinerary()).Entity;

What this now does is first try to get an itinerary as we had before. If we have this then the itinerary variable is set and we can change the values of the properties as required. It’s important to realise at this point that since we queried the db for this object, it’s now already being tracked by the context. As such, if we adjust properties on the object, EF will detect these changes and mark them as modified. We therefore would not need to call Update which has a different intended use case.

If we don’t get an existing record back from the query, then we’re working with a new record. In that case, the code above adds a new empty itinerary object to the context. Add returns an EntityEntry and we can use the .Entity property of that to return the actual entity object (an Itinerary). It is assigned to our local variable and we can then set its properties. Since the Add method on the context has already been called it is already being tracked by the context with an EntityState of added.

Second change:

We can remove the _context.Update(itinerary); line entirely. Since we now have a correctly tracked entity in the context after our first line (either modified or added) we don’t need to try and attach it at this stage. We have re-ordered the logic a little which makes things simpler and cleaner. We can just call SaveChangesAsync() which will send SQL commands to add or update as necessary, based on the change tracking information.

In Summary

This issue highlighted for me personally that I still need to think carefully about how EF works under the covers. I’ve tried to read a lot on EF Core and feel I have a better understanding of how it works at a medium-to-high level. In this case, our code took advantage of behaviour in EF RC1 which was in reality hiding a bit of an issue in our code. I don’t think the code was “bad” exactly, just that, as we’ve explored, with a bit of thinking about the change tracking behaviour we could improve it. At the time we wrote the original code, using Update for both the add and edit scenarios was valid, although perhaps a little naïve. We relied on EF correctly assessing our intention and marking the object with the correct state.

When working with EF I think it’s important to have a basic understanding of how the change tracking works and what it does for us. If we query for a record via the context, then that record starts being tracked. We don’t need to explicitly call Update since the context is already aware of the object and the change tracker can manage any modified properties during SaveChanges.

Next steps

There is certainly more for me to personally learn about EF and its API in general. For example, in this case I learned about the Entity property that EF exposes on an EntityEntry. Beyond the basics, EF Core exposes many ways to manage the tracking of entities, and those do warrant exploration and experimentation to find the right performance vs complexity balance for each scenario.

The above code still has room for improvement as well. One thing that stands out is that we are performing a db query to get an object, in order to update it and save it. This is slightly inefficient in our case. When building the edit page, we’ve already queried for the object to set the form fields in the UI. On our post we’re then querying again, purely to attach the object in its current state to the context. A pattern I’ve started using elsewhere for a more performant update is to manually attach an object and mark its properties as modified, without the need to query it first. In this case it may be unnecessarily complex just to remove a pretty light db query, but as always, it’s worth considering.
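To make that pattern concrete, here’s a rough sketch of the attach-and-mark approach using the same Itinerary entity. It’s illustrative only; which properties you mark as modified depends on the scenario:

var itinerary = new Itinerary
{
	Id = message.Itinerary.Id,
	Name = message.Itinerary.Name,
	Date = message.Itinerary.Date,
	EventId = message.Itinerary.EventId
};

// Attach starts tracking the entity in the Unchanged state, without a query to the database
_context.Attach(itinerary);

// Explicitly flag only the properties we know may have changed
_context.Entry(itinerary).Property(x => x.Name).IsModified = true;
_context.Entry(itinerary).Property(x => x.Date).IsModified = true;
_context.Entry(itinerary).Property(x => x.EventId).IsModified = true;

await _context.SaveChangesAsync().ConfigureAwait(false);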

My thanks go out to Arthur Vickers for his response to my EF issue. It’s extremely helpful being able to reach out to the team directly as we all learn the nuances of the changes in the .NET Core libraries.


CQRS with Mediatr and ASP.NET Core Implementing basic CQRS with ASP.NET Core

I was first introduced to the Mediatr library when I started contributing to the allReady project. It is now being used quite extensively within that application. It has proven to be very useful in decoupling code and separating the concerns. Contributors to the project have recently worked through a good chunk of the codebase and moved many database commands and queries over to the Mediatr request/response pattern. This is allowing us to move away from a large data access wrapper to multiple handlers that clearly handle one function and which are much easier to maintain. This has led to smaller, more testable classes and made the code easier to read as a result.

CQRS Overview

Before going into Mediatr specifically I feel it’s worth briefly talking about Command Query Responsibility Segregation, or CQRS for short. CQRS is a pattern that seeks to separate the code and models which perform query logic from the code and models which perform commands such as an insert or update. In each case the model defining the input and output usually differs. Separating the commands and queries allows the input/output models to be more focused on the specific task they are performing. This makes testing the models simpler since they are less generalised and are therefore not bloated with additional code.

Rather than returning an entire database model, a query response model will usually contain only a subset of a table’s fields and possibly data from many related objects, all needed to form a particular view. The input model for a query may be very small. Commands, on the other hand, will usually require larger input models which more closely map to a full database table, and have slimmer response models. Commands may perform some business logic on the properties in order to validate the object before saving it into a database. By contrast, the models used for a query will generally contain less business logic.
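As a purely hypothetical illustration (these types are not from allReady), the query side might take a tiny input and return a view-shaped model, while the command side carries a fuller set of fields to persist:

using System;

// Query side: minimal input, response shaped for a single view
public class EventSummaryQuery
{
	public int EventId { get; set; }
}

public class EventSummaryViewModel
{
	public string Name { get; set; }
	public string OrganizerName { get; set; }
	public int VolunteerCount { get; set; }
}

// Command side: a larger input model that maps closely to the table being updated
public class EditEventCommand
{
	public int EventId { get; set; }
	public string Name { get; set; }
	public string Description { get; set; }
	public DateTime StartDate { get; set; }
	public DateTime EndDate { get; set; }
}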

As with any pattern, there are pros and cons to consider. Some may feel that the complexity added by having to manage different models may outweigh the benefits of separating them. Also, as with all patterns, the concept can be taken too far and start to become a burden on productivity and readability of the code. Therefore the degree to which one uses the CQRS pattern should be governed by each use case. If it’s not providing value, then don’t use it!

Coming back to the allReady project: the approach taken there has been to separate the querying of data used to build the view models from the commands used to update the database. Queries occur far more often than commands, as each page load will need to build up a view model, often with calls to the database to pull in relevant data. By keeping the queries distinct from the commands we can manage the exact shape of the input as well as the size of the data being returned. Queries need to perform quickly since they have a direct effect on user experience and page load times. Keeping the models as slim as possible and only querying for the required database columns can help the overall performance.
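For example (again illustrative, reusing the hypothetical view model from above, with made-up entities and navigation properties), projecting straight into a slim view model helps keep the query to only the data that view actually needs:

var summary = await _context.Events
	.Where(e => e.Id == eventId)
	.Select(e => new EventSummaryViewModel
	{
		Name = e.Name,
		OrganizerName = e.Organizer.Name,
		VolunteerCount = e.Volunteers.Count
	})
	.SingleOrDefaultAsync();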

Back to Mediatr

The Mediatr library provides us with a messaging solution and is a nice fit to help us introduce some concepts from the CQRS pattern into our code. In allReady it has allowed the team to greatly simplify the controllers; in many cases they now have a single dependency on Mediatr, which is injected by the built-in ASP.NET Core dependency injection. The MVC actions use Mediatr to send messages for the data they need to populate the view (queries) or to perform actions that update the database (commands).

Mediatr has the concept of handlers which are responsible for dealing with a query or command message. A handler is setup to handle a particular message which will contain the input needed for the command or query. A query message will usually need only a few properties, perhaps just an id of the object to query for. A command message may contain a more complete object with all of the model’s properties that need to be updated by the handler.

Using Mediatr with ASP.NET Core

Using Mediatr in an ASP.NET Core project is pretty straightforward. There are a couple of steps required in order to set things up.

Firstly we need to bring in the Mediatr package from Nuget. The quickest way is to use the package manager console by issuing the command “Install-Package MediatR”. At the time of writing the current version is 2.0.2.

Now that we have Mediatr added to our project we need to register its classes with the ASP.NET Core Dependency Injection (DI) container. The exact way you do this will depend on which DI container you are using. I’m going to show how I’ve got it working in ASP.NET Core with the default container. I ended up pretty much following a great Gist that I found. It got me started with registering Mediatr and its delegate factories, so all credit to the author.

Within the Startup.cs class ConfigureServices method I added the following code to register Mediatr.

services.AddScoped<IMediator, Mediator>();
services.AddTransient<SingleInstanceFactory>(sp => t => sp.GetService(t));
services.AddTransient<MultiInstanceFactory>(sp => t => sp.GetServices(t));
services.AddMediatorHandlers(typeof(Startup).GetTypeInfo().Assembly);

First I add the Mediatr component itself. There are also two delegate types for the Mediatr factories which must be registered. The final line calls an extension method which will look through the assembly and ensure that any class which is a type of IRequestHandler or IAsyncRequestHandler is registered. By reflecting through the assembly in this way we avoid having to manually map each handler in DI when we create it.

public static class MediatorExtensions
{
	public static IServiceCollection AddMediatorHandlers(this IServiceCollection services, Assembly assembly)
	{
		var classTypes = assembly.ExportedTypes.Select(t => t.GetTypeInfo()).Where(t => t.IsClass && !t.IsAbstract);

		foreach (var type in classTypes)
		{
			var interfaces = type.ImplementedInterfaces.Select(i => i.GetTypeInfo());

			foreach (var handlerType in interfaces.Where(i => i.IsGenericType && i.GetGenericTypeDefinition() == typeof(IRequestHandler<,>)))
			{
				services.AddTransient(handlerType.AsType(), type.AsType());
			}

			foreach (var handlerType in interfaces.Where(i => i.IsGenericType && i.GetGenericTypeDefinition() == typeof(IAsyncRequestHandler<,>)))
			{
				services.AddTransient(handlerType.AsType(), type.AsType());
			}
		}

		return services;
	}
}

The AddMediatorHandlers method first finds all class types in the assembly. It loops through each class and gets its interfaces. If any of the interfaces are an IRequestHandler or IAsyncRequestHandler then we add a transient mapping to the services collection.

If you need further details or samples for registering Mediatr with a different DI container I recommend you check out the wiki on Github which contains some setup guidance and links to samples.

Messages and Handlers

The pattern we’ve employed in allReady is to use the Mediatr handlers to return ViewModels needed by our actions. An action will send a message of the correct type to the Mediatr instance and expect a ViewModel in return. All of the logic to handle the DB queries which fetch the data needed to build up the view model are contained within the handler. We also use Mediatr to issue and handle commands for HTTP post/put/delete request actions. These actions will often need to update a record in the database. We send the created/updated object in the message and a handler picks it up, processes it and returns a success or failure result back to the action.

You can also chain Mediatr handlers by having a handler send out its own message, which allows you to compose queries to get the data you need. For example, if you have a handler which reads a user record from a database, the same user model may be needed as part of multiple view models. Rather than code the same database query each time within each handler, you can place your data access query inside a single handler. This handler can then return the user data to any other handler which sends a message for the user data. This allows us to adhere to the don’t repeat yourself principle by writing the code and logic only once. We can also test that logic to ensure that it works as expected and be confident that, as everyone uses it, they can expect consistent responses.
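Jumping ahead slightly to the handler syntax shown below, a chained handler is nothing special; it simply takes its own IMediator dependency and sends a sub-query. The UserProfileQuery and UserProfileViewModel types here are made up for illustration, and UserQuery is the request message defined in the next section:

public class UserProfileQuery : IAsyncRequest<UserProfileViewModel>
{
	public int UserId { get; set; }
}

public class UserProfileViewModel
{
	public string DisplayName { get; set; }
}

public class UserProfileQueryHandlerAsync : IAsyncRequestHandler<UserProfileQuery, UserProfileViewModel>
{
	private readonly IMediator _mediator;

	public UserProfileQueryHandlerAsync(IMediator mediator)
	{
		_mediator = mediator;
	}

	public async Task<UserProfileViewModel> Handle(UserProfileQuery message)
	{
		// Reuse the single handler that knows how to load a user, rather than duplicating that query here
		var user = await _mediator.SendAsync(new UserQuery { Id = message.UserId });

		return new UserProfileViewModel
		{
			DisplayName = $"{user.Forename} {user.Surname}"
		};
	}
}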

To create a request message in Mediatr you create a basic class marked as an implementation of the IRequest or IAsyncRequest interface. I try to use async methods for everything I do in ASP.NET Core so I’ll stick to async examples in this post. You can optionally specify the return type you expect from the handler. An async handler will return that object wrapped in a task which can be awaited.

Your message class will define all of the properties expected to be in the message. Here is an example of a basic message which will send an Id out and which expects the response from the handler to be a UserViewModel.

public class UserQuery : IAsyncRequest<UserViewModel>
{
	public int Id { get; set; }
}

With a request message defined we can now go ahead and create a handler that will respond to any messages of that type. We need to make our class implement the IRequestHandler or in my case IAsyncRequestHandler interface, defining the input and output types.

public class UserQueryHandlerAsync : IAsyncRequestHandler<UserQuery, UserViewModel>
{
    public async Task<UserViewModel> Handle(UserQuery message)
    {
        // Could query a db here and get the columns we need.
        
        var viewModel = new UserViewModel();
        viewModel.UserId = 100;
        viewModel.Username = "sgordon";
        viewModel.Forename = "Steve";
        viewModel.Surname = "Gordon";

        return viewModel;
    }
}

This interface defines a single method named Handle which returns a Task of your output type. This expects your request message object as its parameter.

In my example I’m simply newing up a UserViewModel object, setting its properties and returning it. In the real world this would be where I query the database using Entity Framework and build up my view model from the resulting data.

I personally have been in the habit of keeping my request message and my response handler classes together in the same physical .cs file, but you can split them if you prefer. I’m normally keen on keeping one class to one file, but in this case since the two classes are very interrelated I’ve found it quicker to work when I can see both in the same file.

We now have everything wired up so finally it’s time to send a message from our controller.

public class UsersController : Controller
{
    private readonly IMediator _mediator;

    public UsersController(IMediator mediator)
    {
        if (mediator == null)
            throw new ArgumentNullException(nameof(mediator));

        _mediator = mediator;
    }

    [HttpGet]
    [Route("users/{userId}")]
    public async Task<IActionResult> UserDetails(int userId)
    {
        UserViewModel model = await _mediator.SendAsync(new UserQuery { Id = userId });

        if (model == null)
            return HttpNotFound();

        return View(model);
    }
}

The key thing to highlight here is the controller’s constructor accepting an IMediator object. This will be injected by the ASP.NET Core DI when the application runs. What’s very useful is that we can easily mock an IMediator and its response, which makes testing a breeze.

The UserDetails action itself expects a user id when it is called. This id gets bound from the route parameter by MVC.

The key line in the code above is where we send the mediator message. We do this by calling SendAsync on the IMediator object. We send a UserQuery object with the Id property set. This message will now be managed by Mediatr. It will locate the suitable handler, pass it the request message and return the response to our action.

As you can see, this has made our controller very light. The only code left is a basic check to return an appropriate not found response if the response to our Mediatr request is null. That won’t ever be true in my example, but in a real world app, if the database doesn’t find an object with the id provided, I return null instead of a UserViewModel. This is exactly how I like a controller to be; its single responsibility is to send the client an HTTP response of some kind to the user’s request. It doesn’t and shouldn’t need to know about our database or have any concerns with building up its view model directly.

Testing

Being good citizens we should always consider the testing process. Testing when using Mediatr and a CQRS style pattern is very simple. My approach has been to ensure that each handler has appropriate unit tests around the Handle method, testing the logic within. To do this we can new up a Mediatr handler in our test class, call the Handle method directly and run tests on the returned object to verify the result.

[Fact]
public async Task HandlerReturnsCorrectUserViewModel()
{ 
    var sut = new UserQueryHandlerAsync();
    var result = await sut.Handle(new UserQuery { Id = 100 });

    Assert.NotNull(result);
    Assert.Equal("Steve", result.Forename);
}

This is a bit of a contrived example, especially as my handler example really doesn’t perform any logic. However, we can test for whatever is necessary on the returned result. You can check out the allReady code on GitHub to see some real examples of tests around the handlers used there. In those cases we often use an in-memory Entity Framework DbContext so that we can test that the handler’s EF query returns the expected data from a known set of test data.
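As a rough sketch of that style of test, the context, entity and handler constructor below are hypothetical; assume a database-backed version of the earlier UserQueryHandlerAsync that takes the context as a dependency, and note that the exact UseInMemoryDatabase overload varies between EF Core versions:

var options = new DbContextOptionsBuilder<AppDbContext>()
	.UseInMemoryDatabase("HandlerTests")
	.Options;

using (var context = new AppDbContext(options))
{
	// Seed a known set of test data
	context.Users.Add(new User { Id = 100, Forename = "Steve", Surname = "Gordon" });
	context.SaveChanges();
}

using (var context = new AppDbContext(options))
{
	var sut = new UserQueryHandlerAsync(context);

	var result = await sut.Handle(new UserQuery { Id = 100 });

	Assert.Equal("Steve", result.Forename);
}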

We can also test the controllers very easily by passing in a mock of the IMediator.

[Fact]
public async Task UserDetails_SendsQueryWithTheCorrectUserId()
{
    const int userId = 1;
    var mediator = new Mock<IMediator>();
    mediator.Setup(x => x.SendAsync(It.IsAny<UserQuery>())).ReturnsAsync(new UserViewModel());
    var sut = new UsersController(mediator.Object);

    await sut.UserDetails(userId);

    mediator.Verify(x => x.SendAsync(It.Is<UserQuery>(y => y.Id == userId)), Times.Once);
}

We create a mock IMediator using Moq and pass that in when instantiating the controller. Here I’ve called the UserDetails action with an Id and verified that a query has been sent to the mediator containing that Id.

If necessary you can set up your IMediator mock so that you define the data that is returned in response to a message. This can be useful if you want to validate your action’s behaviour for different responses. You can mock up the response object using code such as…

var user = new UserViewModel
{
    UserId = 100,
    Username = "sgordon",
    Forename = "Steve",
    Surname = "Gordon"
};

var mediator = new Mock<IMediator>();
mediator.Setup(x => x.SendAsync(It.IsAny<UserQuery>())).ReturnsAsync(user);

If your controller performs any logic based on the returned object you can now easily specify the different scenarios to test that. Something I often do is to write a test that verifies that when the Mediatr response is null the action sends a HttpNotFound result. In a simple example that can be done in the following way…

[Fact]
public async Task UserDetailsReturnsHttpNotFoundResultWhenUserIsNull()
{
    var mediator = new Mock<IMediator>();
    mediator.Setup(x => x.SendAsync(It.IsAny<UserQuery>())).ReturnsAsync((UserViewModel)null);

    var sut = new UsersController(mediator.Object);

    var result = await sut.UserDetails(It.IsAny<int>());

    Assert.IsType<HttpNotFoundResult>(result);
}

Summing Up

I’ve really taken to the pattern that Mediatr allows us to easily implement. It’s a personal choice of course, but my view is that it keeps my controllers clean and allows me to create handlers that have a single responsibility. It keeps things nicely separated as nothing is too tightly bound together. I can easily change the behaviour of a handler and, as long as it still returns the correct object type, my controllers never care.

As I’ve shown, the testing process is pretty nice, and if we ensure each handler is tested as well as the controllers, we have good coverage of the behaviours we expect from the classes. A big bonus is that it already supports ASP.NET Core and is pretty simple to set up with the built-in DI container.

Mediatr also supports a publisher/subscriber pattern which I’ve yet to need in my code. It’s worth taking a look at though if you need multiple handlers to respond when an event occurs, and it’s something I plan to look into at some point.

I highly recommend trying out the Mediatr library and reviewing the pattern being used on the allReady project. It takes little time to set up and quickly becomes a comfortable flow when writing code. It’s made me think about what my models are involved in and helped me keep them focused and more robust.

NOTE: This post was written based on RC1 of ASP.NET Core and may not be current by the time RC2 and RTM are released.
