R.I.P project.json – Out with the new, in with the old: An early look at the VS 2017 csproj changes

Okay, the title of this post is a bit tongue in cheek! Since it was first announced that the new project.json format (developed by the ASP.NET Core team) was going to be retired in favour of the more traditional csproj file, the .NET community have had some very strong opinions. These have been shared on Twitter, in blog posts, on GitHub and via any other channels people could find. I will admit that when I heard about the change, I initially had some of the same views and concerns. Why would Microsoft take away my beloved project.json? However, I decided to wait it out and see what Microsoft actually produced before passing judgement.

A little bit of history

Let me take a step back here and try to explain a little bit of the history as I have understood it. In the early beta timeframe for ASP.NET Core (then called ASP.NET 5) the ASP.NET and .NET teams were working independently of one another. The .NET team were working heavily on the API surface for .NET Core, whilst the ASP.NET team were working on the ASP.NET Core application platform on top of the API the .NET team were making. The tooling to work with ASP.NET Core was in an early preview and at the time we had DNVM and DNU commands. These have since been consolidated and morphed to form the dotnet CLI.

In order to support development of ASP.NET projects, the team decided to take the opportunity to develop a brand new project file format. One which would be simpler and address many of the issues the community had experienced when working with csproj based projects. At the time, nothing existed specifically for .NET Core projects and it was felt that since ASP.NET Core represented a new beginning for the platform, it was as good a time as any to make changes. During the betas and release candidates people started to get their hands on the new project.json and xproj based solutions when building their web applications. The response was indeed very positive. project.json provided autocomplete of dependencies, ease of reading and a much simpler overall structure.

Personally, I was very pleased when project.json was born. I could easily understand it and it became a regular reference point to review my dependencies and manage my projects.

However, it was during the release candidate period that decisions started to be made around a wider .NET project format for the other application types. Customers were starting to share their concerns around project.json and the fact that it was not supported within MSBuild. What would this mean for large monolith projects, and how would they even begin to migrate them to .NET Core? And, to be fair, these were valid concerns. Starting with greenfield ASP.NET Core projects was fantastic. There was no need to concern ourselves with the past and the new project.json was easy to get to grips with. But working with an existing project would not necessarily be a simple conversion without some degree of work. So the announcements started to be made that project.json would be retired in favour of a more backwards compatible, MSBuild-ready csproj format. It was too late in the day for this work to be completed before the RTM of ASP.NET Core 1.0, but we were warned that it would be worked on for release alongside Visual Studio 2017.

And that brings us to today: with the announcement of the release candidate for Visual Studio 2017 we can now get our hands on projects using the new (or should I say old) csproj project format. I’ve taken an early peek at the file to see what has changed.

Comparing project.json (VS 2015) with csproj (VS 2017 RC)

While working on the change, Microsoft have assured us that they expect to port many of the improvements the project.json file gave developers over to the improved csproj format.

Comparing project.json and the new csproj

In the screenshot above (hard to see due to the size) I’ve opened a default new Web Application project in VS 2015 (left) and VS 2017 (right). Side by side, the obvious difference is that project.json is JSON and the csproj is XML. You will notice that the lengths of the files are not too different: project.json comes in at 65 lines and the csproj at 77 lines. That is already a significant reduction in lines from a traditional csproj file.

Here are the complete files:

The other thing which stands out to me is the readability. Despite the work Microsoft have done to reduce noise in the csproj file, I still find it much harder to scan than the project.json. The XML tags draw my eye away from the detail that I actually want to consume.

At the time of writing I’m still very new to working inside VS 2017 and indeed on my computer it’s proving quite buggy. For example, a solution that I created inside VS 2017 and which had been working fine (1 web application and 1 test class library) no longer re-opens after I saved and closed it. Suffice to say, I won’t be using Visual Studio 2017 for production work at this time and may even remove it entirely as it seems to be affecting my experience inside VS 2015 too.

One nice advantage of the csproj file over the project.json is that we can now include references to full (non-core) .NET projects as well. Previously this integration was not available.
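For illustration, such a reference might look something like this inside the new csproj (the project name here is hypothetical):

<ItemGroup>
  <ProjectReference Include="..\MyLegacyLibrary\MyLegacyLibrary.csproj" />
</ItemGroup>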

Something I do miss however (unless it’s just not working for me) is proper autocomplete of the packages and versions. With the project.json I could start typing the name of a dependency and Visual Studio would show me potential autocomplete options. I could also choose from the available versions. This seems to be absent in the csproj file. Personally I only saw basic IntelliSense for some, but not all, of the XML tags.

Migrating from project.json

Migration from project.json can be achieved in one of two ways. The easiest is to open the existing xproj/project.json project inside Visual Studio 2017. The IDE will detect the project format and prompt for a migration. The other option is to run the dotnet migrate command directly on the command line. This should result in the same conversion.
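If you go down the command line route, it’s as simple as running the command from the directory containing the project.json (the path here is just an example):

cd C:\Projects\MyWebApp\src\MyWebApp
dotnet migrate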

Migration from project.json

In a quick test for me, the VS migration seemed to work quite well, although that was a single default web application project with basic dependencies. In theory it should work just as well for larger projects. Perhaps when we come to migrate the allReady project which I contribute to, we’ll see if there’s more to it.

Having done this with the project.json solution created in VS 2015 (as shown above) we get the following csproj output.

Inside the project.json we have our main dependencies section and inside that we define the package and version. This is pretty clear and easy to read. Inside the csproj each package reference is added inside an ItemGroup element. It’s more verbose than the project.json.
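As a rough illustration (the package name and version are examples, and the exact XML shape may still change before release), a dependency declared in project.json like this:

"dependencies": {
  "Microsoft.AspNetCore.Mvc": "1.0.1"
}

ends up in the new csproj as something like:

<ItemGroup>
  <PackageReference Include="Microsoft.AspNetCore.Mvc">
    <Version>1.0.1</Version>
  </PackageReference>
</ItemGroup>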

Much of the content is similar and when you compare the files you can generally find where each part has been migrated. However you’ll need to remember the more complex XML schema in order to edit the file manually. Indeed, Microsoft seem to be pushing us away from this in favour of IDE tooling and the command line. I fear though, that those are never going to be as quick to work with as we’ve been used to with a quick edit of the project.json file.

Differences between the original csproj and the new csproj

There are some other notable changes between what we see in a traditional csproj file when compared to the newer csproj file. Firstly, they have gotten rid of the use of GUIDs within the file, so that it is much more human readable. That’s a good change and one I’m happy to see.

We are no longer required to include all of the files within the csproj file in order for them to be considered part of the project. project.json gave us a much more favourable include-by-default behaviour and this has now been made available inside csproj. It’s ever so slightly different as we must use a wildcard-style include line to tell the project to behave this way. New projects created inside VS 2017 have this by default. When working with teams of developers, the csproj file was a common area of merge conflicts. We should no longer have so many issues since the project file will not change simply because we’ve added new files to the project.
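By way of example, the wildcard includes in a new project look something like the following (the exact globs may vary between templates):

<ItemGroup>
  <Compile Include="**\*.cs" />
  <EmbeddedResource Include="**\*.resx" />
</ItemGroup>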

Package references are now integrated into csproj so this single file also includes any libraries our project is dependent on and any projects we reference. It’s nice to have everything in one place as we’ve become used to with the project.json format.

Another important change is that we can now edit the csproj file whilst the project is loaded. This was not previously possible and meant that any manual changes were slow to make as we had to first unload the project. Granted, it was rare that I wanted to manually edit the file in the full .NET framework days. To edit the csproj file we can simply right click on the project without unloading it first.

Edit csproj inside VS 2017

One gripe I experienced at this point is a warning for inconsistent line endings every time I edit the csproj file.

Inconsistent line endings

Other project differences in VS 2017

As well as the direct differences inside the file, there are a few initial tooling differences I wanted to highlight between VS 2015 and VS 2017 that I’ve noticed.

The main one I’ve seen so far is the consolidated dependencies folder. Before, we had a “References” folder which included any dependent libraries we added via NuGet and any project references. We then had a “Dependencies” folder which included Bower dependencies. Inside Visual Studio 2017 these have been placed together inside a single “Dependencies” container. Project references have their own sub-container to separate them from the NuGet dependencies.

The consolidated Dependencies folder in VS 2017

Initial Impressions

I definitely don’t find the XML as readable as the JSON for reviewing the configuration of my project. This will improve with time as I get used to it, but there’s still too much noise in the file for my liking. It’s certainly a big improvement on the traditional csproj format though and the wildcard include of files is an important enhancement.

Personally, I still much prefer the project.json format and whilst I understand the business case that led Microsoft to their change of course, I feel it’s a shame the MSBuild system couldn’t have moved with the times, rather than dropping back to the XML-based project file.

The lack of autocomplete for dependencies bothers me too. This was really handy and I could quickly build up my dependencies without having to open up the Nuget package manager each time. It feels like we’re now being forced down that route which proves to be a bit slower and less efficient.

It’s still very early days and I need to spend more time with the new version of csproj in a final stable version of VS 2017 before passing a final judgement. We also have to remember that the .NET SDK tooling is not quite fully baked yet either. At this early point I’m running into quite a few issues running VS 2017 so it’s hard to fully appreciate the final experience.

The Microsoft statements suggest that in reality the tooling and command line interface should negate the need to spend much, if any, time editing the csproj manually. Once everything is complete we will be in a better place to judge for ourselves. I can’t help but assume things will end up being slower overall and that I’ll end up missing the ease of the project.json file.

Update 23rd Nov 2016

After reviewing https://blogs.msdn.microsoft.com/dotnet/2016/10/19/net-core-tooling-in-visual-studio-15/ I can see that the format for the new csproj that I have experienced in VS 2017 RC doesn’t match the example of what I presume is the end goal. So it’s entirely possible that the structure will evolve even more before release. The example in the MSDN blog does look cleaner and much less cluttered so I will watch this space with interest.

Update 7th March 2017

I reflect further on the changes and the way they have been handled in my post – Visual Studio 2017 is Released – Reflecting on the loss of project.json and looking ahead to the new VS 2017 experience

Loading Pins on Bing Maps from ASP.NET Core MVC Data

As I’ve covered a few times previously in my blog I’m really enjoying working on the allReady project which is run by the Humanitarian Toolbox non-profit organisation. One of the great things about this project from a personal perspective is the chance to learn and develop my skills, whilst also contributing to a good cause.

Recently I picked up an issue which was not in my normal comfort zone, where the requirement was to load a Bing map showing a number of pins relating to request data coming from our MVC view model. I’m not very experienced with JavaScript and tend to avoid it whenever possible, but in this case I did need to use it to take data from my ASP.NET Core model to then populate the Bing maps SDK. As part of the requirement we needed to colour code the pins based on the status of the request.

My starting point was to have a look at the SDK documentation available at http://www.bing.com/api/maps/mapcontrol/isdk. After a bit of reading it looked possible to meet the requirement using the v8 SDK.

The first step was to update our Razor view page to include a div where we wanted to display the map. In our case I had decided to include a full width map at the bottom of the page so my containing div was as follows:

<div id="myMap" style="position:relative;width:100%;height:500px;"></div>

The next step was to include some JavaScript on the page to use the data from our view model to build up the pushpin locations to display on the map. Some of the existing map logic we have on the allReady project is stored inside a site.js file. This means we don’t need to include too much inline code on the page itself. In my case the final code was as follows:

@section scripts {
    <script type='text/javascript'
            src='https://www.bing.com/api/maps/mapcontrol?callback=GetMap'
            async defer></script>
    <script type='text/javascript'>
        function GetMap() {
            renderRequestsMap("myMap", getLocationsFromModelRequests());
        };

        function getLocationsFromModelRequests() {
            var requestData = [];
            @foreach (var request in Model.Requests){
                @:var reqData = {lat:@request.Latitude, long:@request.Longitude, name:'@request.Name', color:'blue'};

                if (request.Status == RequestStatus.Completed)
                {
                    @:reqData.color = 'green';
                }

                if (request.Status == RequestStatus.Canceled)
                {
                    @:reqData.color = 'red';
                }

                @:requestData.push(reqData);
            }
            return requestData;
        }
    </script>
}

This renders two script blocks inside our master _layout page’s scripts section which is rendered at the bottom of the page body. The first script block simply brings in the Bing map code. The second block builds up the data to pass to our renderRequestsMap function in our site.js code (which we’ll look at later).

The main function here is my getLocationsFromModelRequests code which creates an empty array to hold our requestData. I loop over the requests in our MVC view model and first load the latitude and longitude information into a JavaScript reqData object. We set a name which will be displayed under the pin using the name of the request from our Model. We also set the color on this object to blue which will be our default colour for pending requests when we draw our map.

I then update the color property for our two other possible request statuses. Green for completed requests and red for cancelled requests. With the object now complete we push it into the requestData array. This forms the data object we need to pass into our renderRequestsMap function in our site.js.

The relevant portions of the site.js that result in producing a final map are as follows:

var BingMapKey = "ENTER_YOUR_KEY_HERE";

var renderRequestsMap = function(divIdForMap, requestData) {
    if (requestData) {
        var bingMap = createBingMap(divIdForMap);
        addRequestPins(bingMap, requestData);
    }
}

function createBingMap(divIdForMap) {
    return new Microsoft.Maps.Map(
        document.getElementById(divIdForMap), {
        credentials: BingMapKey
    });
}

function addRequestPins(bingMap, requestData) {
    var locations = [];
    $.each(requestData, function (index, data) {
        var location = new Microsoft.Maps.Location(data.lat, data.long);
        locations.push(location);
        var order = index + 1;
        var pin = new Microsoft.Maps.Pushpin(location, { title: data.name, color: data.color, text: order.toString() });
        bingMap.entities.push(pin);        
    });
    var rect = Microsoft.Maps.LocationRect.fromLocations(locations);
    bingMap.setView({ bounds: rect, padding: 80 });
}

The renderRequestsMap function takes in the id of the containing div for the map and then our requestData array we built up in our Razor view. First it calls a small helper function which creates a Bing map object targeting our supplied div id. We then pass the map object and the request data into addRequestPins.

addRequestPins creates an array to hold the location data which we build up by looping over each item in our request data. We create a Microsoft.Maps.Location object using the latitude and longitude and add that to the array (we’ll use this later). We then create a Microsoft.Maps.Pushpin which takes the location object and then a pushpin options object. In the options we define the title for the pin and the color. We also set the text for the pin to a numeric value which increments for each pin we’re adding. That way each pin has a number which corresponds to its position in our list of requests. With all of the pushpin data populated we can push the pin into the map’s entities array.

Once we’ve added all of the pins the final step is to define the view for the map so that it centers on and displays all of the pins we have added. I’ve achieved that here by defining a rectangle using the Microsoft.Maps.LocationRect.fromLocations helper function. We can then call setView on the map object, passing in that rectangle as the bounds value. We also include a padding value to ensure there is a little extra space around the outlying pins.

With these few sections of JavaScript code in place, when we load our page the map is displayed with pins corresponding to the locations of our requests. Here is what the final map looks like within our allReady application.

Resulting Bing Map

Debugging into ASP.NET Core Source: Quick Tip - How to debug into the ASP.NET source code

Update 26-10-16: Thanks to eagle-eyed reader Japawel who has commented below; it’s been pointed out that there is a release 1.0.1 tag that I’d missed when writing this post. Using that negates the two troubleshooting issues I had included at the end of this post. I’ve updated the tag name in this post but left the troubleshooting steps just in case.

As I spend more time with ASP.NET Core, reading the source code to learn about it, one thing I find myself doing quite often is debugging into the ASP.NET Core source. This makes it much easier to step through sections of the code to work out how they function internally. Now that Microsoft have gone open source with .NET Core and ASP.NET Core, the source is readily available on GitHub. In this short post I’m going to describe the steps that allow you to add ASP.NET Core source code to your projects.

When you work with an ASP.NET Core application project you will be adding references to ASP.NET components such as MVC in your project.json file. This will add those packages as dependencies to your project, which get pulled down via NuGet. One cool feature of the current solution format is that we can easily provide a path to the full source code and VS will automatically add the relevant projects into your solution. Once this takes place it’s the code in those projects which will get executed, so you can now debug them as you would any other area of code in your application.
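For example, a typical project.json will declare its ASP.NET dependencies along these lines (the versions here are illustrative):

"dependencies": {
  "Microsoft.AspNetCore.Mvc": "1.0.1",
  "Microsoft.AspNetCore.Diagnostics": "1.0.0"
}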

In this post we’ll cover bringing in the main MVC Core source into an ASP.NET Core application.

The first step is to go and get the source code from GitHub. Navigate to https://github.com/aspnet/Mvc and use your preferred method to pull down the source. I have GitHub Desktop installed so I click the “Open in Desktop” link and choose somewhere on my computer to clone the source into.

Clone MVC Core via GitHub Desktop

By default the master branch will be checked out, which will contain the most recent code added by the ASP.NET team. To be able to import the actual projects into our application we need to ensure the version of the code matches the version we are targeting in our project.json. At the time of writing, the most recent release version of Microsoft.AspNetCore.Mvc is 1.0.1. The second step therefore is to checkout the appropriate matching version. Fortunately this is quite simple as Microsoft tag the release versions in Git. Once the repository is cloned locally, open a terminal window at the location of the source. If we run the “git tag” command we will get a list of all available tags.

E:\Software Development\Projects\AspNet\Mvc [master ≡]> git tag
1.0.0
1.0.0-rc2
1.0.1
6.0.0-alpha2
6.0.0-alpha3
6.0.0-alpha4
6.0.0-beta1
6.0.0-beta2
6.0.0-beta3
6.0.0-beta4
6.0.0-beta5
6.0.0-beta6
6.0.0-beta7
6.0.0-beta8
6.0.0-rc1
rel/1.0.1
E:\Software Development\Projects\AspNet\Mvc [master ≡]>

We can see the tag we need listed, rel/1.0.1, so the next step is to check out that code using “git checkout rel/1.0.1”. You will get a warning about moving into a detached HEAD state from Git. This is because we are no longer directly on the end of a branch. This isn’t a problem since we are interested in viewing code from this point in the Git commit history specifically.

Now that we have the source cloned locally and have the correct version checked out we can add the reference to the source into our project. Open up your ASP.NET Core application in Visual Studio. You should see a solution directory called Solution Items with a global.json file in it.

Standard MVC Core solution

The global.json defines some solution level tooling configuration. In a default ASP.NET Core application it should look like this:

{
  "projects": [ "src", "test" ],
  "sdk": {
    "version": "1.0.0-preview2-003131"
  }
}

What we will now do is add in the path to the MVC source we cloned down. Update the file by adding in the path to the array of projects. You will need to provide the path to the “src” folder and use double backslashes in your path.

{
  "projects": [ "src", "test", "E:\\Projects\\AspNet\\Mvc\\src" ],
  "sdk": {
    "version": "1.0.0-preview2-003131"
  }
}

Now save the global.json file and Visual Studio will pick up the change and begin a package restore. After a few moments you should see additional projects populate within the solution explorer.

Solution with MVC Core projects

You can now navigate the code in these files and, if you want, add breakpoints to debug into the code when you run your application in debug mode.

Troubleshooting Tips

As mentioned at the start of this post, these troubleshooting steps should no longer be necessary if you are using the rel/1.0.1 tag. In an earlier incorrect version of this post I had referenced pulling down the 1.0.1 tag which required the steps below to get things working.

While the above steps worked for me on one machine I had a couple of issues when following these steps on a fresh device.

Issue 1 – Package restore errors for the MVC solution

Despite MVC 1.0.1 being a released version I was surprised to find that I had trouble with restoring and building the MVC projects due to missing dependencies. When comparing my two computers I found that on one I had an additional source configured which seemed to be solving the problem for me on that device.

The error I was seeing occurred on various packages but the most common was a dependency on Microsoft.Extensions.PropertyHelper.Sources 1.0.0. The exact error in my case was NU1001 The dependency Microsoft.Extensions.PropertyHelper.Sources >= 1.0.0-* could not be resolved.

To set up your machine with the source you can open the MVC solution and navigate to the NuGet Package Manager > Package Sources options. This can be found under the Tools > NuGet Package Manager > Package Manager Settings menu.

Add a new package source to the MyGet feed which contains the required libraries. In this case https://dotnet.myget.org/F/aspnetcore-master/api/v3/index.json worked for me.

Package Manager with MyGet feed
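Alternatively, the same source can be declared in a NuGet.config file next to the solution. Here’s a minimal sketch (the key name is arbitrary):

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <add key="aspnetcore-master" value="https://dotnet.myget.org/F/aspnetcore-master/api/v3/index.json" />
  </packageSources>
</configuration>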

Issue 2 – Metadata file could not be found

I hadn’t had this issue in the past when bringing the ASP.NET Core Identity source into a project, but I was unable to build my application once I had added in the source for MVC. I was getting the following error for each project included in my solution.

C:\Projects\WebApplication6\src\WebApplication6\error CS0006: Metadata file ‘C:\Projects\AspNetCore\Mvc\src\Microsoft.AspNetCore.Mvc\bin\Debug\netstandard1.6\Microsoft.AspNetCore.Mvc.dll’ could not be found

The reason for the issue is that the MVC projects are configured to build to the artifacts folder which sits next to the solution file. Our project is looking for them under the bin folder for each of the projects. This is a bit of a pain and the simplest solution I could find was to manually modify the output path for each of the projects being included into my solution.

Open up each .xproj file and modify the following line from

<OutputPath Condition="'$(OutputPath)'=='' ">..\..\artifacts\bin\</OutputPath>

to

<OutputPath Condition="'$(OutputPath)'=='' ">.\bin\</OutputPath>

This will ensure that when built, the DLLs will be placed in the path where our solution expects to find them.

Depending on what you’re importing this can mean changing quite a few files, 14 in the case of the MVC projects. If anyone knows a cleaner way to solve this problem, please let me know!

ASP.NET MVC Core: HTML Encoding a JSON Request Body – How to HTML encode deserialized JSON content from a request body

On a recent REST API project built using ASP.NET Core 1.0 we wanted to add some extra security around the inputs we were accepting, specifically around JSON data being sent in the body of POST requests. The requirement was to ensure that we HTML encode any of the deserialized properties to prevent an API client sending in HTML and script tags which would then be stored in the database. Whilst we HTML escape all JSON we output as standard, we wanted to close off this extra attack vector since we never expect to accept HTML data.

Initially I assumed this would be something simple to configure, but the actual solution proved a little more fiddly than I first expected. A lot of Googling did not yield many examples of similar requirements. The final code may not be the best way to achieve the goal, but it seems to work and is the best we could come up with. Feel free to send any improved ideas through!

Defining the Requirement

As I said above, we wanted to ensure that any strings that JSON.NET deserializes from our request body into our model classes do not get bound with any un-encoded HTML or script tags in the values. The requirement was to prevent this by default and enable it globally so that other developers do not have to explicitly remember to set attributes, apply any code on the model or write any validators to encode the data. Any manual steps or rules like that can easily get forgotten, so we wanted a locked-down-by-default approach. The expected outcome was that the values of the properties on any deserialized models are sanitised and HTML encoded as soon as we get access to the objects in our controllers.

For example the following JSON body should result in the title property being encoded once bound onto our model.

{
	"Title" : "<script>Something Nasty</script>",
	"Description" : "A long description.",
	"Reference": "REF12345"
}

The resulting title once deserialized and bound should be “&lt;script&gt;Something Nasty&lt;/script&gt;”.
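For reference, that is the same transformation you get from calling the HtmlEncoder directly, which is what the value provider we build later relies on. A quick standalone sketch:

using System;
using System.Text.Encodings.Web;

public class EncodingDemo
{
    public static void Main()
    {
        var input = "<script>Something Nasty</script>";

        // HtmlEncoder.Default is the same encoder used by the value provider below
        Console.WriteLine(HtmlEncoder.Default.Encode(input));
        // Output: &lt;script&gt;Something Nasty&lt;/script&gt;
    }
}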

Here’s an example of a Controller and input model that might be bound up to such data which I’ll use during this blog post.

public class Thing
{
	public string Title { get; set; }
	public string Description { get; set; }
	public string Reference { get; set; }
}

[HttpPost("CreateThing")]
public IActionResult CreateThing([FromBody] Thing theThing)
{
	// Save the thing to the database here, then return a suitable result
	return Ok();
}

In the case of the default binding and JSON deserialization, if we debug and break within the CreateThing method the Title property will have a value of “<script>Something Nasty</script>”. If we save this directly into our database we risk someone later consuming this and potentially rendering it out directly into a browser. While the risk is a pretty edge-case one, we wanted to cover it off.

Defining a Custom JSON ContractResolver

The first stage in the solution was to define a custom JSON.net contract resolver. This would allow us to override the CreateProperties method and apply HTML encoding to any string properties.

A simplified example of the final contract resolver looks like this:

using Newtonsoft.Json;
using Newtonsoft.Json.Serialization;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;
using System.Text.Encodings.Web;

namespace JsonEncodingExample
{    
    public class HtmlEncodeContractResolver : DefaultContractResolver
    {
        protected override IList<JsonProperty> CreateProperties(Type type, MemberSerialization memberSerialization)
        {
            var properties = base.CreateProperties(type, memberSerialization);

            foreach (var property in properties.Where(p => p.PropertyType == typeof(string)))
            {
                var propertyInfo = type.GetProperty(property.UnderlyingName);
                if (propertyInfo != null)
                {
                    property.ValueProvider = new HtmlEncodingValueProvider(propertyInfo);
                }
            }

            return properties;
        }

        protected class HtmlEncodingValueProvider : IValueProvider
        {
            private readonly PropertyInfo _targetProperty;

            public HtmlEncodingValueProvider(PropertyInfo targetProperty)
            {
                this._targetProperty = targetProperty;
            }
            
            public void SetValue(object target, object value)
            {
                var s = value as string;
                if (s != null)
                {
                    var encodedString = HtmlEncoder.Default.Encode(s);
                    _targetProperty.SetValue(target, encodedString);
                }
                else
                {
                    // Shouldn't get here as we checked for string properties before setting this value provider
                    _targetProperty.SetValue(target, value);
                }
            }

            public object GetValue(object target)
            {
                return _targetProperty.GetValue(target);
            }
        }
    }
}

Stepping through what this does – on the override of CreateProperties, first we call CreateProperties on the base DefaultContractResolver which returns a list of the JsonProperties. Without going too deep into the internals of JSON.net, this essentially checks all of the serializable members on the object being deserialized/serialized and adds JsonProperties for them to a list.

We then iterate over the properties ourselves, looking for any where the type is a string. We use reflection to get the PropertyInfo for the property and then set a custom IValueProvider for it.

Our HtmlEncodingValueProvider implements IValueProvider and allows us to manipulate the value before it is set or retrieved. In this example we use the default HtmlEncoder to encode the value when it is set.

Wiring Up The Contract Resolver

With the above in place we need to set things up so that when our JSON request body is deserialized it is done using our HtmlEncodeContractResolver instead of the default one. This is where things get a little complicated and while the approach below works, I appreciate there may be a better/easier way to do this.

In our case we specifically wanted to use the HtmlEncodeContractResolver only for deserialization of any JSON from a request body, and not for any JSON deserialization that occurs elsewhere in the application. The way I found that we could do this was to replace the JsonInputFormatter, which MVC uses to handle incoming JSON, with a version using our new HtmlEncodeContractResolver. We can do this via the MvcOptions that is accessible when adding the MVC service in our ConfigureServices method.

Rather than heap all of the code into the startup class, we chose to create a small extension method for the MvcOptions class which would wrap the logic we needed. This is how our extension class ended up.

public static class MvcOptionsExtensions
{
    public static void UseHtmlEncodeJsonInputFormatter(this MvcOptions opts, ILogger<MvcOptions> logger, ObjectPoolProvider objectPoolProvider)
    {
        opts.InputFormatters.RemoveType<JsonInputFormatter>();

        var serializerSettings = new JsonSerializerSettings
        {
            ContractResolver = new HtmlEncodeContractResolver()
        };

        var jsonInputFormatter = new JsonInputFormatter(logger, serializerSettings, ArrayPool<char>.Shared, objectPoolProvider);

        opts.InputFormatters.Add(jsonInputFormatter);
    }
}

First we remove the input formatter of type JsonInputFormatter from the available InputFormatters. We then create new JSON.net serializer settings, with the ContractResolver set to use our HtmlEncodeContractResolver. We can then create a new JsonInputFormatter which requires two parameters in its constructor, an ILogger and an ObjectPoolProvider. Both will be passed in when we use this extension method.

Finally with the new JsonInputFormatter created we can add it to the InputFormatters on the MvcOptions. MVC will then use this when it gets some JSON data on the body and we’ll see that our properties are nicely encoded.

To wire this up in the application we use our extension method from the ConfigureServices method as follows…

public void ConfigureServices(IServiceCollection services)
{
    var sp = services.BuildServiceProvider();
    var logger = sp.GetService<ILoggerFactory>();
    var objectPoolProvider = sp.GetService<ObjectPoolProvider>();

    services
        .AddMvc(options =>
            {
                options.UseHtmlEncodeJsonInputFormatter(logger.CreateLogger<MvcOptions>(), objectPoolProvider);
            });
}

As we now have an extension method it’s quite easy to call this from the AddMvc options. It does however require those two dependencies. As we are in the ConfigureServices method, our DI container is not fully wired up yet. The best approach I could find was to create an intermediate ServiceProvider which allows us to resolve the types currently registered. With access to the ServiceProvider I can ask it for a suitable ILoggerFactory and ObjectPoolProvider.

Update: Andrew Lock has written a follow-up post which describes a cleaner way to configure these dependencies. I recommend you check it out.

I use the LoggerFactory to create a new ILogger and pass the ObjectPoolProvider in directly.

If we run our application and send in our demo JSON object I defined earlier the title on the resulting Thing class will be “&lt;script&gt;Something Nasty&lt;/script&gt;”.

Summary

Getting to the above code took a little trial and error since I couldn’t find any official documentation suggesting approaches for our requirement in the ASP.NET Core docs. I suspect in reality our requirement is fairly rare, but for a small amount of work it does seem reasonable to sanitise JSON input to encode HTML when you never expect to receive it. By escaping everything on the way out, the chances of our API returning un-encoded, un-escaped data were very low indeed. But we don’t know what another consumer of the data may do in the future. By storing the data HTML encoded in the database we have hopefully avoided that risk.

If there is a better way to modify the JsonInputFormatter and wire it up, I’d love to know. We took the best approach we could find at the time but accept there could be intended methods or flows we could have used instead. Hopefully this was useful and even if your requirement differs, perhaps you will have requirements where overriding an input or output formatter might be a solution.

GitHub Contributor Tips and Tricks (Gitiquette): My thoughts on being a better GitHub citizen

I made my first GitHub pull request back in November 2015. After a code review I needed to make some changes to the code, and the learning curve of Git/GitHub bit me. I had to close my PR and open a fresh one in order to get my code accepted, since I couldn’t figure out rebasing. Now, a year on, I have had over 100 pull requests accepted on GitHub and have even started helping with code reviews on the allReady project. The journey over the last year started with a steep curve as I got to grips with GitHub and, to some extent, Git itself.

Back in February in my post about contributing to allReady I briefly covered some rebasing steps useful when contributing on GitHub. In this blog post I wanted to share some further tips and tricks, plus offer my personal views and conventions I follow while attempting to be a good GitHub citizen. I can’t claim to be an expert in GitHub etiquette and I’m still learning as I go but hopefully my experiences can help others new to GitHub to work productively and reduce the pitch of the learning curve a little.

Contributing on GitHub

There is a great post from another allReady contributor and ASP.NET monster Dave Paquette at http://www.davepaquette.com/archive/2016/01/24/Submitting-Your-First-Pull-request.aspx. Dave has covered the main steps and commands needed to work with GitHub and Git in general when contributing to opensource projects. I suggest you read his post for a detailed primer of the commands and GitHub UI.

Git / GitHub Etiquette (Gitiquette)

Rather than repeat advice and steps that Dave has given, in this post I intend to try and share small tips and tricks that I hope allow you to get the best out of opensource and GitHub. These are by no means formal rules and will differ from project to project, so be aware of that when applying them. As an aside; I was quite proud of inventing the term gitiquette before I Googled it and found other people have already used the term!

Raising GitHub Issues

If you find a bug or have an enhancement suggestion for a project, the place to start is to raise an issue. Before doing so it’s good practice to search for any issue (open or closed) that might address a similar point. Managing issues is a large part of running a project on GitHub and it’s good etiquette not to overload the project owners by repeating something that has already been asked and answered. This frees up the owners to respond on new issues.


If you can’t find a similar issue then go ahead and raise one. Try to give it a short descriptive title, summarising the issue or area of the application you are referring to. Try to choose something easy and likely to be searched so that anyone who is checking for related issues can easily find it and avoid repeating a question/bug.


If your issue is more of a question than a bug it can be useful to mark it as such by putting “(question)” or “(discussion)” in the title. Each project may have their own convention around this, so check through some prior issues first and see if there is a preferred pattern in use.


In the content of the issue, try to be as detailed (but also specific) as possible to allow others to understand the bug. Include steps to reproduce the problem if it is a bug. Also try to include screenshots where applicable to share what you’re seeing. If your issue is more of a general question, give enough detail to allow others to understand the reason behind your question. For suggestions, try to back them up with examples of why you believe what you’re suggesting is a good idea to help others make informed arguments for or against.


Keep your language polite and constructive. Remember that a lot of opensource projects are driven by very small teams or individuals around their paid jobs and families. In my opinion, issues are not the right place to complain about things but a place to register problems with a view to getting them solved. “Speak” how you would like to be spoken to.


Once you’ve raised an issue remember that the project leaders may take a while to respond to it, especially if they are a small or one person team. Give them a reasonable period of time to respond and after that time has passed, consider adding a comment as a gentle reminder. Sometimes new issue alerts can be missed so it’s usually reasonable to prompt if the issue goes quiet or has no response.


Each project may have their own process to respond to and to triage issues so try to familiarise yourself with how other issues have been handled to get an idea of what to expect. In some cases issues may be tagged initially and then someone may respond with comments later on.


Once someone responds to your issue, try to answer any follow up questions they have as soon as possible.


If the project owner disagrees with your suggestion or that a bug is actually “by design” then respect their position. If you feel strongly, then politely present your point of view and discuss it openly but once a final decision is made you should respect it. On all of the issues I’ve seen I can’t think of any occasion where things have turned nasty. It’s a great community out there and respect plays a big part in why things function smoothly.


When you start to work on an issue it’s a good idea and helpful to leave a comment on the issue so that others can quickly see that it’s being worked on. This avoids the risk of duplicated effort and wasted time. In some cases the project owners may assign you to the issue as a way of formally tracking assignments.


If you want to work on an issue but are not sure about your potential solution, consider first summarising your thoughts within the issue’s comments and get input from the project owners and other contributors before starting. This can avoid you spending time going down a route not originally intended by the project owners.


If you feel an issue is quite large and might be better broken out into smaller units of work then this can be a reasonable approach. Check how the project prefers to handle this, but often it’s reasonable to create sub-issue(s) for the work, referencing back to the original master issue so that GitHub provides links between them. Smaller PRs are often easier to review and merge, so breaking work down like this is often reasonable and quite often preferred.


If someone has said that they are working on an issue but they haven’t updated it or submitted a PR in some time, first check their GitHub fork and see if you can find the work in progress. If it’s being updated and progressing then they are probably still working on the issue. If not, leave a comment and politely check if they are still working on it. It’s not unusual for people to start something with the best intentions and then life comes along and they don’t have time to continue.


If you stop working on an issue part of the way through and may not be able to pick it up for a while, update the issue so others know the status. If you think you will be able to get back to it, then try to give a time frame so that the project owners can determine if someone else might need to pick it up from you. If this is the case you might want to share a link to your fork and branch so others can see what you’ve done so far. Also remember that the project may accept a partial PR as a starting point to resolving the issue. If your code could be committed without breaking the main code base, offer that option.

Git Commits

Next I want to share a few thoughts on best practices to follow when making local commits while working on an issue:

Remember to make a branch from master before starting work on a new issue. Give it a useful name so you can find it again. I personally have ended up using the issue number for most of my branches. It’s a personal choice though, so use what works for you (there’s a sketch of this flow after this list).


It’s not unreasonable to make many work in progress commits locally whilst working on an issue as you progress through it. It can though, be helpful to squash them down before submitting your final PR (see below for a couple of techniques I use to squash my commits).


Your final commit messages that will be included in your PR should be clear and appropriately worded. Try to summarise the commit in the first line and then provide additional detail after a line break and empty line.
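To make the above concrete, here’s a minimal sketch of that flow (the branch name, remote name and messages are only examples):

git checkout master
git pull upstream master
git checkout -b 123-fix-map-pins

Then, when you’re ready to commit, a summary line plus detail might look like:

git commit -m "Fix map pin colours" -m "Pins are now colour coded by request status and the map centres on the plotted locations."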

GitHub Pull Requests

After you’ve finished working on an issue you’re ready for a pull request. Again I’m sharing my personal thoughts on PRs here and each person and project will differ in style and preference. The main thing is to respect the convention preferred by the project so that they can easily review and accept your work.

It’s a good practice to limit a PR to a single issue. Don’t try to solve too much in one request as it makes reviewing and testing it harder for the project owners. If you spot other bugs along the way then open new issues for them as you go. You can then include the fixes in separate PR(s).


Before submitting your PR I feel it’s good practice to squash or tidy your working commits into logical final commits. This makes reviewing your steps and the content of the PR a bit easier for reviewers. Lots of work in progress or very granular commits can make a PR harder to interpret during code review. Including too many commits will also make the commit history for the project harder to follow once the PR is merged. I aim for 2-3 commits per PR as a general rule (sometimes even a single commit makes sense). Certainly there are exceptions to this rule where it may not make sense to group the commits. See below for advice on how to squash a commit.


I try to separate my work on the actual application code and the unit tests into two separate commits. Personally I feel that it’s then easier for a reviewer to first review the commit with your application code, then to review any tests that you wrote. Instead of viewing the whole set of files changed and wading through it, a reviewer can view each commit independently.


Ensure the commits you do include in your PR are clearly described. This makes understanding what each commit is doing easier during review and once they are merged into the project.


If your PR is a work in progress, mention this on the comment and in the title so that the team know it’s not ready for final merge. Submitting a partial PR can be good if you want to check what you’ve started is in the right direction and to get technical guidance. For example you might use “Fixing Bug in Controller – WIP” as your title. Each project may have a style they would like you to apply so check past PRs and see if any pattern already exists. Once you’re ready for it to be formally reviewed and merged you can update the title and leave a new comment for the project owners.


When submitting a PR, try to describe clearly what you’ve added or changed so that before even reviewing the code the reviewer has an idea of what you’re addressing and how. The PR description should contain enough detail for a reviewer to understand what areas to test and if your work includes UI elements, consider including screenshot(s) to illustrate the changes.


Include a line in your comment stating Fixes #101 (where 101 is the issue number). This will link the PR to the issue to ensure the issue is closed when the PR is accepted. If your PR is a partial fix and should not close the issue, use something like Relates to #101 instead so that the master issue is not automatically closed.


Depending on the PR and project it may be good practice or even required to include unit tests to prove your changes are working.


Before submitting your PR it’s a good idea to ensure your code is rebased from a current version of the master branch. This ensures that your code will merge easily and allows you to test it against the latest master codebase. In my earlier post I included steps explaining how to rebase, and there’s a short sketch of the commands at the end of this list. This is only necessary if the master branch has had commits added since you branched from it.


Once you receive feedback on your PR don’t take any suggestions or critique personally. It can be hard receiving negative feedback, especially in an open forum such as GitHub but remember that the reviewer is trying to protect the quality and standards of their project. By all means share your point of view if you have reason to disagree with comments, but do so with the above in mind. In my experience feedback has always been valuable and valid.


Once you have received feedback, try to apply any changes in a timely fashion while the review is fresh. You can simply make relevant changes in your local branch, make a new commit and push it to your origin. The PR will update to include the pushed changes. It’s better not to rebase and merge the commit with the original commits so that a reviewer can quickly see the changes. They may ask you to rebase/squash it down after review and before final acceptance to keep the project history clean.
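As promised above, here’s a short sketch of the rebase commands (this assumes an upstream remote pointing at the main repository and a topic branch named 123-fix-map-pins):

git checkout 123-fix-map-pins
git fetch upstream
git rebase upstream/master

If there are conflicts, fix them, git add the affected files and run git rebase --continue.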

Reviewing GitHub Pull Requests

I’ve started helping to review some of the allReady pull requests in recent months. Here are a few things I try to keep in mind:

Try to include specific feedback using GitHub’s line comment feature. You can click the small plus sign near the line of code you are commenting on so the review is easy to follow. The recent review changes that GitHub have added make reviewing code even clearer.


Leave a general summary explaining any main points you have or questions about the code.


Think of the contributor’s feelings and remember to keep the comments constructive. Give explanations for why you are proposing any changes and be ready to accept that your opinion may not always be the most valid.


Try to review pull requests as soon as possible while the code is still fresh in the contributor’s mind. It’s harder to come back to an old PR and remember what you did and why. It’s also nice to give feedback promptly when someone has taken the time to contribute.


Remember to thank people for their contributions. People are generously giving their time to contribute code and this shouldn’t be forgotten.

Git Command Reference

That’s the end of my long list of advice. I hope some of it was valuable and useful. Before I close I wanted to share the steps for a couple of the specific Git-related tasks that I mention in some of the points above.

How to Squash Commits

I wrote earlier about squashing your commits – reducing a large number of small commits into a single or set of combined commits. There are two main techniques I use when doing this which I’ll now cover. I normally do some of these operations in a visual tool such as SourceTree but for the sake of this post I’ll cover the actual Git commands.

Squashing to a single commit

In cases where you have two or more commits in your branch that you want to squash into a single final commit I find the quickest and safest approach is to do a soft reset back to the commit you branched from on master.

Here’s an example set of three commits on a new branch where I’m fixing an issue…

C:\Projects\HTBox-AllReady>git log --pretty=format:"%h - %an, %ar : %s" -3

be7fdfe - stevejgordon, 2 minutes ago : Final cleanup
2e36425 - stevejgordon, 2 minutes ago : Ooops, fixing typo!
85148d7 - stevejgordon, 2 minutes ago : Adding a new feature

All three commits combined result in a fix, but two of these are really just fixing up and tidying my work. As I’m working locally I want to squash these into a single commit before I submit a PR.

First I perform a soft reset using…

C:\Projects\HTBox-AllReady>git reset --soft HEAD~3

This removes the last three commits, the number after the ~ symbol controlling how many commits to reset. The --soft switch tells Git to leave all of the changed files intact (we don’t want to lose the work we did!). It also leaves the files marked as changes to be committed. I can then perform a new commit which incorporates all of the changes.

C:\Projects\Other\HTBox-AllReady>git commit -a -m "Adding a new feature"
[#123 2159822] Adding a new feature
1 file changed, 1 insertion(+), 1 deletion(-)

C:\Projects\Other\HTBox-AllReady>git status
On branch #123
nothing to commit, working directory clean

C:\Projects\Other\HTBox-AllReady>git log --pretty=format:"%h - %an, %ar : %s" -1

2159822 - stevejgordon, 20 seconds ago : Adding a new feature

We now have a single commit which includes all of our work. Viewing it visually in source tree we can see this single commit more clearly.

Example of branch after a Git squash

We can now push this and create a clean PR.

Interactive Rebasing

In cases where you have commits you want to combine but you want also to control which commits are squashed together we have to do something a bit more advanced. In this case we can use interactive rebasing as a very powerful tool to rewrite our commit history. Here’s an example of 4 commits in a new branch making up a fix for an issue. In this case I’d like to end up with two final commits, one for the main code and one for the unit tests.

C:\Projects\Other\HTBox-AllReady>git log --pretty=format:"%h - %an, %ar : %s" -4

fb33aaf - stevejgordon, 6 seconds ago : Adding a missed unit test
8704b06 - stevejgordon, 14 seconds ago : Adding unit tests
01f0876 - stevejgordon, 42 seconds ago : Oops, fixing a typo
2159822 - stevejgordon, 7 minutes ago : Adding a new feature

Use the following command to start an interactive rebase

git rebase -i HEAD~4

In this case I’m using ~4 to say that I want to include the last 4 commits in my rebasing. The result of this command opens the following in an editor (whatever you have git configured to use)

pick 2159822 Adding a new feature
pick 01f0876 Oops, fixing a typo
pick 8704b06 Adding unit tests
pick fb33aaf Adding a missed unit test

# Rebase 6af351a..fb33aaf onto 6af351a (4 command(s))
#
# Commands:
# p, pick = use commit
# r, reword = use commit, but edit the commit message
# e, edit = use commit, but stop for amending
# s, squash = use commit, but meld into previous commit
# f, fixup = like "squash", but discard this commit's log message
# x, exec = run command (the rest of the line) using shell
# d, drop = remove commit
#
# These lines can be re-ordered; they are executed from top to bottom.
#
# If you remove a line here THAT COMMIT WILL BE LOST.
#
# However, if you remove everything, the rebase will be aborted.
#
# Note that empty commits are commented out

In our case we want to squash the 2nd commit into the first and the 4th into the 3rd, so we update the commit lines at the top of the file to…

pick 2159822 Adding a new feature
squash 01f0876 Oops, fixing a typo
pick 8704b06 Adding unit tests
squash fb33aaf Adding a missed unit test

Saving and closing the file results in the next step

# This is a combination of 2 commits.
# The first commit's message is:

Adding a new feature

# This is the 2nd commit message:

Oops, fixing a typo

# Please enter the commit message for your changes. Lines starting
# with '#' will be ignored, and an empty message aborts the commit.
#
# Date: Fri Sep 30 08:10:19 2016 +0100
#
# interactive rebase in progress; onto 6af351a
# Last commands done (2 commands done):
# pick 2159822 Adding a new feature
# squash 01f0876 Oops, fixing a typo
# Next commands to do (2 remaining commands):
# pick 8704b06 Adding unit tests
# squash fb33aaf Adding a missed unit test
# You are currently editing a commit while rebasing branch '#123' on '6af351a'.
#
# Changes to be committed:
# modified: AllReadyApp/Web-App/AllReady/Controllers/AccountController.cs
#

This gives us the chance to reword the final commit message of the combined 1st and 2nd commits. I will use a # to comment out the typo message since I just want this commit to read as “Adding a new feature”. If you wanted to include both messages then you don’t need to make any changes; they will both be included on separate lines in your commit message.

# This is a combination of 2 commits.
# The first commit's message is:

Adding a new feature

# This is the 2nd commit message:

#Oops, fixing a typo

I then can do the same for the other squashed commit

# This is a combination of 2 commits.
# The first commit's message is:

Adding unit tests

# This is the 2nd commit message:

#Adding a missed unit test

Saving and closing this final file I see the following output in my command prompt.

C:\Projects\Other\HTBox-AllReady>git rebase -i HEAD~4
[detached HEAD a2e69d6] Adding a new feature
Date: Fri Sep 30 08:10:19 2016 +0100
1 file changed, 2 insertions(+), 2 deletions(-)
[detached HEAD a9a849c] Adding unit tests
Date: Fri Sep 30 08:16:45 2016 +0100
1 file changed, 2 insertions(+), 2 deletions(-)
Successfully rebased and updated refs/heads/#123.

When we view the visual git history in SourceTree we can now see we have only two final commits. This looks much better and is ready to push up.

Git branch after interactive rebase

When interactive rebasing you can even reorder the commits, so you have full control to craft a final set of commits that represents the public commit history you want to include.

How to pull a PR locally for review

When starting to help review PRs I needed to learn how to pull down code from a pull request in order to run it locally and test the changes. In the end it wasn’t too complicated. The basic structure of the command we can use looks like…

git fetch origin pull/ID/head:BRANCHNAME

Where origin is the name of the remote where the PR has been made, ID is the pull request id and BRANCHNAME is the name of the new branch that you want to create locally. On allReady, for example, I’d use something like…

git fetch htbox pull/123/head:PR123

This pulls PR #123 from the htbox remote (this is how I’ve named the remote on my system which points to the main htbox/allready repo) into a local branch called PR123. I can then checkout the branch using git checkout PR123 to test and review the code.

In Summary

Hopefully some of these tips are useful. In a lot of cases I’m sure they are pretty much common sense. A lot of it is also personal preference and may differ from project to project. Hopefully if you’re just starting out as a GitHub contributor you can put some of them to use. GitHub for the most part is a great community and is a great place to learn and develop your skills.

I’d love to be able to acknowledge the numerous fantastic blog posts and Stack Overflow threads that helped me along my way; unfortunately I didn’t keep notes of the sources as I learned things on my one-year journey. A big thanks nonetheless to anyone who has contributed guidance on Git and GitHub. Every little bit of shared knowledge is a big help to developers who are learning the ropes. It truly is a journey; as you contribute more and more on GitHub it gets easier! Happy GitHub-ing!