Running a Humanitarian Toolbox Code-A-Thon

I’ve been involved with the Humanitarian Toolbox allReady project for about one year now. I’ve really enjoyed being able to contribute to the cause, while learning plenty along the way. Contributing has made me a better programmer and exposed me to new libraries and techniques which I am already benefiting from in my day job and in other projects.

During my time as a contributor I’ve been aware of the code-a-thons which have taken place in the US, Canada and more recently Europe. They’ve always seemed like great events and I’m excited to be signed up for the 2-day code-a-thon in London next month. A few months ago I discussed my experience with allReady and my willingness to attend the London event with the management at my employer, Madgex Ltd. They were very receptive and keen to support the cause. In fact, they are now funding 4 developers (including myself) to attend the full two days in London. Madgex are covering the transportation costs and hotel accommodation, as well as losing 4 developers for 2 days, so this is a generous contribution.

In addition to the NDC event, we had discussions about arranging a code-a-thon at our office in Brighton, one evening after work. I was really keen to get something planned that would allow other developers to contribute to allReady and experience the project. With the concept green-lit by senior management, we set about making the idea a reality.

Planning

The starting point was to arrange a 30 minute presentation about Humanitarian Toolbox and allReady to gauge interest and demonstrate the application. I prepared a set of slides and a short demo which I presented in our board room one lunchtime. We had a good turnout of about 16 people in the room and it planted the seed with a few of the attendees, who talked to me afterwards about getting involved.

With the awareness raised, I followed up with some emails to the Madgex staff to further determine if people would be willing to join an event locally. I got some positive indications from enough people, so we picked a date and the invites went out to the staff. We planned our initial event for one evening after work. We agreed that 3 hours would be most practical given that people had already done a full day of work. Over the next few weeks I continued to promote the idea and started to confirm attendees for the evening. We had 7 or 8 people showing good interest as we closed in on the planned date. This was all supported by our developer lead at Madgex, Steve (great name!).

1 week to go

With a week to go we started to ramp up our planning activities. Steve (developer lead) helped with the physical space we needed and secured budget for some pizzas on the night (always a great lure for developers!).

I started combing through issues on GitHub that would be suitable for new contributors. We have a lot of work ongoing with the project, pushing towards our v1 release, but many of the issues that are left are quite complex and require a reasonable knowledge of the codebase. The balance was finding interesting and diverse issues, whilst ensuring that they wouldn’t require too much up-front time spent learning about the entire application. I had decided to try the new project feature inside GitHub to plan and organize the event. I created a few statuses and dropped issues into the “Not Started” category. The project feature is a basic Kanban board which allows cards to be dragged between statuses as the work progresses.

I also put together a prerequisites list for the confirmed and tentative attendees, guiding them to ensure they had the latest tooling for .NET Core installed, had forked and cloned the repository and were able to get it building. This is an important activity since it’s a little time consuming and we didn’t want to spend most of the time at the code-a-thon performing the setup work. As the big day approached we started to firm up numbers so we knew what cabling infrastructure we would need.

On the day

On the morning of the event, I did a final round up of the staff to confirm final attendees. After completing this we had about 6 people able to attend locally, as well as one of our developers from our North American office in Toronto, Canada. I started collecting GitHub usernames so that we could add people as contributors on the repository. This isn’t absolutely required, but aids in assigning issues to specific people. We also arranged to get people added to the Humanitarian Toolbox Slack channel so they could ask questions of our project experts and the Humanitarian Toolbox founders during the event.

At 4:30pm (with 30 mins to go) I moved into the meeting room we would be using for the evening. I wanted to get the webcam set up and ensure we had the cables for power and networking. Our fantastic systems technician Ricky had beaten me to it and already sorted the cabling requirements. We tested out the remote Skype link to Luke in Canada and made sure we had everything ready. A big thanks to Ricky for volunteering his time to provide some tech support and make sure we got up and running so smoothly.

Humanitarian Toolbox Code-A-Thon Sign
You must have a nice sign when running a code-a-thon!

At 5pm our team of developers migrated, like birds in the winter, to the meeting room. We had 5 developers joining me locally, plus Luke video conferencing with us from across the pond. We’d decided to relocate our desktop PCs into this common area so that it would be easier to support each other during the event. While this required a little time up-front to disconnect and reconnect the PCs, it proved a good move as I was able to answer questions, share information and demo things very easily for everyone.

Preparing for the event
Developers getting set up for the event

By about 5:20pm we were in good shape; the computers were moved, developers were settled and pizza orders had been taken (priorities people). We started with a brief Google Hangouts standup with the Humanitarian Toolbox team. James and Tony introduced the project and its goals, and thanked everyone for taking the time at the end of their working day to stay on and code for the greater good. We also had one of the most regular project contributors, Mike, on the call, showing his support for the event. Being able to speak to the founders and project team made for a great start and really set the tone for a fun evening, supporting code to save lives.

It’s worth pointing out here a couple of things that James and Tony highlighted during the standup. Firstly, the initial use case for the application will be to aid the American Red Cross with the effort to install free smoke alarms in people’s homes. Already this initiative has helped save lives when disaster struck and a family’s home caught fire. Thanks to a smoke alarm installed by the Red Cross, the family were alerted to the fire and all able to evacuate safely. Also important to note is that for each hour of coding time spent on the application we can expect about 40 hours of volunteer time saved. That’s a huge return and really shows that even sparing a few hours of personal time can have a massive impact.

With the project introduced and the devs eager to get going, we started the team off by finding issues people were keen to work on. Sarah, Roberto and Luke took on some unit tests, Patrick picked up a new feature requirement, while Chris dove in with some EF and migrations code for the first time. Mark, one of our front-end developers, started looking at the homepage UI improvements. I floated around the room to answer questions, demonstrate the site functionality and help with the GitHub flow. I really enjoyed watching new contributors getting up to speed with the code and being able to assist their learning as they progressed. It was good to be able to witness some of the common questions new contributors have so we can focus on lowering the barrier to entry in the future.

Developers working hard during the code-a-thon

There were a lot of new things for everyone to learn and they did a fantastic job of absorbing the information and becoming productive quickly. All of the developers were new to GitHub and open source, so there was learning to be done around the processes to ensure an up-to-date master branch, manage rebasing and prepare pull requests. Most of the team were also experiencing ASP.NET Core for the first time, which in itself has a lot of new concepts to learn. It was also the first exposure to Entity Framework for everyone, so that had its own learning curve too. This really highlights another benefit of contributing to the project for developers: it’s a great learning experience that puts a real-world, production-ready codebase in the hands of developers.

During the 3 hours everyone knuckled down and, other than a brief break to load up our plates with some pizza, we worked solidly. I had hoped to use the GitHub project feature to keep track of things, but we hit some issues with people not being able to move the cards themselves. While I was able to do this, it proved more of a hindrance as my time was better spent helping people around the room. In the end we abandoned that feature, and in hindsight a post-it note board might have been easier to manage. I still like the concept of using GitHub so that others not physically at the event can monitor progress, but we need to find a way to allow contributors to manage the cards as they work on issues. I suspect it might just be a permissions thing, so I’ll investigate it soon.

By the end of the evening we had submitted two pull requests, which were reviewed and merged into the project before we left. We also had three other issues very close to being completed, which will be finished off in the coming days and hopefully submitted soon. Given the setup time, huge learning curve and relatively short coding period of three hours, I’m very pleased with this achievement. Everyone was amazed when we realised that we had hit the 8pm finish already. Time flew, which is a great sign, and from feedback so far people would have liked even longer with the code. Perhaps an all-day event is on the cards for the future.

I really hope that everyone left feeling as positive and happy as I did. Certainly the sense I got was that everyone enjoyed learning some new things, getting to grips with the code and contributing to a good cause. I feel proud to be part of such a generous team of people who were able to join this code-a-thon after a full day of development at work first. Everyone should be very proud of what they managed to contribute. I’d love to run another session to continue the great start we’ve made and if people are willing, perhaps we can make it a regular thing or even look at a longer full day event.

From a personal perspective I enjoyed sharing what I’d learned during my time with the project and seeing other developers pick up the concepts for themselves. Prior to this, I’ve never been too keen on the idea of being a “teacher” and even presenting is not in my normal comfort zone, but I found a bit of a passion for instructing people. I’m a firm believer that sharing information strengthens your own understanding, as you are challenged to know the material well enough to articulate the concepts.

Code-a-Thon Team Photo
Code-a-Thon Team Photo (Sarah had escaped just prior to this being taken!)

Feedback

The feedback the day after has been extremely positive. I’m very happy to hear that people enjoyed themselves and had a positive and fun experience. It’s nice to spend time coding for fun, outside of the normal day-to-day work. Being able to put your skills to use towards such a positive concept is also very rewarding. From a quick survey afterwards, the team are keen to continue to contribute to the project and would like to take part in another code-a-thon in the future.

Thanks again to everyone who took part:

Our developers: Sarah, Patrick, Chris, Mark, Roberto and Luke
Tech support: Ricky
Planning and management: Steve K
Humanitarian Toolbox Support: James, Tony and Mike

Custom ModelBinding in ASP.NET MVC Core: How to HTML Encode Strings

In a previous post I showed how we could automatically HTML encode data when deserialising it from a JSON request body. In my case this was to meet some specific security requirements we had for an ASP.NET Core API we were building. This time around I will discuss a similar requirement which stated that we should also ensure we HTML encode any strings bound from the route, querystring and form data. We’ll do this by creating a custom ModelBinder.

Much like our earlier requirement, we needed to ensure that we are not storing un-encoded data containing HTML or script tags in our database. While our application escapes the data on the way out, we want to prevent anyone accidentally rendering un-escaped HTML in future applications. We have no requirement to accept HTML code on our API so we decided to ensure each value was encoded by default during binding. As before we wanted to enable this globally so that no developer would have to take specific steps to enable this per controller or action.

Model Binding Flow

Before I jump into the solution, I’ll first explain at a high level how the model binding flow works in ASP.NET Core MVC. To understand this better I followed the steps from my post on debugging into the ASP.NET Core source, which describes how we can add the MVC source solution to our own code, allowing us to debug into it. With this in place I was able to step through the model binding code to watch and understand what was happening.

Model binding in principle is quite straightforward. It attempts to match up values coming in on the request to the parameters expected by the controller action. Each value is run through the model binding flow, which looks for a suitable binder to handle the value. To find a suitable binder, ASP.NET Core MVC uses binder providers. These providers are registered when MVC is initialised and by default include 14 different providers. Each of these implements the IModelBinderProvider interface.
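
For reference, the IModelBinderProvider interface is tiny; it exposes a single method which either returns a binder or null:

public interface IModelBinderProvider
{
    IModelBinder GetBinder(ModelBinderProviderContext context);
}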

There is a provider to handle key value pairs for example, another for complex types and another for simple types. These providers are registered in a specific order and MVC checks each one in that order during the binding process until it finds the first provider which can supply a suitable binder for the object being bound. Each binding provider will have some conditions that check the binding provider context. Once these conditions are met, the provider will return a binder that can handle the binding.

Once we have a model binder available, its BindModelAsync method is called. This method expects to handle the value of the object being bound and returns a ModelBindingResult. If the binding succeeds as expected then a Success is returned, including the final value to be bound to a property. I hope to spend more time exploring the binding process in a future post. It’s a little beyond the scope here to explain the deeper details of how everything is hooked up. For now, let’s look at how we meet the requirement above to HTML encode values during the binding process.

Let’s assume we have the following action.
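
The original code sample isn’t reproduced here, so as a stand-in, a minimal sketch of such an action might look like this (the controller and action names are purely illustrative):

using Microsoft.AspNetCore.Mvc;

public class ItemsController : Controller
{
    [HttpPost]
    public IActionResult UpdateTitle(string newTitle)
    {
        // newTitle is bound from the route, query string or form data.
        return Ok(newTitle);
    }
}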

We want to ensure that the newTitle property is HTML encoded by the time we can access it within the action.

Creating a ModelBinder

The starting point for our solution was to create a model binder based on the IModelBinder interface.
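
The original snippet isn’t included here, so below is a sketch reconstructed from the description that follows. Note that TaskCache.CompletedTask is an internal MVC helper, so this sketch uses the public Task.CompletedTask in its place:

using System.Text.Encodings.Web;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc.ModelBinding;

public class HtmlEncodeModelBinder : IModelBinder
{
    private readonly IModelBinder _fallbackBinder;

    public HtmlEncodeModelBinder(IModelBinder fallbackBinder)
    {
        _fallbackBinder = fallbackBinder;
    }

    public Task BindModelAsync(ModelBindingContext bindingContext)
    {
        var valueProviderResult = bindingContext.ValueProvider.GetValue(bindingContext.ModelName);

        if (valueProviderResult != ValueProviderResult.None)
        {
            var valueAsString = valueProviderResult.FirstValue;

            if (string.IsNullOrEmpty(valueAsString))
            {
                // Hand null/empty strings off to the fallback binder.
                return _fallbackBinder.BindModelAsync(bindingContext);
            }

            // HTML encode the value and complete the binding with a success result.
            var encodedValue = HtmlEncoder.Default.Encode(valueAsString);
            bindingContext.Result = ModelBindingResult.Success(encodedValue);
        }

        return Task.CompletedTask;
    }
}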

We initialize the binder passing in an IModelBinder as a fallback. In my case I specifically expect to handle string values, but I don’t want to worry about the cases for handling null or empty strings. That is already covered by the default SimpleTypeModelBinder provided in MVC. Therefore I chose to pass in a binder to which I can hand off responsibility in those cases.

Within the BindModelAsync method we first call the ValueProvider.GetValue method on the value provider in the bindingContext, passing in the model name, which returns a valueProviderResult. As long as there is a value in the result, we first check whether it’s null or empty. If so, this is where we call the fallback binder’s BindModelAsync method. If we do have a suitable value then we proceed to HTML encode it before creating a success ModelBindingResult. Finally we return a completed task using the internal helper TaskCache.CompletedTask.

Creating a ModelBinderProvider

Now that we have a model binder defined we need to create a provider which will determine if the modelbinder is suited to the object being bound. Here is the code…
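
(The code below is a sketch based on the description that follows, rather than the original gist.)

using System;
using Microsoft.AspNetCore.Mvc.ModelBinding;
using Microsoft.AspNetCore.Mvc.ModelBinding.Binders;

public class HtmlEncodeModelBinderProvider : IModelBinderProvider
{
    public IModelBinder GetBinder(ModelBinderProviderContext context)
    {
        if (context == null)
        {
            throw new ArgumentNullException(nameof(context));
        }

        // We only handle strings; returning null lets the next provider try.
        if (!context.Metadata.IsComplexType && context.Metadata.ModelType == typeof(string))
        {
            // The SimpleTypeModelBinder acts as the fallback for null/empty strings.
            return new HtmlEncodeModelBinder(new SimpleTypeModelBinder(context.Metadata.ModelType));
        }

        return null;
    }
}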

This provider is pretty simple. We must implement the GetBinder method on the IModelBinderProvider interface. We use the Metadata on the ModelBinderProviderContext to determine if we can provide a suitable binder. This metadata includes some helper properties, such as the IsComplexType flag, to help us determine if we can provide binding for the object. In this case we are looking only for strings. If the object being bound is a string, then we return our custom HtmlEncodeModelBinder. Otherwise we return null. When we return null the next binding provider will be given the chance to provide a binder. This continues until a suitable binder is found. You’ll notice that we pass in a new SimpleTypeModelBinder which will act as our fallback for any null or empty string cases we encounter during binding.

Now that we have a binder and a provider, the final step is to add this provider to the list of binding providers. Since these are executed in a set order we also need to place our provider in the right place. I’ve achieved this using the following extension to the MVC options.
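
(Again, a sketch based on the description below; the extension method name is illustrative.)

using System.Linq;
using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.Mvc.ModelBinding.Binders;

public static class MvcOptionsExtensions
{
    public static void UseHtmlEncodeModelBinding(this MvcOptions opts)
    {
        // Find the default simple type binder provider in the registered list.
        var simpleTypeProvider = opts.ModelBinderProviders
            .FirstOrDefault(p => p.GetType() == typeof(SimpleTypeModelBinderProvider));

        if (simpleTypeProvider == null)
        {
            return;
        }

        // Insert our provider just before it so string values reach our binder first.
        var index = opts.ModelBinderProviders.IndexOf(simpleTypeProvider);
        opts.ModelBinderProviders.Insert(index, new HtmlEncodeModelBinderProvider());
    }
}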

What we are doing here is using LINQ to find the SimpleTypeModelBinderProvider in the list of ModelBinderProviders. Our binder provider needs to run just before the simple type binder to ensure that we have the opportunity to handle string types with our HTML encoding logic. If we placed it after the SimpleTypeModelBinderProvider we would find that the binding flow never reached our code, as the binding would already have been handled. We then get the index of the SimpleTypeModelBinderProvider and, using that index, insert our binder provider into the list at that position. Now, when MVC binding occurs, our binder provider will be part of the binding process. If we inspect the ModelBinderProviders during debug it should now look like this:

ASP.NET Core ModelBinders List

You can see our new custom HtmlEncodeModelBinderProvider listed before the SimpleTypeModelBinderProvider.

Finally, with everything complete, we can do the final wiring up. We call our options extension when adding MVC within the ConfigureServices method in Startup.cs.
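
Assuming the extension method sketched above, the wiring might look like this:

public void ConfigureServices(IServiceCollection services)
{
    // Registers our binder provider ahead of the SimpleTypeModelBinderProvider.
    services.AddMvc(options => options.UseHtmlEncodeModelBinding());
}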

Now when the newTitle parameter is bound on our example action, it will ensure that any HTML tags are safely encoded. As with all of my posts, I’ve taken the best approach I could think of to implement this. I welcome any comments and suggestions to improve this code or correct any mistakes!

R.I.P project.json – Out with the new, in with the old: An early look at the VS 2017 csproj changes

Okay; the title of the post is a bit tongue in cheek! Since it was first announced that the new project.json format (developed by the ASP.NET Core team) was going to be retired in favour of the more traditional csproj file, the .NET community have had some very strong opinions. These have been shared on Twitter, in blog posts, on GitHub and via any other channels people could find. I will admit that when I heard about the change, I initially had some of the same views and concerns. Why would Microsoft take away my beloved project.json? However, I decided to wait it out and see what Microsoft actually produced before passing judgement.

A little bit of history

Let me take a step back here and try to explain a little bit of the history as I have understood it. In the early beta timeframe for ASP.NET Core (then called ASP.NET 5) the ASP.NET and .NET teams were working independently of one another. The .NET team were working heavily on the API surface for .NET Core, whilst the ASP.NET team were working on the ASP.NET Core application platform on top of the API the .NET team were making. The tooling to work with ASP.NET Core was in an early preview and at the time we had DNVM and DNU commands. These have since been consolidated and morphed to form the dotnet CLI.

In order to support development of ASP.NET projects, the team decided to take the opportunity to develop a brand new project file format. One which would be simpler and address many of the issues the community had experienced when working with csproj based projects. At the time, nothing existed specifically for .NET Core projects and it was felt that since ASP.NET Core represented a new beginning for the platform, it was as good a time as any to make changes. During the betas and release candidates people started to get their hands on the new project.json and xproj based solutions when building their web applications. The response was indeed very positive. project.json provided autocomplete of dependencies, ease of reading and a much simpler overall structure.

Personally, I was very pleased when project.json was born. I could easily understand it and it became a regular reference point to review my dependencies and manage my projects.

However, it was during the release candidate period that decisions started to get made around a wider .NET project format for the other application types. Customers were starting to share their concerns around project.json and the fact that it was not supported within MSBuild. What would this mean for large monolith projects? How would they even begin to migrate them to .NET Core? And, being fair, these were valid concerns. Starting with greenfield ASP.NET Core projects was fantastic. There was no need to concern ourselves with the past, and the new project.json was easy to get to grips with. But working with an existing project would not necessarily be a simple conversion without some degree of work. So the announcements started to be made that project.json would be retired in favour of a more backwards compatible, MSBuild-ready csproj format. It was too late in the day for this work to be completed before the RTM of ASP.NET Core 1.0, but we were warned that it would be worked on for release alongside Visual Studio 2017.

And that brings us to today: with the announcement of the release candidate for Visual Studio 2017 we can now get our hands on projects using the new (or should I say old) csproj project format. I’ve taken an early peek at the file to see what has changed.

Comparing project.json (VS 2015) with csproj (VS 2017 RC)

While working on the change, Microsoft assured us that they expected to port many of the improvements the project.json file gave developers over to the improved csproj format.

Comparing project.json and the new csproj

In the screenshot above (hard to see due to the size) I’ve opened a default new Web Application project in VS 2015 (left) and VS 2017 (right). Side by side, the obvious difference is that project.json is JSON and the csproj is XML. You will notice that the line length of the files is not too different. project.json comes in at 65 lines and csproj comes in at 77 lines. That is already a significant reduction in lines from a traditional csproj file.

Here are the complete files:

The other thing which stands out to me is the readability. Despite the work Microsoft have done to reduce noise in the csproj file, I still find it much harder to scan than the project.json. The XML tags draw my eye away from the detail that I actually want to consume.

At the time of writing I’m still very new to working inside VS 2017 and indeed on my computer it’s proving quite buggy. For example, a solution that I created inside VS 2017 and which had been working fine (1 web application and 1 test class library) no longer re-opens after I saved and closed it. Suffice to say, I won’t be using Visual Studio 2017 for production work at this time and may even remove it entirely as it seems to be affecting my experience inside VS 2015 too.

One nice advantage of the csproj file over the project.json is that we can now include references to full (non-Core) .NET projects as well. Previously this integration was not available.

Something I do miss, however (unless it’s just not working for me), is proper autocomplete of the packages and versions. With the project.json I could start typing the name of a dependency and Visual Studio would show me potential autocomplete options. I could also choose from the available versions. This seems to be absent in the csproj file. Personally I only saw basic IntelliSense for some, but not all, of the XML tags.

Migrating from project.json

Migration from project.json can be achieved in one of two ways. The easiest is to open the existing xproj/project.json project inside Visual Studio 2017. The IDE will detect the project format and prompt for a migration. The other option is to run the dotnet migrate command directly in the command line. This should result in the same conversion.
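
For example, from a command prompt in the folder containing the project (the path here is purely illustrative):

cd C:\Projects\WebApplication1\src\WebApplication1
dotnet migrate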

Migration from project.json

In a quick test for me, the VS migration seemed to work quite well, although that was a single default web application project with basic dependencies. In theory it should work just as well for larger projects. Perhaps when we come to migrate the allReady project which I contribute to, we’ll see if there’s more to it.

Having done this with the project.json solution created in VS 2015 (as appeared above) we get the following csproj output.

Inside the project.json we have our main dependencies section and inside that we define the package and version. This is pretty clear and easy to read. Inside the csproj each package reference is added inside an ItemGroup element. It’s more verbose than the project.json.
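
As a rough illustration (the package and version are just examples), the same dependency in each format looks something like this. In project.json:

"dependencies": {
  "Microsoft.AspNetCore.Mvc": "1.0.1"
}

And in the new csproj:

<ItemGroup>
  <PackageReference Include="Microsoft.AspNetCore.Mvc">
    <Version>1.0.1</Version>
  </PackageReference>
</ItemGroup>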

Much of the content is similar and when you compare the files you can generally find where each part has been migrated. However you’ll need to remember the more complex XML schema in order to edit the file manually. Indeed, Microsoft seem to be pushing us away from this in favour of IDE tooling and the command line. I fear though, that those are never going to be as quick to work with as we’ve been used to with a quick edit of the project.json file.

Differences between the original csproj and the new csproj

There are some other notable changes between what we see in a traditional csproj file when compared to the newer csproj file. Firstly, they have gotten rid of the use of GUIDs within the file, making it much more human-readable. That’s a good change and one I’m happy to see.

We are no longer required to include all of the files within the csproj file in order for them to be considered part of the project. project.json gave us a much more favourable, include by default, behaviour and this has now been made available inside csproj. It’s ever so slightly different as we must use a wildcard style include line to tell the project to behave this way. New projects created inside VS 2017 have this by default. When working with teams of developers, the csproj file was a common area of merge conflicts. We should no longer have so many issues since the project file will not change simply because we’ve added new files to the project.
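
In a new VS 2017 RC project, that include-by-default behaviour shows up as wildcard items along these lines (the exact items may vary by template):

<ItemGroup>
  <Compile Include="**\*.cs" />
  <EmbeddedResource Include="**\*.resx" />
</ItemGroup>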

Package references are now integrated into csproj so this single file also includes any libraries our project is dependent on and any projects we reference. It’s nice to have everything in one place as we’ve become used to with the project.json format.

Another important change is that we can now edit the csproj file whilst the project is loaded. This was not previously possible and meant that any manual changes were slow to make as we had to first unload the project. Granted, it was rare that I wanted to manually edit the file in the full .NET framework days. To edit the csproj file we can simply right click on the project without unloading it first.

Edit csproj inside VS 2017

One gripe I experienced at this point is a warning about inconsistent line endings every time I edit the csproj file.

Inconsistent line endings

Other project differences in VS 2017

As well as the direct differences inside the file, there are a few initial tooling differences I’ve noticed and wanted to highlight between VS 2015 and VS 2017.

The main one I’ve seen so far is the consolidated dependencies folder. Before, we had a “References” folder which included any dependent libraries we added via Nuget and any project references. We then had a “Dependencies” folder which included Bower dependencies. Inside Visual Studio 2017 these have been placed together inside a single “Dependencies” container. Project references have their own sub-container to separate them from the Nuget dependencies.

dependenciesfoldervs2017_1

dependenciesfoldervs2017_2

Initial Impressions

I definitely don’t find the XML as readable as the JSON for reviewing the configuration of my project. This will improve with time as I get used to it, but there’s still too much noise in the file for my liking. It’s certainly a big improvement on the traditional csproj format though and the wildcard include of files is an important enhancement.

Personally, I still much prefer the project.json format and, whilst I understand the business case that led Microsoft to their change of course, I feel it’s a shame the MSBuild system couldn’t have moved with the times rather than dropping back to the XML-based project file.

The lack of autocomplete for dependencies bothers me too. This was really handy and I could quickly build up my dependencies without having to open up the Nuget package manager each time. It feels like we’re now being forced down that route which proves to be a bit slower and less efficient.

It’s still very early days and I need to spend more time with the new version of csproj in a final stable version of VS 2017 before passing a final judgement. We also have to remember that the .NET SDK tooling is not quite fully baked yet either. At this early point I’m running into quite a few issues running VS 2017 so it’s hard to fully appreciate the final experience.

The Microsoft statements suggest that in reality the tooling and command line interface should negate the need to spend much, if any, time editing the csproj manually. Once everything is complete we will be in a better place to judge for ourselves. I can’t help but assume things will end up being slower overall and that I’ll end up missing the ease of the project.json file.

Update 23rd Nov 2016

After reviewing https://blogs.msdn.microsoft.com/dotnet/2016/10/19/net-core-tooling-in-visual-studio-15/ I can see that the format for the new csproj that I have experienced in VS 2017 RC doesn’t match the example of what I presume is the end goal. So it’s entirely possible that the structure will evolve even more before release. The example in the MSDN blog does look cleaner and much less cluttered so I will watch this space with interest.

Update 7th March 2017

Further reflection on the changes and the way it has been handled in my post – Visual Studio 2017 is Released – Reflecting on the loss of project.json and looking ahead to the new VS 2017 experience

Loading Pins on Bing Maps from ASP.NET Core MVC Data

As I’ve covered a few times previously in my blog I’m really enjoying working on the allReady project which is run by the Humanitarian Toolbox non-profit organisation. One of the great things about this project from a personal perspective is the chance to learn and develop my skills, whilst also contributing to a good cause.

Recently I picked up an issue which was not in my normal comfort zone, where the requirement was to load a Bing map showing a number of pins relating to request data coming from our MVC view model. I’m not very experienced with JavaScript and tend to avoid it whenever possible, but in this case I did need to use it to take data from my ASP.NET Core model to then populate the Bing maps SDK. As part of the requirement we needed to colour code the pins based on the status of the request.

My starting point was to have a look at the SDK documentation available at http://www.bing.com/api/maps/mapcontrol/isdk. After a bit of reading it looked possible to meet the requirement using the v8 SDK.

The first step was to update our Razor view to include a div where we wanted to display the map. In our case I had decided to include a full width map at the bottom of the page, so my containing div was as follows:

<div id="myMap" style="position:relative;width:100%;height:500px;"></div>

The next step was to include some JavaScript on the page to use the data from our view model to build up the pushpin locations to display on the map. Some of the existing map logic we have on the allReady project is stored inside a site.js file. This means we don’t need to include too much inline code on the page itself. In my case the final code was as follows:

@section scripts {
    <script type='text/javascript'
            src='https://www.bing.com/api/maps/mapcontrol?callback=GetMap'
            async defer></script>
    <script type='text/javascript'>
        function GetMap() {
            renderRequestsMap("myMap", getLocationsFromModelRequests());
        };

        function getLocationsFromModelRequests() {
            var requestData = [];
            @foreach (var request in Model.Requests)
            {
                @:var reqData = {lat:@request.Latitude, long:@request.Longitude, name:'@request.Name', color:'blue'};

                if (request.Status == RequestStatus.Completed)
                {
                    @:reqData.color = 'green';
                }

                if (request.Status == RequestStatus.Canceled)
                {
                    @:reqData.color = 'red';
                }

                @:requestData.push(reqData);
            }
            return requestData;
        }
    </script>
}

This renders two script blocks inside our master _layout page’s scripts section which is rendered at the bottom of the page body. The first script block simply brings in the Bing map code. The second block builds up the data to pass to our renderRequestsMap function in our site.js code (which we’ll look at later).

The main function here is my getLocationsFromModelRequests code which creates an empty array to hold our requestData. I loop over the requests in our MVC view model and first load the latitude and longitude information into a JavaScript reqData object. We set a name which will be displayed under the pin using the name of the request from our Model. We also set the color on this object to blue which will be our default colour for pending requests when we draw our map.

I then update the color property for our two other possible request statuses. Green for completed requests and red for cancelled requests. With the object now complete we push it into the requestData array. This forms the data object we need to pass into our renderRequestsMap function in our site.js.

The relevant portions of the site.js that result in producing a final map are as follows:

var BingMapKey = "ENTER_YOUR_KEY_HERE";

var renderRequestsMap = function(divIdForMap, requestData) {
    if (requestData) {
        var bingMap = createBingMap(divIdForMap);
        addRequestPins(bingMap, requestData);
    }
}

function createBingMap(divIdForMap) {
    return new Microsoft.Maps.Map(
        document.getElementById(divIdForMap), {
        credentials: BingMapKey
    });
}

function addRequestPins(bingMap, requestData) {
    var locations = [];
    $.each(requestData, function (index, data) {
        var location = new Microsoft.Maps.Location(data.lat, data.long);
        locations.push(location);
        var order = index + 1;
        var pin = new Microsoft.Maps.Pushpin(location, { title: data.name, color: data.color, text: order.toString() });
        bingMap.entities.push(pin);        
    });
    var rect = Microsoft.Maps.LocationRect.fromLocations(locations);
    bingMap.setView({ bounds: rect, padding: 80 });
}

The renderRequestsMap function takes in the id of the containing div for the map and then our requestData array we built up in our Razor view. First it calls a small helper function which creates a bing map object targeting our supplied div id. We then pass the map object and the request data into addRequestPins.

addRequestPins creates an array to hold the location data which we build up by looping over each item in our request data. We create a Microsoft.Maps.Location object using the latitude and longitude and add that to the array (we’ll use this later). We then create a Microsoft.Maps.Pushpin which takes the location object and then a pushpin options object. In the options we define the title for the pin and the color. We also set the text for the pin to a numeric value which increments for each pin we’re adding. That way each pin has a number which corresponds to its position in our list of requests. With all of the pushpin data populated we can push the pin into the map’s entities array.

Once we’ve added all of the pins the final step is to define the view for the map so that it centers on and displays all of the pins we have added. I’ve achieved that here by defining a rectangle using the Microsoft.Maps.LocationRect.fromLocations helper function. We can then call setView on the map object, passing in that rectangle as the bounds value. We also include a padding value to ensure there is a little extra space around the outlying pins.

With these few sections of JavaScript code when we load our page the map is displayed with pins corresponding to the location of our requests. Here is what the final map looks like within our allReady application.

Resulting Bing Map

Debugging into ASP.NET Core Source: Quick Tip - How to debug into the ASP.NET source code

Update 26-10-16: Thanks to eagle-eyed reader Japawel who has commented below; it’s been pointed out that there is a rel/1.0.1 release tag that I’d missed when writing this post. Using that negated the two troubleshooting issues I had included at the end of this post. I’ve updated the tag name in this post but left the troubleshooting steps just in case.

As I spend more time with ASP.NET Core, reading the source code to learn about it, one thing I find myself doing quite often is debugging into the ASP.NET Core source. This makes it much easier to step through sections of the code to work out how they function internally. Now that Microsoft have gone open source with .NET Core and ASP.NET Core, the source is readily available on GitHub. In this short post I’m going to describe the steps that allow you to add ASP.NET Core source code to your projects.

When you work with an ASP.NET Core application project you will be adding references to ASP.NET components such as MVC in your project.json file. This will add those packages as dependencies to your project which get pulled down via Nuget. One cool feature of the current solution format is that we can easily provide a path to the full source code and VS will automatically add the relevant projects into your solution. Once this takes place it’s the code in those projects which will get executed, so you can now debug them as you would any other area of code in your application.

In this post we’ll cover bringing in the main MVC Core source into an ASP.NET Core application.

The first step is to go and get the source code from GitHub. Navigate to https://github.com/aspnet/Mvc and use your preferred method to pull down the source. I have GitHub Desktop installed so I click the “Open in Desktop” link and choose somewhere on my computer to clone the source into.

Clone MVC Core via GitHub Desktop

By default the master branch will be checked out, which will contain the most recent code added by the ASP.NET team. To be able to import the actual projects into our application we need to ensure the version of the code matches the version we are targeting in our project.json. At the time of writing, the most recent release version of Microsoft.AspNetCore.Mvc is 1.0.1. The second step therefore is to checkout the appropriate matching version. Fortunately this is quite simple as Microsoft tag the release versions in Git. Once the repository is cloned locally, open a terminal window at the location of the source. If we run the “git tag” command we will get a list of all available tags.

E:\Software Development\Projects\AspNet\Mvc [master ≡]> git tag
1.0.0
1.0.0-rc2
1.0.1
6.0.0-alpha2
6.0.0-alpha3
6.0.0-alpha4
6.0.0-beta1
6.0.0-beta2
6.0.0-beta3
6.0.0-beta4
6.0.0-beta5
6.0.0-beta6
6.0.0-beta7
6.0.0-beta8
6.0.0-rc1
rel/1.0.1
E:\Software Development\Projects\AspNet\Mvc [master ≡]>

We can see the tag we need listed, rel/1.0.1, so the next step is to check that code out using “git checkout rel/1.0.1”. You will get a warning about moving into a detached HEAD state from Git. This is because we are no longer directly on the end of a branch. This isn’t a problem, since we are interested in viewing code from this specific point in the Git commit history.

Now that we have the source cloned locally and have the correct version checked out we can add the reference to the source into our project. Open up your ASP.NET Core application in Visual Studio. You should see a solution directory called Solution Items with a global.json file in it.

Standard MVC Core solution

The global.json defines some solution level tooling configuration. In a default ASP.NET Core application it should look like this:

{
  "projects": [ "src", "test" ],
  "sdk": {
    "version": "1.0.0-preview2-003131"
  }
}

What we will now do is add in the path to the MVC source we cloned down. Update the file by adding in the path to the array of projects. You will need to provide the path to the “src” folder and use double backslashes in your path.

{
  "projects": [ "src", "test", "E:\\Projects\\AspNet\\Mvc\\src" ],
  "sdk": {
    "version": "1.0.0-preview2-003131"
  }
}

Now save the global.json file and Visual Studio will pick up the change and begin a package restore. After a few moments you should see additional projects populate within the solution explorer.

Solution with MVC Core projects

You can now navigate the code in these files and, if you want, add breakpoints to debug into the code when you run your application in debug mode.

Troubleshooting Tips

As mentioned at the start of this post, these troubleshooting steps should no longer be necessary if you are using the rel/1.0.1 tag. In an earlier incorrect version of this post I had referenced pulling down the 1.0.1 tag which required the steps below to get things working.

While the above steps worked for me on one machine I had a couple of issues when following these steps on a fresh device.

Issue 1 – Package restore errors for the MVC solution

Despite MVC 1.0.1 being a released version I was surprised to find that I had trouble with restoring and building the MVC projects due to missing dependencies. When comparing my two computers I found that on one I had an additional source configured which seemed to be solving the problem for me on that device.

The error I was seeing was on various packages, but the most common was a dependency on Microsoft.Extensions.PropertyHelper.Sources 1.0.0. The exact error in my case was NU1001 The dependency Microsoft.Extensions.PropertyHelper.Sources >= 1.0.0-* could not be resolved.

To set up your machine with this package source, open the MVC solution and navigate to the NuGet Package Manager > Package Sources options. This can be found under the Tools > NuGet Package Manager > Package Manager Settings menu.

Add a new package source pointing at the MyGet feed which contains the required libraries. In this case https://dotnet.myget.org/F/aspnetcore-master/api/v3/index.json worked for me.

Package Manager with MyGet feed

Issue 2 – Metadata file could not be found

I hadn’t had this issue in the past when bringing the ASP.NET Core Identity source into a project, but with MVC I was unable to build my application once the source had been added. I was getting the following error for each project included in my solution.

C:\Projects\WebApplication6\src\WebApplication6\error CS0006: Metadata file ‘C:\Projects\AspNetCore\Mvc\src\Microsoft.AspNetCore.Mvc\bin\Debug\netstandard1.6\Microsoft.AspNetCore.Mvc.dll’ could not be found

The reason for the issue is that the MVC projects are configured to build to the artifacts folder which sits next to the solution file. Our project is looking for them under the bin folder for each of the projects. This is a bit of a pain and the simplest solution I could find was to manually modify the output path for each of the projects being included into my solution.

Open up each .xproj file and modify the following line from

<OutputPath Condition="'$(OutputPath)'=='' ">..\..\artifacts\bin\</OutputPath>

to

<OutputPath Condition="'$(OutputPath)'=='' ">.\bin\</OutputPath>

This will ensure that when built, the DLLs will be placed in the path where our solution expects to find them.

Depending on what you’re importing this can mean changing quite a few files, 14 in the case of the MVC projects. If anyone knows a cleaner way to solve this problem, please let me know!