Migrating from .NET Framework to .NET Core
The journey of re-targeting an ASP.NET Core application onto .NET Core

What better way to start the new year than with some coding? In my case I’d found a bit of time towards the end of my Christmas vacation to tackle an issue on allReady I’d been wanting to work on for a while. Before getting into the issue, if you want to read more about what allReady is, you can read my earlier blog post where I cover contributing to it in greater detail.

allReady was first created during the early betas of ASP.NET Core (at which point it was known as ASP.NET 5). At that time, due to some dependencies which did not yet support .NET Core, the application was built on top of the full .NET Framework 4.6.

The first thing to discuss briefly is why and how we can run ASP.NET Core on the traditional full .NET Framework. While it might be reasonable to assume that ASP.NET Core naturally needs to run on the new .NET Core framework, remember that .NET Core ultimately includes a subset of the larger full .NET Framework API. Because of this, it is perfectly possible to run ASP.NET Core on the original 4.x version of the .NET Framework. This is an advantage for those who want to make use of the new features and optimisations within ASP.NET Core, while still utilising the libraries they know and love. This is a perfectly reasonable way to run ASP.NET Core, but with an important limitation: by targeting the .NET Framework you are not able to develop or host the ASP.NET Core application outside of Windows.

For the allReady project this had started to become a bit of a limitation. We had Mac users who were keen to contribute to the project, but were unable to do so without installing and running a Windows VM on their device. As we were getting to the end of some complex requirements for allReady’s v1 milestone, I took the opportunity to spend some time looking at retargeting the application to run on .NET Core. This turned out to be a bit more involved than I had first anticipated. However, we have succeeded and the code base has been proven to build and run on both Mac and Linux devices.

Updating the Solution

The first part of the process was to update the project.json file inside the web project. The main update here was changing the framework specification from net46 to netcoreapp1.0.

project.json before:
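(For illustration, here is a representative sketch of the relevant frameworks section rather than our exact file.)

"frameworks": {
  "net46": { }
}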

project.json after:
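(Again a representative sketch. Note the netcoreapp1.0 moniker, the platform dependency on Microsoft.NETCore.App and the new imports section; the version numbers and import values shown here are assumptions based on the standard template of the time.)

"dependencies": {
  "Microsoft.NETCore.App": {
    "type": "platform",
    "version": "1.0.3"
  }
},
"frameworks": {
  "netcoreapp1.0": {
    "imports": [
      "dotnet5.6",
      "portable-net45+win8"
    ]
  }
}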

We had decided as a team to continue to track the .NET Core 1.0.x LTS support stream. In short, this is a stable version of the framework, under full Microsoft support. New patch releases appear to fix major bugs or security issues, but otherwise the changes are expected to be quite limited. Our project was already targeting the latest 1.0.3 SDK and the latest 1.0.x libraries for each of the components.

You’ll see above that as well as changing the framework moniker I’ve added an imports section. This was required in our case as we have dependencies on some libraries which don’t yet specify the dotnet Target Framework Moniker or target .NET Standard. It’s essentially there for backwards compatibility until the ecosystem settles down.

Handling Dependencies

Upon making the required changes to project.json, it became apparent that some of the packages we depend on could no longer be restored. The first thing I did was to use the NuGet Package Manager to update all of the dependencies to their latest versions, to see if any of them had been updated to support .NET Core. This helped in some cases but still left me with a few that were not compatible. For the time being I commented those out, as well as commenting out any code using those dependencies. In our case the main libraries we had issues with were…

  • ZXing, which was being used in one place to generate a QR code image for use within the allReady phone application. This had its own dependency on System.Drawing, which is not part of .NET Core.
  • LinqToTwitter, which was being used in one area of the application to query for a user’s information after a sign-in via Twitter.
  • GeoCoding.net, which was used in a few places to take an address and geocode it to latitude and longitude coordinates.

With those taken care of for now (if only commenting out code was considered “fixing”), the last remaining issue for the web project was a dependency on a separate library project in our solution which contains some shared models. This library is used by both our web application and some Azure web jobs we have defined. As a result it needed to support both .NET Core and the .NET Framework. The solution here was to convert it to a .NET Standard library. In my case I chose the lowest standard version which gave me the API surface I needed, while supporting my dependent projects. This was .NET Standard 1.3. The project.json for this library was changed as follows.

Before:
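(A representative sketch rather than our exact file; the library originally targeted the full framework only.)

"frameworks": {
  "net46": { }
}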

After:
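(Again illustrative; the NETStandard.Library package version shown is the standard one for this release, though our exact file may have differed.)

"dependencies": {
  "NETStandard.Library": "1.6.0"
},
"frameworks": {
  "netstandard1.3": { }
}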

With this done, I also had to update our test library’s project.json in a similar way to support .NET Core, changing the frameworks section and updating to the latest versions of our dependencies as with the main web project. The changes in this file were very similar to what I’ve already shown, so I won’t repeat the code here.

Wrapping Missing Dependencies

With the package restore now succeeding for the web project, test project and our .NET Standard project, my next goal was to find solutions for the libraries we could no longer use.

ZXing was an easy decision: the phone application project is currently a bit stale, and it was decided that we could simply remove the QR code endpoint, and this dependency as a result.

When I reviewed the use of LinqToTwitter it was only really abstracting one API call to Twitter behind a more fluent LINQ syntax. We were using it to request a Twitter user’s profile information. The option I chose here was to write our own small service which wraps the call to the Twitter REST API using HttpClient. There are some hoops to jump through in order to establish the authentication with Twitter, but this ultimately didn’t prove too difficult. I won’t go into the exact details here since this post is intended to cover the general “migration” process.
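To give a rough idea of the shape of such a wrapper, here’s a minimal sketch rather than the exact allReady implementation. It assumes application-only (bearer token) authentication, and the class and method names are my own inventions; the two endpoints shown (oauth2/token and 1.1/users/show.json) are the real Twitter REST API endpoints.

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;
using Newtonsoft.Json.Linq;

public class TwitterUserService
{
    private static readonly HttpClient HttpClient = new HttpClient();

    // Exchange the app's consumer key/secret for an application-only bearer token.
    public async Task<string> GetBearerTokenAsync(string consumerKey, string consumerSecret)
    {
        var credentials = Convert.ToBase64String(Encoding.UTF8.GetBytes(
            Uri.EscapeDataString(consumerKey) + ":" + Uri.EscapeDataString(consumerSecret)));

        var request = new HttpRequestMessage(HttpMethod.Post, "https://api.twitter.com/oauth2/token");
        request.Headers.Authorization = new AuthenticationHeaderValue("Basic", credentials);
        request.Content = new StringContent("grant_type=client_credentials",
            Encoding.UTF8, "application/x-www-form-urlencoded");

        var response = await HttpClient.SendAsync(request);
        response.EnsureSuccessStatusCode();

        var json = JObject.Parse(await response.Content.ReadAsStringAsync());
        return (string)json["access_token"];
    }

    // Request a user's profile information, returning their display name.
    public async Task<string> GetUserNameAsync(string bearerToken, string userId)
    {
        var request = new HttpRequestMessage(HttpMethod.Get,
            "https://api.twitter.com/1.1/users/show.json?user_id=" + userId);
        request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", bearerToken);

        var response = await HttpClient.SendAsync(request);
        response.EnsureSuccessStatusCode();

        var json = JObject.Parse(await response.Content.ReadAsStringAsync());
        return (string)json["name"];
    }
}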

The final dependent library that wasn’t able to support .NET Core was GeoCoding.net. We were using this library to perform address-to-coordinate lookups via Google. On review of the usages, it was again clear that we had a whole dependency for what amounted to a single call to a Google endpoint. We were already calling another part of the Google REST API directly from a custom service class, so again the decision was to wrap the geocoding functionality in our own code, calling the API directly. The exact details are beyond the scope of this post, but I may cover them in a future blog post if there’s interest.
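Purely as an illustration (the class and property names here are my own, not the allReady code), a minimal wrapper around the Google Geocoding REST API might look something like this:

using System;
using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;
using Newtonsoft.Json.Linq;

public class GeoCoordinate
{
    public double Latitude { get; set; }
    public double Longitude { get; set; }
}

public class GoogleGeocodeService
{
    private static readonly HttpClient HttpClient = new HttpClient();
    private readonly string _apiKey;

    public GoogleGeocodeService(string apiKey)
    {
        _apiKey = apiKey;
    }

    // Geocode an address to latitude/longitude via the Google Geocoding REST API.
    // Returns null when no match is found for the supplied address.
    public async Task<GeoCoordinate> GeocodeAsync(string address)
    {
        var url = "https://maps.googleapis.com/maps/api/geocode/json?address="
                  + Uri.EscapeDataString(address) + "&key=" + _apiKey;

        var response = await HttpClient.GetAsync(url);
        response.EnsureSuccessStatusCode();

        var json = JObject.Parse(await response.Content.ReadAsStringAsync());
        var result = json["results"]?.FirstOrDefault();
        var location = result?["geometry"]?["location"];
        if (location == null)
        {
            return null;
        }

        return new GeoCoordinate
        {
            Latitude = (double)location["lat"],
            Longitude = (double)location["lng"]
        };
    }
}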

Handling API Differences

With the problem packages removed I tried to compile the solution. I was immediately hit by a wall of errors. This was initially a little daunting, but after looking through them I could see that many were quite similar and due to changes to the .NET Core API surface.

Reflection

The first of these was where we’d been using reflection. We have a couple of places in the web app that rely on reflection for wiring up our IoC container, and also in some of the test classes. In .NET Core 1.x the Type class no longer exposes the full set of reflection members it used to; much of that API surface has moved to a new TypeInfo class. To access the full type information you call the GetTypeInfo() extension method (from the System.Reflection namespace), which returns a TypeInfo exposing the members that previously lived on Type.

Before:
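(An illustrative reconstruction rather than the original snippet; the type name is a placeholder.)

using System;

var type = typeof(CampaignHandler);
var isGeneric = type.IsGenericType; // compiles on net46, but not on .NET Core 1.x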

After:
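using System;
using System.Reflection; // brings the GetTypeInfo() extension method into scope

var type = typeof(CampaignHandler);
var isGeneric = type.GetTypeInfo().IsGenericType;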

In this example you can see that the only change is adding in the GetTypeInfo call.

Date Formatting

We also had some code calling ToLongDateString on dates. This is no longer available, so I switched to formatting the date using the ToString method instead.

Before:
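(An illustrative example; the variable names are placeholders.)

var displayDate = campaign.EndDateTime.ToLongDateString();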

After:
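// The "D" standard format specifier produces the long date pattern,
// giving equivalent output to the old ToLongDateString() call.
var displayDate = campaign.EndDateTime.ToString("D");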

Exceptions

In a couple of places we were throwing ApplicationException which also seems to have been removed. I switched to a general Exception in this case.

Before:
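(Again illustrative; the message text is a placeholder.)

throw new ApplicationException("Failed to update the campaign.");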

After:
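throw new Exception("Failed to update the campaign.");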

Final Tweaks

Once all of the compile errors were fixed we were nearly there. To enable the site to run directly on Kestrel (rather than only via IIS Express), I updated the profiles in launchSettings.json.

Before:
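(The original file isn’t reproduced here; this sketch is trimmed to the relevant profiles section and based on the standard template of the time.)

"profiles": {
  "IIS Express": {
    "commandName": "IISExpress",
    "launchBrowser": true,
    "environmentVariables": {
      "ASPNETCORE_ENVIRONMENT": "Development"
    }
  }
}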

After:
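(Again a sketch; the profile name and port are assumptions based on the defaults.)

"profiles": {
  "IIS Express": {
    "commandName": "IISExpress",
    "launchBrowser": true,
    "environmentVariables": {
      "ASPNETCORE_ENVIRONMENT": "Development"
    }
  },
  "AllReady": {
    "commandName": "Project",
    "launchBrowser": true,
    "launchUrl": "http://localhost:5000",
    "environmentVariables": {
      "ASPNETCORE_ENVIRONMENT": "Development"
    }
  }
}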

This new profile basically tells Visual Studio to run the project directly (i.e. via dotnet.exe), launching it on port 5000. Within Visual Studio you can then choose whether to start via IIS Express or directly into the project via dotnet.exe. New projects targeting .NET Core include these two profiles automatically, so updating it in our project felt sensible, keeping the developer experience familiar.

Conclusion

With the above steps completed the project was now compiling, tests were running and the site worked as expected. Quite a few hours of work went into the process. The longest time was spent building the new service wrappers around the 3rd party REST APIs so that I could get rid of incompatible dependencies. The actual project and code changes weren’t too painful and it was just a case of working out the API changes so I knew how to modify our code.

It was an interesting experience, and once we had tested it on Mac and Linux during the NDC London code-a-thon it was very exciting to see it working. This has made it much easier for non-Windows developers to contribute to the allReady project and is just one of the great benefits of working with .NET Core and ASP.NET Core.


Updating an ASP.NET Core Site to the December 2016 Release
How to upgrade a site on the LTS 1.0.3 version of ASP.NET Core

I ran into an issue this week during what should have been a simple ASP.NET Core application update. I wanted to share my experience in case others run into similar problems. Also, I’m sure to be back here myself to remember this in the future!

On December 13th Microsoft released their second minor patch release for the LTS (Long Term Support) track of .NET Core. ASP.NET Core releases on two tracks depending on how cutting edge you want to be. LTS is the “safer” track, which will be supported and bug fixed during the support lifespan. The other track is FTS (Fast Track Support) which will be where new features appear. You can read more about this on the Microsoft Blog.

As you may be aware from reading my other posts, I’m contributing to an open source charity project called allReady. We’re currently using the LTS track packages and, at the time of writing, still targeting the full .NET Framework (as opposed to .NET Core). We had applied the last patch release 1.0.1 packages in September without any major problems, so I was hoping for the same experience with this patch release.

The details for the release were made available in this Microsoft blog post. If you follow the links to the release notes you will see that the ASP.NET Core updates are considered version 1.0.3. This is where the versioning starts to get a little murky in my opinion. ASP.NET Core itself has a version number (now 1.0.3) which tracks general “releases” of the framework. However, the individual packages that actually make up .NET Core and ASP.NET Core also have version numbers and revisions. Those numbers don’t track with the main release version, so it starts to get a bit confusing. You won’t for example find a package for Microsoft.AspNetCore.Mvc at version 1.0.3. The latest for that package is 1.0.2.

I’ll now step through how I upgraded our project and then discuss the issue I experienced with the EF commands for Entity Framework Core. Before starting to update the project I made sure to install the latest version of the 1.0.3 SDK from the Microsoft website.

Update project.json

This is where the first pain point came for me. The blog posts and release notes didn’t specifically list all of the packages which had been updated, nor their latest version numbers. So my initial plan was to turn to the VS NuGet Package Manager, where I was hoping I could simply update all of the Microsoft packages to the latest versions. However, since the package manager lists the latest (non pre-release) versions, it was offering me the FTS 1.1.x versions. So a simple “update all” option was out of the question.

Next I went into the project.json manually, planning to update each package by hand and allowing autocomplete to give me the latest versions. However autocomplete didn’t always seem to pick up the latest version number automatically and I was worried about missing something. So I reverted back to the NuGet Package Manager and went one by one through the Microsoft packages. I used the install dropdown to select the newest LTS 1.0.x version for each one. This was slow and manual but at least meant I knew what options I had and could be explicit in choosing the latest version I wanted.

Here’s a rundown of the packages from our project.json that I needed to update and the version numbers they are now on (which should be the latest LTS release). Note that our project.json may well differ from newly generated projects, so you may not have all of these packages and you may even have dependencies listed that we do not.

"Microsoft.EntityFrameworkCore.SqlServer": "1.0.2",
"Microsoft.EntityFrameworkCore": "1.0.2",
"Microsoft.ApplicationInsights.AspNetCore": "1.0.2",
"Microsoft.AspNetCore.Mvc": "1.0.2",
"Microsoft.AspNetCore.Mvc.TagHelpers": "1.0.2",
"Microsoft.AspNetCore.Authentication.Cookies": "1.0.1",
"Microsoft.AspNetCore.Authentication.Facebook": "1.0.1",
"Microsoft.AspNetCore.Authentication.Google": "1.0.1",
"Microsoft.AspNetCore.Authentication.MicrosoftAccount": "1.0.1",
"Microsoft.AspNetCore.Authentication.Twitter": "1.0.1",
"Microsoft.AspNetCore.Diagnostics": "1.0.1",
"Microsoft.AspNetCore.Diagnostics.EntityFrameworkCore": "1.0.1",
"Microsoft.AspNetCore.Identity.EntityFrameworkCore": "1.0.1",
"Microsoft.AspNetCore.Server.IISIntegration": "1.0.1",
"Microsoft.AspNetCore.Server.Kestrel": "1.0.2",
"Microsoft.AspNetCore.StaticFiles": "1.0.1",
"Microsoft.AspNetCore.Cors": "1.0.1",
"Microsoft.Extensions.Configuration.Abstractions": "1.0.1",
"Microsoft.Extensions.Configuration.Json": "1.0.1",
"Microsoft.Extensions.Configuration.UserSecrets": "1.0.1",
"Microsoft.Extensions.Logging": "1.0.1",
"Microsoft.Extensions.Logging.Console": "1.0.1",
"Microsoft.Extensions.DependencyInjection.Abstractions": "1.0.1",
"Microsoft.Extensions.Options.ConfigurationExtensions": "1.0.1",
"Microsoft.VisualStudio.Web.BrowserLink.Loader": "14.0.1",
"Microsoft.AspNetCore.Mvc.WebApiCompatShim": "1.0.2",
"Microsoft.Extensions.Logging.Debug": "1.0.1",

I also had to update the dependencies for some of these in our test library, so remember to check there too.

To complete the upgrade I also adjusted the SDK version in our solution’s global.json as follows…

"sdk": {
  "version": "1.0.0-preview2-003156",
  "runtime": "clr",
  "architecture": "x86"
},

With the above changes made I was able to build and run our site via Visual Studio. Great!

It wasn’t until a couple of days later that I hit an issue. For an issue I was working on I’d updated our Entity Framework model classes and needed to generate the next Entity Framework migration. I did this by running the usual command…

dotnet ef migrations add AddNotifications

After building the project I was faced with the following error:

Could not load file or assembly 'Microsoft.EntityFrameworkCore, Version=1.0.1.0, Culture=neutral, PublicKeyToken=adb9793829ddae60' or one of its dependencies. The located assembly's manifest definition does not match the assembly reference. (Exception from HRESULT: 0x80131040)

At this stage I tried a few things, none of which fixed the problem outright, although they may have contributed to the overall solution. First I cleaned my solution and rebuilt; no joy. Then I wondered if NuGet had cached any incorrect versions of the package, so I cleared my local NuGet cache and tried restoring my project dependencies again. Still no joy! Finally I hopped onto the ASP.NET Core Slack channel and sought help there. Huge thanks to Chad Tolkien, who suggested a manual deletion of the bin and obj folders within the project. I did that and rebuilt the solution. Success! Finally I was able to generate a migration using the EF CLI tooling. So it seems the clean and restore steps previously hadn’t cleaned everything they needed to.
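For reference, the equivalent clean-up from a command prompt in the project directory would be something along these lines (on Windows, as I was using):

rd /s /q bin
rd /s /q obj
dotnet restore
dotnet build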

I’d love to know if there’s a better way to manage these updates currently. I’m hoping that with the final tooling release and VS 2017 things will get easier. It would be useful, for example, to be able to choose which track you want to use within the NuGet Package Manager. I’m not sure how that would be achieved exactly, but it would help to distinguish the packages within your chosen support track so you could take the latest of those. It would also be handy if the Microsoft blog posts about each release included specific details of each updated package and its latest version number. Having a quick reference when updating dependencies would have made my life a little easier. There are some release notes which hint at the main packages and their new version numbers, but they didn’t include all of the components that I ended up changing.



Running a Humanitarian Toolbox Code-A-Thon

I’ve been involved with the Humanitarian Toolbox allReady project for about one year now. I’ve really enjoyed being able to contribute to the cause, while learning plenty along the way. Contributing has made me a better programmer and exposed me to new libraries and techniques which I am already benefiting from in my day job and in other projects.

During my time as a contributor I’ve been aware of the code-a-thons which have taken place in the US, Canada and more recently Europe. They’ve always seemed like great events and I’m excited to be signed up for the 2-day code-a-thon in London next month. A few months ago I discussed my experience with allReady and my willingness to attend the London event with the management at my employer, Madgex Ltd. They were very receptive and keen to support the cause. In fact, they are now funding 4 developers (including myself) to attend the full two days in London. Madgex are covering the transportation costs and hotel accommodation, as well as losing 4 developers for 2 days, so this is a generous contribution.

In addition to the NDC event, we had discussions about arranging a code-a-thon at our office in Brighton, one evening after work. I was really keen to get something planned in order to allow other developers to contribute to and experience allReady. With the concept green-lit by senior management, we set about making the idea a reality.

Planning

The starting point was to arrange a 30 minute presentation about Humanitarian Toolbox and allReady to gauge interest and demonstrate the application. I prepared a set of slides and a short demo which I presented in our board room one lunchtime. We had a good turnout of about 16 people in the room and it planted the seed with a few of the attendees, who talked to me afterwards about getting involved.

With the awareness raised, I followed up with some emails to the Madgex staff to further determine whether people would be willing to join an event locally. I got positive indications from enough people, so we picked a date and the invites went out to the staff. We planned our initial event for one evening after work. We agreed that 3 hours would be most practical given that people had already done a full day of work. Over the next few weeks I continued to promote the idea and started to confirm attendees for the evening. We had 7 or 8 people showing good interest as we closed in on the planned date. This was all supported by our developer lead at Madgex, Steve (great name!).

1 week to go

With a week to go we started to ramp up our planning activities. Steve (developer lead) helped with the physical space we needed and secured budget for some pizzas on the night (always a great lure for developers!).

I started combing through issues on GitHub that would be suitable for new contributors. We have a lot of work on-going with the project, pushing towards our v1 release but many of the issues that are left are quite complex and require a reasonable knowledge of the codebase. The balance was finding interesting and diverse issues, whilst ensuring that they wouldn’t require too much time up-front having to learn about the entire application. I had decided to try the new project feature inside GitHub to plan and organize the event. I created a few statuses and dropped issues into the “Not Started” category. The project feature is a basic Kanban board which allows cards to be dragged between statuses as the work progresses.

I also put together a prerequisites list for the confirmed and tentative attendees, guiding them to ensure they had the latest tooling for .NET Core installed, had forked and cloned the repository, and were able to get it building. This is an important activity since it’s a little time-consuming and we didn’t want to spend most of the time at the code-a-thon performing setup work. As the big day approached we started to firm up numbers so we knew what cabling infrastructure we would need.

On the day

On the morning of the event, I did a final round up of the staff to confirm the final attendees. After completing this we had about 6 people able to attend locally, as well as one of our developers from our North American office in Toronto, Canada. I started collecting GitHub usernames so that we could add people as contributors on the repository. This isn’t absolutely required, but aids in assigning issues to specific people. We also arranged to get people added to the Humanitarian Toolbox Slack channel so they could ask questions of our project experts and the Humanitarian Toolbox founders during the event.

At 4:30pm (with 30 mins to go) I moved into the meeting room we would be using for the evening. I wanted to get the webcam set up and ensure we had the cables for power and networking. Our fantastic systems technician Ricky had beaten me to it and already sorted the cabling requirements. We tested out the remote Skype link to Luke in Canada and made sure we had everything ready. A big thanks to Ricky for volunteering his time to provide some tech support and make sure we got up and running so smoothly.

Humanitarian Toolbox Code-A-Thon Sign
You must have a nice sign when running a code-a-thon!

At 5pm our team of developers migrated like birds in the winter, to the meeting room. We had 5 developers joining me locally, plus Luke video conferencing with us from across the pond. We’d decided to relocate our desktop PCs into this common area so that it would be easier to support each other during the event. While this required a little time up-front to disconnect and reconnect the PCs, it proved a good move as I was able to answer questions, share information and demo things very easily for everyone.

Preparing for the event
Developers getting setup for the event

By about 5:20pm we were in good shape: the computers were moved, developers were settled, and pizza orders had been taken (priorities, people). We started with a brief Google Hangouts standup with the Humanitarian Toolbox team. James and Tony introduced the project and its goals and thanked everyone for taking the time at the end of their working day to stay on and code for the greater good. We also had one of the most regular project contributors, Mike, on the call, showing his support for the event. Being able to speak to the founders and project team made for a great start and really set the tone for a fun evening, supporting code to save lives.

It’s worth pointing out here a couple of things that James and Tony highlighted during the standup. Firstly, the initial use case for the application will be to aid the American Red Cross with the effort to install free smoke alarms in people’s homes. Already this initiative has helped save lives when disaster struck and a family’s home caught fire. Thanks to a smoke alarm installed by the Red Cross, the family were alerted to the fire and all able to evacuate safely. Also important to note is that for each hour of coding time spent on the application we can expect about 40 hours of volunteer time saved. That’s a huge return and really shows that even sparing a few hours of personal time can have a massive impact.

With the project introduced and the devs eager to get going, we started the team off by finding issues people were keen to work on. Sarah, Roberto and Luke took on some unit tests, Patrick picked up a new feature requirement, while Chris dove in with some EF and migrations code for the first time. Mark, one of our front-end developers, started looking at the homepage UI improvements. I floated around the room to answer questions, demonstrate the site functionality and help with the GitHub flow. I really enjoyed watching new contributors getting up to speed with the code and being able to assist their learning as they progressed. It was good to be able to witness some of the common questions new contributors have so we can focus on lowering the barrier to entry in the future.

Developers working hard during the code-a-thon
Developers working hard during the code-a-thon

There were a lot of new things for everyone to learn and they did a fantastic job of absorbing the information and becoming productive quickly. All of the developers were new to GitHub and open source, so there was learning to be done around the processes to ensure an up-to-date master branch, to manage rebasing and to prepare pull requests. Most of the team were also experiencing ASP.NET Core for the first time, which in itself has a lot of new concepts to learn. It was also the first exposure to Entity Framework for everyone, so that had its own learning curve too. This really highlights another benefit of contributing to the project for developers: it’s a great learning experience that puts a real-world, production-ready code base in the hands of developers.

During the 3 hours everyone knuckled down and other than a brief break to load up our plates with some pizza, we worked solidly. I had hoped to use the GitHub project feature to keep track of things, but we hit some issues with people not being able to move the cards themselves. While I was able to do this, it proved more of a hindrance as my time was better spent helping people around the room. In the end we abandoned that feature and in hindsight a post-it-note board might have been easier to manage. I still like the concept of using GitHub so that others not physically at the event can monitor progress, but we need to find a way to allow contributors to manage the cards as they work on issues. I suspect it might just be a permissions thing, so I’ll investigate it soon.

By the end of the evening we had submitted two pull requests, which were reviewed and merged into the project before we left. We also had three other issues very close to being completed which will be finished off in the coming days and hopefully submitted soon. Given the setup time, huge learning curve and relatively short coding period of three hours, I’m very pleased with this achievement. Everyone was amazed when we realised that we had hit the 8pm finish already. Time flew, which is a great sign, and from feedback so far people would have liked even longer with the code. Perhaps an all-day event is on the cards for the future.

I really hope that everyone left feeling as positive and happy as I did. Certainly the sense I got was that everyone enjoyed learning some new things, getting to grips with the code and contributing to a good cause. I feel proud to be part of such a generous team of people who were able to join this code-a-thon after a full day of development at work first. Everyone should be very proud of what they managed to contribute. I’d love to run another session to continue the great start we’ve made and if people are willing, perhaps we can make it a regular thing or even look at a longer full day event.

From a personal perspective I enjoyed sharing what I’d learned during my time with the project and seeing other developers pick up the concepts for themselves. Prior to this, I’ve never been too keen on the idea of being a “teacher” and even presenting is not in my normal comfort zone, but I found a bit of a passion for instructing people. I’m a firm believer that sharing information helps our own understanding, as you are challenged to know the material well enough to be able to articulate the concepts.

Code-a-Thon Team Photo
Code-a-Thon Team Photo (Sarah had escaped just prior to this being taken!)

Feedback

The feedback the day after has been extremely positive. I’m very happy to hear that people enjoyed themselves and had a positive and fun experience. It’s nice to spend time coding for fun, outside of the normal day-to-day work. Being able to put your skills to use towards such a positive concept is also very rewarding. From a quick survey afterwards, the team are keen to continue to contribute to the project and would like to take part in another code-a-thon in the future.

Thanks again to everyone who took part:

Our developers: Sarah, Patrick, Chris, Mark, Roberto and Luke
Tech support: Ricky
Planning and management: Steve K
Humanitarian Toolbox Support: James, Tony and Mike


Loading Pins on Bing Maps from ASP.NET Core MVC Data

As I’ve covered a few times previously in my blog I’m really enjoying working on the allReady project which is run by the Humanitarian Toolbox non-profit organisation. One of the great things about this project from a personal perspective is the chance to learn and develop my skills, whilst also contributing to a good cause.

Recently I picked up an issue which was not in my normal comfort zone: the requirement was to load a Bing map showing a number of pins relating to request data coming from our MVC view model. I’m not very experienced with JavaScript and tend to avoid it whenever possible, but in this case I did need to use it to take data from my ASP.NET Core model and populate the Bing Maps SDK. As part of the requirement we needed to colour code the pins based on the status of the request.

My starting point was to have a look at the SDK documentation available at http://www.bing.com/api/maps/mapcontrol/isdk. After a bit of reading it looked possible to meet the requirement using the v8 SDK.

The first step was to update our Razor view page to include a div where we wanted to display the map. In our case I had decided to include a full width map at the bottom of the page, so my containing div was as follows:

<div id="myMap" style="position:relative;width:100%;height:500px;"></div>

The next step was to include some JavaScript on the page to use the data from our view model to build up the pushpin locations to display on the map. Some of the existing map logic on the allReady project is stored inside a site.js file, which means we don’t need to include too much inline code on the page itself. In my case the final code was as follows:

@section scripts {
    <script type='text/javascript'
            src='https://www.bing.com/api/maps/mapcontrol?callback=GetMap'
            async defer></script>
    <script type='text/javascript'>
        function GetMap() {
            renderRequestsMap("myMap", getLocationsFromModelRequests());
        };

        function getLocationsFromModelRequests() {
            var requestData = [];
            @foreach (var request in Model.Requests){
                @:var reqData = {lat:@request.Latitude, long:@request.Longitude, name:'@request.Name', color:'blue'};

                if (request.Status == RequestStatus.Completed)
                {
                    @:reqData.color = 'green';
                }

                if (request.Status == RequestStatus.Canceled)
                {
                    @:reqData.color = 'red';
                }

                @:requestData.push(reqData);
            }
            return requestData;
        }
    </script>
}

This renders two script blocks inside our master _layout page’s scripts section which is rendered at the bottom of the page body. The first script block simply brings in the Bing map code. The second block builds up the data to pass to our renderRequestsMap function in our site.js code (which we’ll look at later).

The main function here is my getLocationsFromModelRequests code which creates an empty array to hold our requestData. I loop over the requests in our MVC view model and first load the latitude and longitude information into a JavaScript reqData object. We set a name which will be displayed under the pin using the name of the request from our Model. We also set the color on this object to blue which will be our default colour for pending requests when we draw our map.

I then update the color property for our two other possible request statuses. Green for completed requests and red for cancelled requests. With the object now complete we push it into the requestData array. This forms the data object we need to pass into our renderRequestsMap function in our site.js.

The relevant portions of the site.js that result in producing a final map are as follows:

var BingMapKey = "ENTER_YOUR_KEY_HERE";

var renderRequestsMap = function(divIdForMap, requestData) {
    if (requestData) {
        var bingMap = createBingMap(divIdForMap);
        addRequestPins(bingMap, requestData);
    }
}

function createBingMap(divIdForMap) {
    return new Microsoft.Maps.Map(
        document.getElementById(divIdForMap), {
        credentials: BingMapKey
    });
}

function addRequestPins(bingMap, requestData) {
    var locations = [];
    $.each(requestData, function (index, data) {
        var location = new Microsoft.Maps.Location(data.lat, data.long);
        locations.push(location);
        var order = index + 1;
        var pin = new Microsoft.Maps.Pushpin(location, { title: data.name, color: data.color, text: order.toString() });
        bingMap.entities.push(pin);        
    });
    var rect = Microsoft.Maps.LocationRect.fromLocations(locations);
    bingMap.setView({ bounds: rect, padding: 80 });
}

The renderRequestsMap function takes in the id of the containing div for the map and the requestData array we built up in our Razor view. First it calls a small helper function which creates a Bing map object targeting our supplied div id. We then pass the map object and the request data into addRequestPins.

addRequestPins creates an array to hold the location data, which we build up by looping over each item in our request data. We create a Microsoft.Maps.Location object using the latitude and longitude and add that to the array (we’ll use this later). We then create a Microsoft.Maps.Pushpin which takes the location object and a pushpin options object. In the options we define the title for the pin and its color. We also set the text for the pin to a numeric value which increments for each pin we add. That way each pin has a number which corresponds to its position in our list of requests. With all of the pushpin data populated we can push the pin into the map’s entities array.

Once we’ve added all of the pins the final step is to define the view for the map so that it centers on and displays all of the pins we have added. I’ve achieved that here by defining a rectangle using the Microsoft.Maps.LocationRect.fromLocations helper function. We can then call setView on the map object, passing in that rectangle as the bounds value. We also include a padding value to ensure there is a little extra space around the outlying pins.

With these few sections of JavaScript in place, when we load our page the map is displayed with pins corresponding to the locations of our requests. Here is what the final map looks like within our allReady application.

Resulting Bing Map


GitHub Contributor Tips and Tricks (Gitiquette)
My thoughts on being a better GitHub citizen

I made my first GitHub pull request back in November 2015. After a code review I needed to make some changes to the code, and the learning curve of Git/GitHub bit me. I had to close my PR and open a fresh one in order to get my code accepted, since I couldn’t figure out rebasing. Now, a year on, I have had over 100 pull requests accepted on GitHub and have even started helping with code reviews on the allReady project. The journey over the last year started with a steep curve as I got to grips with GitHub and, to some extent, Git itself.

Back in February in my post about contributing to allReady I briefly covered some rebasing steps useful when contributing on GitHub. In this blog post I wanted to share some further tips and tricks, plus offer my personal views and conventions I follow while attempting to be a good GitHub citizen. I can’t claim to be an expert in GitHub etiquette and I’m still learning as I go but hopefully my experiences can help others new to GitHub to work productively and reduce the pitch of the learning curve a little.

Contributing on GitHub

There is a great post from another allReady contributor and ASP.NET monster Dave Paquette at http://www.davepaquette.com/archive/2016/01/24/Submitting-Your-First-Pull-request.aspx. Dave has covered the main steps and commands needed to work with GitHub and Git in general when contributing to opensource projects. I suggest you read his post for a detailed primer of the commands and GitHub UI.

Git / GitHub Etiquette (Gitiquette)

Rather than repeat advice and steps that Dave has given, in this post I intend to share small tips and tricks that I hope will allow you to get the best out of open source and GitHub. These are by no means formal rules and will differ from project to project, so be aware of that when applying them. As an aside, I was quite proud of inventing the term gitiquette before I Googled it and found other people have already used the term!

Raising GitHub Issues

If you find a bug or have an enhancement suggestion for a project, the place to start is to raise an issue. Before doing so it’s good practice to search for any issue (open or closed) that might address a similar point. Managing issues is a large part of running a project on GitHub and it’s good etiquette not to overload the project owners by repeating something that has already been asked and answered. This frees up the owners to respond on new issues.


If you can’t find a similar issue then go ahead and raise one. Try to give it a short descriptive title, summarising the issue or area of the application you are referring to. Try to choose something easy and likely to be searched so that anyone who is checking for related issues can easily find it and avoid repeating a question/bug.


If your issue is more of a question than a bug it can be useful to mark it as such by putting “(question)” or “(discussion)” in the title. Each project may have their own convention around this, so check through some prior issues first and see if there is a preferred pattern in use.


In the content of the issue, try to be as detailed (but also specific) as possible to allow others to understand the bug. Include steps to reproduce the problem if it is a bug. Also try to include screenshots where applicable to share what you’re seeing. If your issue is more of a general question, give enough detail to allow others to understand the reason behind your question. For suggestions, try to back them up with examples of why you believe what you’re suggesting is a good idea to help others make informed arguments for or against.


Keep your language polite and constructive. Remember that a lot of opensource projects are driven by very small teams or individuals around their paid jobs and families. In my opinion, issues are not the right place to complain about things but a place to register problems with a view to getting them solved. “Speak” how you would like to be spoken to.


Once you’ve raised an issue remember that the project leaders may take a while to respond to it, especially if they are a small or one person team. Give them a reasonable period of time to respond and after that time has passed, consider adding a comment as a gentle reminder. Sometimes new issue alerts can be missed so it’s usually reasonable to prompt if the issue goes quiet or has no response.


Each project may have their own process to respond to and to triage issues so try to familiarise yourself with how other issues have been handled to get an idea of what to expect. In some cases issues may be tagged initially and then someone may respond with comments later on.


Once someone responds to your issue, try to answer any follow up questions they have as soon as possible.


If the project owner disagrees with your suggestion, or decides that a bug is actually “by design”, then respect their position. If you feel strongly, politely present your point of view and discuss it openly, but once a final decision is made you should respect it. On all of the issues I’ve seen I can’t think of any occasion where things have turned nasty. It’s a great community out there and respect plays a big part in why things function smoothly.


When you start to work on an issue it’s a good idea, and helpful to others, to leave a comment on the issue so that people can quickly see that it’s being worked on. This avoids the risk of duplicated effort and wasted time. In some cases the project owners may assign you to the issue as a way of formally tracking assignments.


If you want to work on an issue but are not sure about your potential solution, consider first summarising your thoughts within the issue’s comments and get input from the project owners and other contributors before starting. This can avoid you spending time going down a route not originally intended by the project owners.


If you feel an issue is quite large and might be better broken out into smaller units of work then this can be a reasonable approach. Check how the project prefers to handle this, but often it’s reasonable to create sub issue(s) for the work, referencing back to the original master issue so that GitHub provides links between them. Smaller PRs are often easier to review and merge, so it’s quite often preferred to break work down like this.


If someone has said that they are working on an issue but they haven’t updated it or submitted a PR in some time, first check their GitHub fork and see if you can find the work in progress. If it’s being updated and progressing then they are probably still working on the issue. If not, leave a comment and politely check if they are still working on it. It’s not unusual for people to start something with the best intentions and then life comes along and they don’t have time to continue.


If you stop working on an issue part of the way through and may not be able to pick it up for a while, update the issue so others know the status. If you think you will be able to get back to it, then try to give a time frame so that the project owners can determine if someone else might need to pick it up from you. If this is the case you might want to share a link to your fork and branch so others can see what you’ve done so far. Also remember that the project may accept a partial PR as a starting point to resolving the issue. If your code could be committed without breaking the main code base, offer that option.

Git Commits

Next I want to share a few thoughts on best practices to follow when making local commits while working on an issue:

Remember to make a branch from master before starting work on a new issue. Give it a useful name so you can find it again. I personally have ended up using the issue number for most of my branches. It’s a personal choice though, so use what works for you.


It’s not unreasonable to make many work-in-progress commits locally whilst working on an issue as you progress through it. It can, though, be helpful to squash them down before submitting your final PR (see below for a couple of techniques I use to squash my commits).


Your final commit messages that will be included in your PR should be clear and appropriately worded. Try to summarise the commit in the first line and then provide additional detail after a line break and empty line.
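For example, a final commit message might look something like this (an invented example, not from a real PR):

Add colour coding for request map pins

Pins are now coloured according to request status and the map
view is bounded so that all plotted pins are visible.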

GitHub Pull Requests

After you’ve finished working on an issue you’re ready for a pull request. Again I’m sharing my personal thoughts on PRs here and each person and project will differ in style and preference. The main thing is to respect the convention preferred by the project so that they can easily review and accept your work.

It’s a good practice to limit a PR to a single issue. Don’t try to solve too much in one request as it makes reviewing and testing it harder for the project owners. If you spot other bugs along the way then open new issues for them as you go. You can then include the fixes in separate PR(s).


Before submitting your PR I feel it’s good practice to squash or tidy your working commits into logical final commits. This makes reviewing your steps and the content of the PR a bit easier for reviewers. Lots of work in progress or very granular commits can make a PR harder to interpret during code review. Including too many commits will also make the commit history for the project harder to follow once the PR is merged. I aim for 2-3 commits per PR as a general rule (sometimes even a single commit makes sense). Certainly there are exceptions to this rule where it may not make sense to group the commits. See below for advice on how to squash a commit.


I try to separate my work on the actual application code and the unit tests into two separate commits. Personally I feel that it’s then easier for a reviewer to first review the commit with your application code, then to review any tests that you wrote. Instead of viewing the whole set of files changed and wading through it, a reviewer can view each commit independently.


Ensure the commits you do include in your PR are clearly described. This makes understanding what each commit is doing easier during review and once they are merged into the project.


If your PR is a work in progress, mention this on the comment and in the title so that the team know it’s not ready for final merge. Submitting a partial PR can be good if you want to check what you’ve started is in the right direction and to get technical guidance. For example you might use “Fixing Bug in Controller – WIP” as your title. Each project may have a style they would like you to apply so check past PRs and see if any pattern already exists. Once you’re ready for it to be formally reviewed and merged you can update the title and leave a new comment for the project owners.


When submitting a PR, try to describe clearly what you’ve added or changed so that before even reviewing the code the reviewer has an idea of what you’re addressing and how. The PR description should contain enough detail for a reviewer to understand what areas to test and if your work includes UI elements, consider including screenshot(s) to illustrate the changes.


Include a line in your comment stating Fixes #101 (where 101 is the issue number). This will link the PR to the issue to ensure the issue is closed when the PR is accepted. If your PR is a partial fix and should not close the issue, use something like Relates to #101 instead so that the master issue is not automatically closed.


Depending on the PR and project it may be good practice or even required to include unit tests to prove your changes are working.


Before submitting your PR it’s a good idea to ensure your code is rebased from a current version of the master branch. This ensures that your code will merge easily and allows you to test it against the latest master codebase. In my earlier post I included steps explaining how to rebase. This is only necessary if the master branch has had commits added since you branched from it.
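As a quick reference, assuming the main repository is configured as a remote named upstream and your work is on a branch called my-branch, the commands look something like this:

git fetch upstream
git checkout my-branch
git rebase upstream/master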


Once you receive feedback on your PR don’t take any suggestions or critique personally. It can be hard receiving negative feedback, especially in an open forum such as GitHub but remember that the reviewer is trying to protect the quality and standards of their project. By all means share your point of view if you have reason to disagree with comments, but do so with the above in mind. In my experience feedback has always been valuable and valid.


Once you have received feedback, try to apply any changes in a timely fashion while the review is fresh. You can simply make relevant changes in your local branch, make a new commit and push it to your origin. The PR will update to include the pushed changes. It’s better not to rebase and merge the commit with the original commits so that a reviewer can quickly see the changes. They may ask you to rebase/squash it down after review and before final acceptance to keep the project history clean.

Reviewing GitHub Pull Requests

I’ve started helping to review some of the allReady pull requests in recent months. Here are a few things I try to keep in mind:

Try to include specific feedback using GitHub’s line comment feature. You can click the small plus sign next to the line of code you are commenting on so the review is easy to follow. The recent review changes that GitHub have added make reviewing code even clearer.


Leave a general summary explaining any main points you have or questions about the code.


Think of the contributor’s feelings and remember to keep the comments constructive. Give explanations for why you are proposing any changes and be ready to accept that your opinion may not always be the most valid.


Try to review pull requests as soon as possible while the code is still fresh in the contributor’s mind. It’s harder to come back to an old PR and remember what you did and why. It’s also nice to give feedback promptly when someone has taken the time to contribute.


Remember to thank people for their contributions. People are generously giving their time to contribute code and this shouldn’t be forgotten.

Git Command Reference

That’s the end of my long list of advice. I hope some of it was valuable and useful. Before I close I wanted to share some steps to handle a couple of the specific Git related tasks that I mentioned in some of the points above.

How to Squash Commits

I wrote earlier about squashing your commits – reducing a large number of small commits into a single or set of combined commits. There are two main techniques I use when doing this which I’ll now cover. I normally do some of these operations in a visual tool such as SourceTree but for the sake of this post I’ll cover the actual Git commands.

Squashing to a single commit

In cases where you have two or more commits in your branch that you want to squash into a single final commit I find the quickest and safest approach is to do a soft reset back to the commit you branched from on master.

Here’s an example set of three commits on a new branch where I’m fixing an issue…

C:\Projects\HTBox-AllReady>git log --pretty=format:"%h - %an, %ar : %s" -3

be7fdfe - stevejgordon, 2 minutes ago : Final cleanup
2e36425 - stevejgordon, 2 minutes ago : Ooops, fixing typo!
85148d7 - stevejgordon, 2 minutes ago : Adding a new feature

All three commits combined result in a fix, but two of these are really just fixing up and tidying my work. As I’m working locally I want to squash these into a single commit before I submit a PR.

First I perform a soft reset using…

C:\Projects\HTBox-AllReady>git reset --soft HEAD~3

This removes the last three commits, with the number after the ~ symbol controlling how many commits to reset. The --soft switch tells Git to leave all of the changed files intact (we don’t want to lose the work we did!). It also leaves the files marked as changes to be committed. I can then perform a new commit which incorporates all of the changes.

C:\Projects\Other\HTBox-AllReady>git commit -a -m "Adding a new feature"
[#123 2159822] Adding a new feature
1 file changed, 1 insertion(+), 1 deletion(-)

C:\Projects\Other\HTBox-AllReady>git status
On branch #123
nothing to commit, working directory clean

C:\Projects\Other\HTBox-AllReady>git log --pretty=format:"%h - %an, %ar : %s" -1

2159822 - stevejgordon, 20 seconds ago : Adding a new feature

We now have a single commit which includes all of our work. Viewing it in SourceTree we can see this single commit more clearly.

Example of branch after a Git squash

We can now push this and create a clean PR.

Interactive Rebasing

In cases where you have commits you want to combine, but you also want to control which commits are squashed together, we have to do something a bit more advanced. Here we can use interactive rebasing as a very powerful tool to rewrite our commit history. Below is an example of 4 commits in a new branch making up a fix for an issue. In this case I’d like to end up with two final commits: one for the main code and one for the unit tests.

C:\Projects\Other\HTBox-AllReady>git log --pretty=format:"%h - %an, %ar : %s" -4

fb33aaf - stevejgordon, 6 seconds ago : Adding a missed unit test
8704b06 - stevejgordon, 14 seconds ago : Adding unit tests
01f0876 - stevejgordon, 42 seconds ago : Oops, fixing a typo
2159822 - stevejgordon, 7 minutes ago : Adding a new feature

Use the following command to start an interactive rebase

git rebase -i HEAD~4

In this case I’m using ~4 to say that I want to include the last 4 commits in my rebasing. The result of this command opens the following in an editor (whatever you have git configured to use)

pick 2159822 Adding a new feature
pick 01f0876 Oops, fixing a typo
pick 8704b06 Adding unit tests
pick fb33aaf Adding a missed unit test

# Rebase 6af351a..fb33aaf onto 6af351a (4 command(s))
#
# Commands:
# p, pick = use commit
# r, reword = use commit, but edit the commit message
# e, edit = use commit, but stop for amending
# s, squash = use commit, but meld into previous commit
# f, fixup = like "squash", but discard this commit's log message
# x, exec = run command (the rest of the line) using shell
# d, drop = remove commit
#
# These lines can be re-ordered; they are executed from top to bottom.
#
# If you remove a line here THAT COMMIT WILL BE LOST.
#
# However, if you remove everything, the rebase will be aborted.
#
# Note that empty commits are commented out

In our case we want to squash the 2nd commit into the first and the 4th into the 3rd, so we update the commit lines at the top of the file to…

pick 2159822 Adding a new feature
squash 01f0876 Oops, fixing a typo
pick 8704b06 Adding unit tests
squash fb33aaf Adding a missed unit test

Saving and closing the file results in the next step

# This is a combination of 2 commits.
# The first commit's message is:

Adding a new feature

# This is the 2nd commit message:

Oops, fixing a typo

# Please enter the commit message for your changes. Lines starting
# with '#' will be ignored, and an empty message aborts the commit.
#
# Date: Fri Sep 30 08:10:19 2016 +0100
#
# interactive rebase in progress; onto 6af351a
# Last commands done (2 commands done):
# pick 2159822 Adding a new feature
# squash 01f0876 Oops, fixing a typo
# Next commands to do (2 remaining commands):
# pick 8704b06 Adding unit tests
# squash fb33aaf Adding a missed unit test
# You are currently editing a commit while rebasing branch '#123' on '6af351a'.
#
# Changes to be committed:
# modified: AllReadyApp/Web-App/AllReady/Controllers/AccountController.cs
#

This gives us the chance to reword the final commit message of the combined 1st and 2nd commits. I will use the # to comment out the typo message since I just want this commit to read as “Adding a new feature”. If you wanted to include both messages then you don’t need to make any changes. They will both be included on separate lines in your commit message.

# This is a combination of 2 commits.
# The first commit's message is:

Adding a new feature

# This is the 2nd commit message:

#Oops, fixing a typo

I can then do the same for the other squashed commit

# This is a combination of 2 commits.
# The first commit's message is:

Adding unit tests

# This is the 2nd commit message:

#Adding a missed unit test

Saving and closing this final file I see the following output in my command prompt.

C:\Projects\Other\HTBox-AllReady>git rebase -i HEAD~4
[detached HEAD a2e69d6] Adding a new feature
Date: Fri Sep 30 08:10:19 2016 +0100
1 file changed, 2 insertions(+), 2 deletions(-)
[detached HEAD a9a849c] Adding unit tests
Date: Fri Sep 30 08:16:45 2016 +0100
1 file changed, 2 insertions(+), 2 deletions(-)
Successfully rebased and updated refs/heads/#123.

When we view the visual git history in SourceTree we can now see we have only two final commits. This looks much better and is ready to push up.

Git branch after interactive rebase
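If you’re not using a visual tool like SourceTree, you can verify the result at the command line too. The short SHAs here match those from the rebase output above.

git log --oneline -2
a9a849c Adding unit tests
a2e69d6 Adding a new feature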

When rebasing interactively you can even reorder the commits, giving you full control to craft a final set of commits that represents the public commit history you want to include.
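As an aside, had I wanted to skip the commit message editing steps entirely, I could have used fixup instead of squash. As the help text in the todo file above notes, fixup melds the commit into the previous one but discards its log message automatically. A todo file using it for the same four commits would look like this…

pick 2159822 Adding a new feature
fixup 01f0876 Oops, fixing a typo
pick 8704b06 Adding unit tests
fixup fb33aaf Adding a missed unit test

If you do reorder lines, bear in mind that moving a commit above changes it depends on can introduce merge conflicts that you’ll need to resolve as the rebase replays each commit.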

How to pull a PR locally for review

When I started helping to review PRs I needed to learn how to pull down the code from a pull request in order to run it locally and test the changes. In the end it wasn’t too complicated. The basic structure of the command we can use looks like…

git fetch origin pull/ID/head:BRANCHNAME

Where origin is the name of the remote where the PR has been made, ID is the pull request id and BRANCHNAME is the name of the new branch that you want to create locally. On allReady for example I’d use something like…

git fetch htbox pull/123/head:PR123

This pulls PR #123 from the htbox remote (this is how I’ve named the remote on my system which points to the main htbox/allready repo) into a local branch called PR123. I can then checkout the branch using git checkout PR123 to test and review the code.
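Putting that together, a typical review session for me looks something like the following. The remote name, PR id and branch name here are just from my setup; adjust them for your own.

git fetch htbox pull/123/head:PR123
git checkout PR123
# build, run and test the changes locally
git checkout master
git branch -D PR123

The final two commands return to master and delete the temporary review branch once the review is complete.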

In Summary

Hopefully some of these tips are useful. In a lot of cases I’m sure they’re pretty much common sense, and a lot of this comes down to personal preference that may differ from project to project. If you’re just starting out as a GitHub contributor, hopefully you can put some of them to use. GitHub is, for the most part, a great community and a great place to learn and develop your skills.

I’d love to be able to acknowledge the numerous fantastic blog posts and Stack Overflow threads that helped me along the way; unfortunately I didn’t keep notes of the sources as I learned things on my one-year journey. A big thanks nonetheless to anyone who has contributed guidance on Git and GitHub. Every little bit of shared knowledge is a big help to developers who are learning the ropes. It truly is a journey, and as you contribute more and more on GitHub it gets easier! Happy GitHub-ing!


Contributing to allReady A charity open source project from Humanitarian Toolbox

In this post I want to discuss a fantastic open source project called allReady, which I highly encourage all ASP.NET developers to check out. I’ll share my early experience with allReady and how I got started with contributing to an open source project for the first time.

What is allReady?

allReady is a project developed and managed by the charity organisation Humanitarian Toolbox. It is designed to assist with the management of community preparedness campaigns, bringing together the campaign organisers and volunteers to make managing the campaign easier and more efficient for all involved. It’s currently in a private preview release and is being trialled and tested by the American Red Cross with a campaign to install smoke alarms within homes in the Chicago area. Once the pilot is completed it will be made available for many other important campaigns.

The project is developed using ASP.NET Core 1.0 (formerly ASP.NET 5) and uses Entity Framework Core (formerly EF7) for data access. Its live preview sites are hosted in Microsoft Azure.

To summarise the functionality: allReady is a web application which hosts campaigns and their associated activities. The public can view campaigns and volunteer to help with activities where they have the appropriate skills. Activities may have goals, such as installing a certain number of smoke alarms in a given area by a certain date. Campaign organisers can assign tasks to the volunteers and track the progress of the activity that has taken place. Managing the tasks in this way allows the most suitable resources to be aligned with the work required.

Contributing to allReady

I first heard about the project a few months ago on the DotNetRocks podcast, hosted by Carl Franklin and Richard Campbell, and it sounded interesting. I headed over to the Humanitarian Toolbox website and their allReady GitHub repository to take a deeper look. Whilst I had played around with some features of ASP.NET Core and read/watched a fair amount about it, I’d yet to work with a full ASP.NET Core project. So I started by spending some time looking at the code on GitHub and working out how it was put together.

I then spent a bit of time looking through the issues, both closed and new, to get a feel for the direction of the project and the type of work being done. It was clear to me at this point that I wanted to have a go at contributing, but having never even forked anything on GitHub I was a bit unsure of how to get started. I must admit it took me a few weeks before I decided to bite the bullet and have a go at my first contribution. I was a little intimidated to start using GitHub and jumping into an established project. Fortunately the GitHub readme document for allReady gave some good pointers and I spent a bit of time on Google learning how to fork and clone the repository so that I could work with the code.

I was going to spend some time in this post going through the more detailed steps of how to fork a repository and start contributing but in January Dave Paquette posted an extensive blog post covering this in fantastic detail. If you want a great introduction to GitHub and open source contributions I highly recommend that you start with Dave’s post.

It can be a bit daunting knowing where to start and opening your code up to public review – certainly that’s how I felt. I personally wasn’t sure if my code would be good enough and didn’t want to make any stupid mistakes, but after cloning down the code and playing around on my local machine I started to feel more confident about making some changes and trying my first pull request.

I took a look through the open issues and tried to find something small that I felt I could tackle for my first pull request. I wanted something reasonably simple to begin with while I learned the ropes. I found an issue requiring some UI text to display the password requirements for new account registrations so I started working on the code for that. I got it compiling locally, ran the tests and submitted my first pull request (PR). Well done me!

It was promptly reviewed by MisterJames (aka James Chambers) who welcomed me to the project. At this point it’s fair to say that I’d gone a bit off tangent with my PR and it wasn’t quite right for what was needed. Being brutally honest, it wasn’t great code either. James though was very kind in his feedback and did a good job of explaining that although it wasn’t quite right for the issue at hand, he’d be able to help me adjust it so that it was. An offer of some of his time to work remotely on the code wasn’t at all what I had expected and was very generous. James very quickly put some of my early fears to rest and I felt encouraged to continue contributing.

The importance of this experience should not be underestimated, since I’m sure a lot of people worry about making a pull request that is wrong either in scope or technically. My experience quickly put me at ease and made it easy to continue contributing and learning as I went. The team working on allReady do a great job of welcoming and inducting new contributors to the project. While my code was not quite right for the requirement, I wasn’t made to feel rejected or humiliated, and help was offered to make my code more suitable. If you’re looking for somewhere friendly to start out with open source, I can highly recommend allReady.

Since then I’ve made a total of 23 pull requests (PRs) on the project, 22 of which are now closed and merged. After my first PR I picked up some more issues that I felt I could tackle, including a piece of work to rename some of the entities and classes around a more relevant ubiquitous language. It’s really rewarding to have code which you’ve contributed make up part of an open source project, and I feel it’s even better when it’s for such a good cause. As my experience with the codebase, including working with ASP.NET Core, has developed, I have been able to pick up larger and more complex issues. I continue to learn as I go and hopefully get better with each new pull request.

Challenges

Some people will be worried about starting out with open source contributions, but honestly I’ve had no bad experiences with the allReady project. I do want to discuss a couple of areas that could be deemed challenging, and which perhaps put others off from contributing. I hope that in doing so I can set aside any concerns people may have.

The area that I found most technically challenging early on was working with Git and GitHub. I’d only recently been exposed to Git at work and hadn’t yet learned how best to use the commands and processes. I’d never worked with GitHub so that was brand new to me too. Rebasing was the area that at first was a bit confusing and daunting for me. This post isn’t intended to be a full Git or rebasing tutorial, but I did feel it was worth briefly discussing what I learned in this area, since others may be able to use this when getting started with allReady.

Rebasing 101

Rebasing allows us to take commits which have been made by other authors (or yourself on other branches) and replay our commits on top of them. It allows us to keep the base of our work up-to-date and ensure that any merge conflicts are handled before a pull request is submitted/accepted.
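Conceptually, if my issue branch forked from master at commit B, and master has since gained commits E and F, the rebase lifts my commits C and D and replays them on top of F. A simplified sketch of the before and after…

Before rebase:                      After git rebase master:

      C---D  issue-branch                         C'---D'  issue-branch
     /                                           /
A---B---E---F  master               A---B---E---F  master

Note that C' and D' are new commits containing the same changes; because their SHAs change, a force push to your fork is sometimes needed afterwards, as described below.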

I follow the practice of creating a branch for each issue which I start work on. This allows me to keep that work separate and is also required in order to submit a pull request on GitHub. Given that a feature might take a number of days to complete, it’s likely that the project master branch will have moved on by the time you are ready to submit your PR. You could pull in the master branch changes and then merge them into your branch, but this leads to quite a messy commit history and, in the case of one of my PRs, didn’t work well at all. Rebasing is your friend here, and by rebasing your issue branch on top of the up-to-date master you can ensure that the commit timeline is correct and that all of your changes work with the latest code. There are occasions where you’ll need to handle conflicts as the rebase occurs, but often a rebase can be a pretty simple exercise.

Dave Paquette’s post which I highlighted earlier covers all of this, so I recommend you read that for some great guidance first. I thought it might be useful to share my cheat sheet that I noted down a few months ago and which I personally found handy in the early days until I had memorised the flow of commands. Out of context these may not make sense to Git newcomers but hopefully after reading Dave’s guide you may find these a nice quick reference to have to hand.

git checkout master
git fetch htbox
git merge htbox/master
git checkout issue-branch-1
git rebase master
git push origin issue-branch-1 -f

To summarise what these do:

First I check out the master branch and fetch any changes from the htbox remote, merging them into my master branch. This brings my local repository up-to-date with the project code on GitHub. When working with a GitHub project you’ll likely set up two remotes. One is to your forked GitHub repository (origin in my case) and one is to the main project repository (which I named htbox). The first three commands above will update my local master branch to reflect the current project master branch.
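If you haven’t yet added that second remote, it’s a one-time step. The htbox name and URL below reflect my local setup; adjust them for the project and fork you’re working with.

git remote add htbox https://github.com/htbox/allready.git
git remote -v

The second command lists your configured remotes so you can confirm that both origin (your fork) and htbox (the main project) are present.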

I then check out my issue branch, in which I have completed my work for the issue, and rebase it onto my updated local master branch. This will rewind your issue branch’s changes, apply the master branch’s commits and then replay each of your branch’s commits onto the updated base (hence the term rebasing). If any of your commits conflict with master’s changes then you will have to handle those merge conflicts before the rebase operation can continue.
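When a conflict does occur, Git pauses the rebase so that you can resolve it before the remaining commits are replayed. The resolution loop looks roughly like this…

git status                 # shows which files are conflicted
git add <conflicted-file>  # stage each file once you've resolved it
git rebase --continue      # carry on replaying the remaining commits
git rebase --abort         # or give up and restore the pre-rebase state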

Finally, once my issue branch is rebased, my feature has been re-tested to ensure it still works as expected, and the unit tests are all running green, I can push my branch up to my forked repository hosted on GitHub. I tend to push only my specific issue branch, and will sometimes need the -f force flag to ensure that the remote fork takes all of my changes exactly as they appear locally. Forcing is most common in cases where I’m updating an existing PR and have had to rebase a second time after further changes to the master branch.
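As a small aside, newer versions of Git offer a slightly safer variant of the force flag. --force-with-lease refuses the push if the remote branch contains commits you haven’t fetched, which protects you from accidentally overwriting someone else’s work on the same branch.

git push origin issue-branch-1 --force-with-lease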

This leaves me ready to submit a pull request or, if I already have a PR submitted for my branch, GitHub will update that existing PR with my new commits. The project team will be happy as this will make accepting and merging the PR an easier task after the rebase as any conflicts with the current master will have been resolved.

Whilst understanding the steps required to rebase was a learning curve for me, it was, in the end, easier than I had feared it might be. Certainly if you’re familiar with Git before you start then you’ll have an easier time, but I wouldn’t let it put you off if you’re a complete newcomer. It is in fact a great chance to learn Git, which will surely be useful in future projects.

Finding Time

Another challenge that I feel is worth mentioning, since I’m sure many will find it true for them too, is finding time to work on the project. Life is busy and personally finding time to work on the code isn’t always that easy for me. I don’t have children, so I do have more time than those with little ones to take care of, but outside of work I like to socialise with friends, run a side photography business with my wife, play sports and enjoy the outdoors; all things which consume most of my spare time. However I really enjoy being a part of this project, so when I do find myself with spare time, often early in the morning before work, during the weekend or sometimes even during my lunch break, I try to tackle an issue for allReady. There are a range of open issues, some large in scope, some smaller, so you can often find something that you can make time to work on. No one puts pressure on the completion of work and I believe everyone is very appreciative of any time people are able to contribute. I do recommend that you be realistic about what you can tackle, but certainly don’t be put off if you can only pick up pieces of work as and when time allows. If you do start something but find yourself out of time, I recommend leaving a short comment on the issue so that other contributors know what’s happening and when you might be able to pick it up again.

Sometimes, with larger issues it might make sense for them to be broken down into sub-issues, so that PRs can be submitted for smaller pieces of work. This allows the larger goals to be achieved in a more manageable way. If you see something that you want to help with, leave a comment and start a discussion. Again, the team are very approachable and quick to respond to any questions and comments you may have.

Time is valuable to us all and therefore it’s a great thing to donate when you can. Sharing a little time here and there on a project such as allReady can be really precious, and however small a contribution, it’s sure to be gratefully received.

Benefits

Having touched on a few possible challenges I wanted to move onto the benefits of contributing which I think far outweigh those challenges.

Firstly, and in my opinion most importantly, there is the fact that any contribution goes towards an application that will be helping others. If you have time to give to an open source project, this is one which really does represent a very worthwhile cause. One of the goals of Humanitarian Toolbox is to allow those with software development skills to put their knowledge and experience directly towards charitable goals. It’s great to be able to use my software development skills in this way.

Secondly, it’s a great learning experience for both new and experienced developers. With ASP.NET Core currently at RC1 and RTM perhaps only a few months away, this is a great opportunity to work with the new framework and to learn in a practical way. Personally I’ve learnt a lot along the way, including seeing the MediatR library being used. I really like the command/query pattern for data access and I have already used it on a work project. There are a number of experienced developers on the team and I learn a lot from the code reviews on my pull requests and from watching their commits.

Thirdly, it’s a very friendly project to be involved with. The team have been great and I’ve felt very welcomed and involved in the project. Some of the main contributors are now part of the .NET Monsters on Channel 9. It’s great to work with people who really know their stuff. This makes it a great place to start out with open source contributions, even with no prior experience of contributing on GitHub.

Code-a-thon

On the 20th February, Humanitarian Toolbox held a code-a-thon at two physical locations in the US and Canada, as well as taking remote contributions from others on the project. I set aside my day to work on some issues from the UK. It was great being part of a wider event, even if remotely. I recommend that you follow @htbox on Twitter for news of any future events that you can take part in. If you’re close enough to take part physically then it looked like good fun during the live link-up on Google Hangouts. As well as the allReady project, people were contributing to other applications with charitable goals, such as a missing children’s app for Minnesota. You can read more about the event in Rocky Lhotka’s blog post.

How you can help and get started

There’s no better way than to jump into the GitHub project and start contributing. Even non-developers can get involved by helping to test the application, raising any issues that they experience and providing suggestions for improvements. If you have C# and ASP.NET experience I’m sure you’ll quickly get up to speed after checking out the codebase. If you’re looking for good issues to ease in with, check out any tagged with the green jump-in label. Those are smaller, simpler issues that are great for newcomers to the project or to GitHub in general. Once you’ve done a few fixes and pull requests for those issues you’ll be ready to take a look at some of the more complex issues.

If you need help getting started or are unsure of how to contribute then the team will be sure to offer help and advice along the way.

Summary of links

As I’ve included quite a lot of links in this post, here’s a quick roundup, along with a few others I thought would be useful:

http://htbox.org

http://htbox.github.io/

http://www.davepaquette.com/archive/2016/01/24/Submitting-Your-First-Pull-request.aspx

https://github.com/htbox/allready

https://www.youtube.com/channel/UCMHQ4xrqudcTtaXFw4Bw54Q – Community Standup Videos

http://www.lhotka.net/weblog/HTBoxTwinCitiesCodeathonFeb2016Recap.aspx
