ASP.NET Core Anatomy (Part 3) – UseMvc

Dissecting and understanding the internals of ASP.NET Core

In parts one and two of this series I looked at the two main IServiceCollection extensions (AddMvcCore and AddMvc) in ASP.NET Core. These are used to add in the required MVC services when you plan to use the MVC middleware in your application.

The next thing you are required to do to enable MVC in your ASP.NET Core application is to include the UseMvc IApplicationBuilder extension from the Configure method of your Startup class. This method registers the MVC middleware into your application pipeline so that the MVC framework can handle requests and return the appropriate response (usually a view result or some JSON). In this post I will take a look at what happens when the UseMvc method is called during the application startup.

As with the earlier parts of this series, I’m currently using the rel/1.1.2 codebase from the MVC repository for my investigations. I’m still using the slightly older project.json based code as at the moment there don’t seem to be simple ways to debug into multiple ASP.NET Core source code assemblies in VS2017.

UseMvc is an extension on the IApplicationBuilder which takes an Action delegate of IRouteBuilder. The IRouteBuilder will be used to configure the routing for MVC. There is a more basic variant of the UseMvc method which does not require any parameters. This method simply calls down into the main version, passing in an empty delegate like so…
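Based on the rel/1.1.2 source, that parameterless overload is essentially this:

public static IApplicationBuilder UseMvc(this IApplicationBuilder app)
{
    if (app == null)
    {
        throw new ArgumentNullException(nameof(app));
    }

    // Defer to the main overload with a route configuration that adds nothing.
    return app.UseMvc(routes => { });
}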

The main UseMvc method looks like this…
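A lightly abridged sketch of it, based on the rel/1.1.2 source (the null checks and resource-based exception message are trimmed here):

public static IApplicationBuilder UseMvc(
    this IApplicationBuilder app,
    Action<IRouteBuilder> configureRoutes)
{
    // Verify that AddMvc/AddMvcCore was called during ConfigureServices.
    if (app.ApplicationServices.GetService(typeof(MvcMarkerService)) == null)
    {
        throw new InvalidOperationException("Unable to find the required services...");
    }

    var middlewareFilterBuilder = app.ApplicationServices.GetRequiredService<MiddlewareFilterBuilder>();
    middlewareFilterBuilder.ApplicationBuilder = app.New();

    var routes = new RouteBuilder(app)
    {
        DefaultHandler = app.ApplicationServices.GetRequiredService<MvcRouteHandler>(),
    };

    configureRoutes(routes);

    // The attribute route is always inserted at index zero.
    routes.Routes.Insert(0, AttributeRouting.CreateAttributeMegaRoute(app.ApplicationServices));

    return app.UseRouter(routes.Build());
}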

Let’s dissect what this method does. First it does a check to see if the IServiceProvider has an MvcMarkerService service registered in it. To do this it uses the ApplicationServices property which exposes the current IServiceProvider. Calling GetService on the service provider will try to locate a registered implementation for the requested Type. The MvcMarkerService is registered when AddMvcCore executes and as a result, if it is not found, it indicates that either AddMvc or AddMvcCore was not called during ConfigureServices. Marker services like this are used in quite a few places to help calling code verify proper registration of fundamental dependencies before they try to execute. The MvcMarkerService is just an empty class with no actual code.

Next, UseMvc requests a MiddlewareFilterBuilder from the ServiceProvider and sets its ApplicationBuilder using the IApplicationBuilder.New() method. The ApplicationBuilder creates and returns a new instance copy of itself when this method is called.

Next, UseMvc initialises a new RouteBuilder, setting the default handler to the registered MvcRouteHandler. At this point, the DI magic really kicks in and a bunch of dependencies start getting instantiated. MvcRouteHandler is requested and because it has some dependencies in its constructor, various other classes get initialised. Each constructor for these classes can also require additional dependencies which then also get created. This is a great example of how the DI system works. While all of the interfaces and concrete implementations are registered in the container during ConfigureServices, the actual objects are only created at the point they are required to be injected into something. By new-ing up the RouteBuilder, this snowball of object creation begins to occur, until all of the required dependencies have been constructed. Some of these are registered as singletons and will then live for the lifetime of the application, being returned whenever they are requested from the ServiceProvider. Others might be constructed for each request or in some cases, constructed uniquely every time they are required. Understanding the details of how dependency injection works and specifically the ASP.NET Core ServiceProvider is out of scope for this post.

After creating the RouteBuilder, the Action<IRouteBuilder> delegate action is called which takes the RouteBuilder as its only parameter. In the MvcSandbox sample that I’m using to debug and dissect MVC we call the UseMvc method, passing in a delegate function via a lambda. This function maps a single route named “default” as follows.
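The MvcSandbox registration looks much like the familiar template code:

app.UseMvc(routes =>
{
    routes.MapRoute(
        name: "default",
        template: "{controller=Home}/{action=Index}/{id?}");
});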

This calls the MapRoute method which is an extension on the IRouteBuilder. The main MapRoute method looks like this:
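Sketched from the rel/1.1.2 Routing source (the guard clauses are summarised in a comment):

public static IRouteBuilder MapRoute(
    this IRouteBuilder routeBuilder,
    string name,
    string template,
    object defaults,
    object constraints,
    object dataTokens)
{
    // Guard clause omitted: the builder must have a DefaultHandler set
    // (the MvcRouteHandler in our case) or an exception is thrown.

    var inlineConstraintResolver = routeBuilder
        .ServiceProvider
        .GetRequiredService<IInlineConstraintResolver>();

    routeBuilder.Routes.Add(new Route(
        routeBuilder.DefaultHandler,
        name,
        template,
        new RouteValueDictionary(defaults),
        new RouteValueDictionary(constraints),
        new RouteValueDictionary(dataTokens),
        inlineConstraintResolver));

    return routeBuilder;
}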

This requests an instance of an IInlineConstraintResolver which is resolved by the ServiceProvider to a DefaultInlineConstraintResolver. This class takes an IOptions<RouteOptions> in its constructor.

The Microsoft.Extensions.Options assembly, as part of its work, calls into the MvcCoreRouteOptionsSetup.Configure method which takes a RouteOptions parameter. This adds a new constraint map of type KnownRouteValueConstraint to the Dictionary of constraint maps. This Dictionary is initialised with a number of default constraints when the RouteOptions is first constructed. I’ll look into the routing code in a little more detail in a future post.

A new Route object is constructed using the name and template provided. In our case this is added to the RouteBuilder.Routes list. By this point, running the sample MvcSandbox application, our route builder now has a single route added.

Note that the MvcApplicationBuilderExtensions class also includes an extension method called UseMvcWithDefaultRoute. This method includes a call to UseMvc, passing in a hardcoded default route.
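From the rel/1.1.2 source, it’s essentially:

public static IApplicationBuilder UseMvcWithDefaultRoute(this IApplicationBuilder app)
{
    if (app == null)
    {
        throw new ArgumentNullException(nameof(app));
    }

    return app.UseMvc(routes =>
    {
        routes.MapRoute(
            name: "default",
            template: "{controller=Home}/{action=Index}/{id?}");
    });
}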

This route is defined with the same name and template as defined in the MvcSandbox application. Therefore it could have been used as a slight code saver within the MvcSandbox Startup class. For the most basic of MVC applications it’s possible that using this helper extension method might be enough for your application routing requirements. In most real-world cases I suspect you’ll end up passing in a more complete customised set of routes. It’s a nice shorthand though if you want to start with the basic routing template. It’s also possible to call UseMvc() without a parameter and instead use the attribute routing approach. 

Next, the static AttributeRouting.CreateAttributeMegaRoute method is called and the resulting route is added to the Routes List (at index zero) on the RouteBuilder. CreateAttributeMegaRoute is documented with the summary “Creates an attribute route using the provided services and provided target router.” and it looks like this:
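A sketch close to the rel/1.1.2 source:

public static IRouter CreateAttributeMegaRoute(IServiceProvider services)
{
    if (services == null)
    {
        throw new ArgumentNullException(nameof(services));
    }

    return new AttributeRoute(
        services.GetRequiredService<IActionDescriptorCollectionProvider>(),
        services,
        actions =>
        {
            // MvcAttributeRouteHandler is registered as transient, so a new
            // instance is returned each time this factory runs.
            var handler = services.GetRequiredService<MvcAttributeRouteHandler>();
            handler.Actions = actions;
            return handler;
        });
}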

This method creates a new AttributeRoute which implements the IRouter interface. Its constructor looks like this:
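Abridged to its signature and assignments (the null checks on all three arguments are trimmed, and the field names are as I recall them from the source):

public AttributeRoute(
    IActionDescriptorCollectionProvider actionDescriptorCollectionProvider,
    IServiceProvider services,
    Func<ActionDescriptor[], IRouter> handlerFactory)
{
    // Null checks omitted for brevity.
    _actionDescriptorCollectionProvider = actionDescriptorCollectionProvider;
    _services = services;
    _handlerFactory = handlerFactory;
}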

In its constructor it takes an IActionDescriptorCollectionProvider, an IServiceProvider and a Func which takes an array of ActionDescriptors and returns an IRouter. By default, with the AddMvc registered services, it will get an instance of ActionDescriptorCollectionProvider which is a singleton object registered with the ServiceProvider. ActionDescriptors represent the available MVC actions that have been created and discovered within the application. We’ll look at those another time.

The code (inside the CreateAttributeMegaRoute method) which creates the new AttributeRoute uses a lambda to define the code for the Func<ActionDescriptor[], IRouter>. In this case the delegate function requests an MvcAttributeRouteHandler from the ServiceProvider. Because this is registered as transient, the ServiceProvider will return a new instance of an MvcAttributeRouteHandler each time one is requested. The delegate code then sets the Actions property (an array of ActionDescriptors) on the MvcAttributeRouteHandler using the incoming ActionDescriptors array and finally the new handler is returned.

Back to UseMvc; it finishes by calling the UseRouter extension on the IApplicationBuilder. The object passed into UseRouter is created by calling Build on the RouteBuilder. This method will add the routes into the RouteCollection. Internally the RouteCollection keeps track of which routes are named and which are unnamed. In our MvcSandbox example we end up with a named “default” route and an unnamed AttributeRoute.

UseRouter has two signatures. In this case we’re passing in an IRouter so the following method is called:
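Again sketched from the rel/1.1.2 source, with the exception message shortened:

public static IApplicationBuilder UseRouter(this IApplicationBuilder builder, IRouter router)
{
    // Verify that AddRouting was called during ConfigureServices.
    if (builder.ApplicationServices.GetService(typeof(RoutingMarkerService)) == null)
    {
        throw new InvalidOperationException("Unable to find the required services...");
    }

    return builder.UseMiddleware<RouterMiddleware>(router);
}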

This does a check for a RoutingMarkerService within the ServiceProvider. Assuming this is registered as expected, it adds the RouterMiddleware into the middleware pipeline. It is this middleware which will use the IRouter to try and match up requests to the controller and action within MVC which should handle them. The details of this process will feature in a future blog post.

At this point the application pipeline is configured and the application is ready to start receiving requests. That’s a good point to stop for this post.

Summary

In this post we’ve looked at UseMvc and seen that it sets up the MiddlewareFilterBuilder which will be used later. It then also gets an IRouter via a RouteBuilder and much of the work at this stage is around registering routes. Once everything is set up, the routing middleware is registered into the pipeline. It’s this code that knows how to inspect incoming requests and map the paths to the appropriate controllers and actions that can handle them.

ASP.NET Core Anatomy (Part 2) – AddMvc

Dissecting and understanding the internals of ASP.NET Core

Welcome to part 2 of my series where I dissect the internals of the ASP.NET Core source code. Eagle-eyed readers will have noticed a slight change to my title for this post. I decided to drop MVC from it. While I’m mostly interested in MVC anatomy at this point, I’ve realised that as I work through the flows, I’m stumbling into features which are part of the main ASP.NET Core framework, such as IRouter. Therefore, it feels sensible to allow myself scope to drift into discussing those parts and as a result, ASP.NET Core Anatomy is a more accurate series title.

If you watched the ASP.NET Community Standup this week, you might have heard Jon Galloway suggesting that these posts be consumed along with a good cup of coffee or tea. So before I begin, I’ll give you time to get yourself a drink!…

Right; all set? In part 1, I looked at what happens inside the AddMvcCore extension method. In this post I want to look at the AddMvc extension method. As in part 1 I am using the rel/1.1.2 tagged code for ASP.NET Core MVC. Things can and will likely change in the code base in the future. Therefore, consider this general guidance which should only be considered current if you are using the same version of the MVC packages. My starting point again is the MvcSandbox project. My ConfigureServices method in Startup.cs looks like this:
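It’s just the standard single-line registration:

public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();
}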

This makes a single call to the AddMvc extension method, which looks like this:
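A sketch close to the rel/1.1.2 source:

public static IMvcBuilder AddMvc(this IServiceCollection services)
{
    if (services == null)
    {
        throw new ArgumentNullException(nameof(services));
    }

    var builder = services.AddMvcCore();

    builder.AddApiExplorer();
    builder.AddAuthorization();

    AddDefaultFrameworkParts(builder.PartManager);

    // The order these are added affects the options setup order.
    builder.AddFormatterMappings();
    builder.AddViews();
    builder.AddRazorViewEngine();
    builder.AddCacheTagHelper();
    builder.AddDataAnnotations();
    builder.AddJsonFormatters();
    builder.AddCors();

    return new MvcBuilder(builder.Services, builder.PartManager);
}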

This method first calls the AddMvcCore extension method that I looked at in part 1. Therefore all of the same setup and service registration occurs, including the creation of an ApplicationPartManager. The return type from AddMvcCore is an MvcCoreBuilder which AddMvc holds in a variable. As I noted in part 1, this provides access to the IServiceCollection and ApplicationPartManager generated by AddMvcCore.

AddMvc calls an extension called AddApiExplorer on the IMvcCoreBuilder. This simply adds two service registrations – ApiDescriptionGroupCollectionProvider and DefaultApiDescriptionProvider – to the builder’s IServiceCollection.

Next it calls the AddAuthorization extension on IMvcCoreBuilder. The relevant methods are:
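Roughly, from the rel/1.1.2 source (the private helper’s exact shape may differ slightly):

public static IMvcCoreBuilder AddAuthorization(this IMvcCoreBuilder builder)
{
    if (builder == null)
    {
        throw new ArgumentNullException(nameof(builder));
    }

    AddAuthorizationServices(builder.Services);
    return builder;
}

private static void AddAuthorizationServices(IServiceCollection services)
{
    // The core authorization services from Microsoft.AspNetCore.Authorization.
    services.AddAuthorization();

    services.TryAddEnumerable(
        ServiceDescriptor.Transient<IApplicationModelProvider, AuthorizationApplicationModelProvider>());
}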

First it calls down to an extension, AddAuthorization in the Microsoft.AspNetCore.Authorization assembly. I won’t go in depth about that in this post, but this adds the core services to enable authorization. After this, AuthorizationApplicationModelProvider is added to the ServiceCollection.

Next, back in AddMvc it uses a private static method AddDefaultFrameworkParts which adds TagHelpers and Razor AssemblyParts.
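That private method looks roughly like this in the rel/1.1.2 source:

private static void AddDefaultFrameworkParts(ApplicationPartManager partManager)
{
    var mvcTagHelpersAssembly = typeof(InputTagHelper).GetTypeInfo().Assembly;
    if (!partManager.ApplicationParts.OfType<AssemblyPart>().Any(p => p.Assembly == mvcTagHelpersAssembly))
    {
        partManager.ApplicationParts.Add(new AssemblyPart(mvcTagHelpersAssembly));
    }

    var mvcRazorAssembly = typeof(UrlResolutionTagHelper).GetTypeInfo().Assembly;
    if (!partManager.ApplicationParts.OfType<AssemblyPart>().Any(p => p.Assembly == mvcRazorAssembly))
    {
        partManager.ApplicationParts.Add(new AssemblyPart(mvcRazorAssembly));
    }
}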

First this method looks up the Assembly containing the InputTagHelper class which is Microsoft.AspNetCore.Mvc.TagHelpers. It then checks if the ApplicationPartManager.ApplicationParts list already contains a matching assembly. If not, it adds it. This is then repeated for Razor, using the assembly containing UrlResolutionTagHelper which is Microsoft.AspNetCore.Mvc.Razor. This also gets added to ApplicationParts if it is not already there.

Back in AddMvc again, a number of items are added to the ServiceCollection via extension methods. AddMvc calls AddFormatterMappings which registers the FormatFilter class.

Next it calls AddViews which does a few things. First it adds services for DataAnnotations using the AddDataAnnotations extension. Then it adds the ViewComponentFeatureProvider to the ApplicationPartManager.FeatureProviders. Finally it registers a number of view based services, mostly as singletons. I won’t show the whole class here as it’s quite big. The key parts are:
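The key parts, sketched from the rel/1.1.2 source with the long service registration method omitted:

public static IMvcCoreBuilder AddViews(this IMvcCoreBuilder builder)
{
    if (builder == null)
    {
        throw new ArgumentNullException(nameof(builder));
    }

    builder.AddDataAnnotations();
    AddViewComponentApplicationPartsProviders(builder.PartManager);
    AddViewServices(builder.Services); // registers the many view-related services
    return builder;
}

private static void AddViewComponentApplicationPartsProviders(ApplicationPartManager manager)
{
    if (!manager.FeatureProviders.OfType<ViewComponentFeatureProvider>().Any())
    {
        manager.FeatureProviders.Add(new ViewComponentFeatureProvider());
    }
}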

Next, AddMvc calls the AddRazorViewEngine extension method which looks like this:
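Again close to the rel/1.1.2 source, with the two private helpers left out:

public static IMvcCoreBuilder AddRazorViewEngine(this IMvcCoreBuilder builder)
{
    if (builder == null)
    {
        throw new ArgumentNullException(nameof(builder));
    }

    builder.AddViews();
    AddRazorViewEngineFeatureProviders(builder);
    AddRazorViewEngineServices(builder.Services);
    return builder;
}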

This calls AddViews again which made me a little curious. It seems to be redundant to call AddViews prior to AddRazorViewEngine, when we can see that AddRazorViewEngine will do that for us and register those services anyway. It’s a safe operation, since all of these extension methods use the TryAdd… style methods. Those prevent duplicate registrations of implementations of services. Despite the safety of this approach, it was still strange to see this. To satisfy my curiosity I opened an issue on the Mvc GitHub repo, to check if this was an issue or actually by design. I got a very quick response from Ryan Nowak at Microsoft confirming that it was a design decision to make the dependencies a little more explicit and easy to see. Thanks Ryan!

AddRazorViewEngine adds three more feature providers to the ApplicationPartManager.FeatureProviders – TagHelperFeatureProvider, MetadataReferenceFeatureProvider and ViewsFeatureProvider. It concludes by registering the services needed for razor views to function.

AddMvc then uses another extension method to add services for the Cache Tag Helper feature. This is a very simple extension method which just registers 5 required services. Then back in AddMvc, AddDataAnnotations is called again. AddViews had already added this previously. This is for the same design decision that I discussed before.

AddMvc then calls the AddJsonFormatters extension method which adds a couple of items to the ServiceCollection.

The final extension method that gets called is AddCors which uses the extension method in the Microsoft.AspNetCore.Cors assembly to add CORS-related services.

With the service registrations completed, AddMvc creates a new MvcBuilder, passing in the current ServiceCollection and the ApplicationPartManager which are stored into its properties. Much like the MvcCoreBuilder we looked at in the first post and which we started with at the beginning of AddMvc, the MvcBuilder is described as allowing fine grained configuration of MVC services.

At the end of AddMvc I had 148 registered services in the services collection. 86 additional services are registered above and beyond those that AddMvcCore has added. The ApplicationPartManager had 3 ApplicationParts and 5 FeatureProviders stored in its lists.

Summary

That concludes my dissection of AddMvc. A little less exciting I’m afraid than the AddMvcCore method which laid more groundwork by creating the ApplicationPartManager. Mostly we call extensions to register extra services. As you can see, many of those services are more specific to web applications which need to work with views. For API only applications, it’s possible you won’t need most of these. In the API projects I’ve been working with, we start with AddMvcCore and simply add in the few extra items we need using the builder extension methods. Here is an example of what that looks like in practice:
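Something along these lines (the exact set of builder extensions will depend on your API’s needs):

public void ConfigureServices(IServiceCollection services)
{
    services.AddMvcCore()
        .AddAuthorization()
        .AddJsonFormatters()
        .AddCors();
}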

In the next post I will look at what happens when we call UseMvc in the Startup.cs Configure method. That gets a little more exciting!

ASP.NET Core MVC Anatomy (Part 1) – AddMvcCore

Dissecting and understanding the internals of ASP.NET Core MVC

Welcome to part 1 of a new series where I will dissect the internals of the MVC source code. I am hoping to learn and demonstrate how it works deep (sometimes very deep) under the hood. In this series I will be studying the guts of MVC. If you’re squeamish, you might want to look away! Personally, I get a lot of value from reading, debugging, reading again, scratching my head, debugging again and eventually understanding (or at least thinking I understand) the source code of ASP.NET MVC. By learning how the framework behaves I can better utilise it and understand why things are happening within my own applications.

As I progress through the series, I will be making my best interpretation of what the code is doing based on what I read and observe. I can’t promise to understand and interpret everything 100% correctly but I will do my best! As I’ll be working my way through it in small chunks, I may learn new things that put earlier posts into more context. If that happens, I’ll try to update those earlier posts. It’s fairly hard to succinctly and coherently explain code in writing. I’ll try to keep the explanation clear and will be dropping in chunks of code from the MVC source to make it a little easier to follow. If it’s still not clear, you can (and in fact I recommend you do) spend time looking at the full code yourself, debugging where necessary. I hope this series will be of interest to some of the ASP.NET community who, like me, enjoy understanding how things work. Do let me know your feedback so I know if I’m hitting the right notes.

AddMvcCore

In this post I’ll be dissecting what AddMvcCore is actually doing for us and focusing on classes such as ApplicationPartManager along the way. To do this I’m using the project.json based rel/1.1.2 source code, loading up the included MVC Sandbox project to begin my debugging. As new releases are made it’s possible that some of these classes and methods will change, especially the internal ones. Always refer to GitHub for the latest source! I’ve updated the MvcSandbox ConfigureServices as follows…
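The sandbox now registers just the core services:

public void ConfigureServices(IServiceCollection services)
{
    services.AddMvcCore();
}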

AddMvcCore is an extension method on the IServiceCollection. Using extension methods is the common pattern for registering groups of related services into the services collection. AddMvcCore is one of two extension methods which can be used to register the MVC services when building an MVC application. AddMvcCore adds a more limited subset of the MVC services than the more commonly used AddMvc method. It’s useful when your application is not taking advantage of all the MVC features. In some simple applications this might be enough. For example, when building REST APIs, where I don’t need the Razor components, I often start with AddMvcCore. You can always add some extra services manually after AddMvcCore, or if you find yourself needing it, move to the more complete AddMvc method instead.

AddMvcCore looks like this:
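Condensed from the rel/1.1.2 source:

public static IMvcCoreBuilder AddMvcCore(this IServiceCollection services)
{
    if (services == null)
    {
        throw new ArgumentNullException(nameof(services));
    }

    var partManager = GetApplicationPartManager(services);
    services.TryAddSingleton(partManager);

    ConfigureDefaultFeatureProviders(partManager);
    ConfigureDefaultServices(services);
    AddMvcCoreServices(services);

    var builder = new MvcCoreBuilder(services, partManager);
    return builder;
}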

One of the first things AddMvcCore does is to get an ApplicationPartManager via a static method GetApplicationPartManager which takes in the existing IServiceCollection. This method looks like this:
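Sketched from the rel/1.1.2 source (GetServiceFromCollection is a small private helper in the same class):

private static ApplicationPartManager GetApplicationPartManager(IServiceCollection services)
{
    var manager = GetServiceFromCollection<ApplicationPartManager>(services);
    if (manager == null)
    {
        manager = new ApplicationPartManager();

        var environment = GetServiceFromCollection<IHostingEnvironment>(services);
        if (string.IsNullOrEmpty(environment?.ApplicationName))
        {
            return manager;
        }

        var parts = DefaultAssemblyPartDiscoveryProvider.DiscoverAssemblyParts(environment.ApplicationName);
        foreach (var part in parts)
        {
            manager.ApplicationParts.Add(part);
        }
    }

    return manager;
}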

It first checks in the currently registered services to see if an ApplicationPartManager is already registered. By default this would not be the case. Although unlikely, it is possible that you or a prior service might have registered one before AddMvcCore is called. If an ApplicationPartManager doesn’t already exist, a new one is constructed.

Next the code will try to populate the ApplicationParts property on the ApplicationPartManager. To do this it first asks for an IHostingEnvironment from the services collection. If it can access a matching instance for IHostingEnvironment it uses it to get the application/assembly name. This then gets passed into the static DefaultAssemblyPartDiscoveryProvider.DiscoverAssemblyParts method which will return an IEnumerable<ApplicationPart>.

DiscoverAssemblyParts will first get the Assembly object and a DependencyContext for the supplied assembly name. In my debugging case this was “MvcSandbox”. It then passes these values into GetCandidateAssemblies which calls down into GetCandidateLibraries. Those methods look like this:
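Both methods, lightly trimmed from the rel/1.1.2 source (the XML doc comment is stripped here but quoted below):

internal static IEnumerable<Assembly> GetCandidateAssemblies(Assembly entryAssembly, DependencyContext dependencyContext)
{
    if (dependencyContext == null)
    {
        // Use the entry assembly as the sole candidate.
        return new[] { entryAssembly };
    }

    return GetCandidateLibraries(dependencyContext)
        .SelectMany(library => library.GetDefaultAssemblyNames(dependencyContext))
        .Select(Assembly.Load);
}

internal static IEnumerable<RuntimeLibrary> GetCandidateLibraries(DependencyContext dependencyContext)
{
    if (ReferenceAssemblies == null)
    {
        return Enumerable.Empty<RuntimeLibrary>();
    }

    var candidatesResolver = new CandidateResolver(dependencyContext.RuntimeLibraries, ReferenceAssemblies);
    return candidatesResolver.GetCandidates();
}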

The comment above GetCandidateLibraries (stripped in the code above) explains what it’s going to do:

“Returns a list of libraries that references the assemblies in <see cref=”ReferenceAssemblies”/>. By default it returns all assemblies that reference any of the primary MVC assemblies while ignoring MVC assemblies.”

So to digest that, what we can expect is a list of any assemblies from our solution which reference any of the MVC assemblies.

ReferenceAssemblies is a static HashSet of strings defined at the top of the DefaultAssemblyPartDiscoveryProvider class. It contains the full assembly names for the 13 default MVC related assemblies.

GetCandidateLibraries uses a CandidateResolver class to locate and return the candidates. The CandidateResolver is constructed with the List of RuntimeLibrary objects and the HashSet containing the ReferenceAssemblies. Each of the RuntimeLibrary objects is iterated over and added to a dictionary. During the loop, a check is made to ensure that each dependency name is unique. If not, an exception will be thrown.

Each dependency (RuntimeLibrary) is stored in the dictionary as a new Dependency object. This object includes a DependencyClassification property that is used to eventually filter out the required libraries (candidates). There are four DependencyClassifications defined in an enum.
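In the rel/1.1.2 source the enum looks roughly like this (the exact member names may differ slightly between versions):

private enum DependencyClassification
{
    Unknown = 0,
    Candidate = 1,
    NotCandidate = 2,
    MvcReference = 3
}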

When each Dependency is created, it’s initially marked as Unknown unless it is found in the ReferenceAssemblies HashSet. In that case it is instead marked as MvcReference.

The CandidateResolver.GetCandidates method is called which then begins traversing the dictionary of Dependencies. This method combined with ComputeClassification walks the dependency tree. For each Dependency it checks through the child dependencies, breaking out of the loop if any of them is already marked as a Candidate or MvcReference. At this point it will also mark the parent Dependency as a Candidate type. When this process is complete an IEnumerable<RuntimeLibrary> is returned, containing any identified candidates. In my case, using the sample, it is just the MvcSandbox library which is marked as a Candidate.

DiscoverAssemblyParts converts any returned candidates into a new AssemblyPart. This object simply wraps the Assembly and mainly exposes some of the Assembly properties such as its name and types. I’ll probably explore that class in a separate post.

Finally, within GetApplicationPartManager the AssemblyParts are added to the ApplicationPartManager which is then returned. As a reminder, here is the relevant snippet of code that does that…
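This is the tail end of the GetApplicationPartManager method shown earlier:

var parts = DefaultAssemblyPartDiscoveryProvider.DiscoverAssemblyParts(environment.ApplicationName);
foreach (var part in parts)
{
    manager.ApplicationParts.Add(part);
}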

The returned instance of the ApplicationPartManager is added to the services collection as a Singleton back inside the AddMvcCore extension method.

Following so far? 🙂

The next thing AddMvcCore does is call a static ConfigureDefaultFeatureProviders method which takes in the ApplicationPartManager. This will add a new ControllerFeatureProvider to the FeatureProviders list if one is not already included.
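From the rel/1.1.2 source, it’s a very small method:

private static void ConfigureDefaultFeatureProviders(ApplicationPartManager manager)
{
    if (!manager.FeatureProviders.OfType<ControllerFeatureProvider>().Any())
    {
        manager.FeatureProviders.Add(new ControllerFeatureProvider());
    }
}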

The ControllerFeatureProvider will be used to discover controllers from the list of ApplicationPart instances. I’ll look at the ControllerFeatureProvider in a future post. For now, we’ll continue looking at the final steps inside AddMvcCore now that the ApplicationPartManager has been updated.

First a private ConfigureDefaultServices method is called, which calls the AddRouting extension method from Microsoft.AspNetCore.Routing. I won’t cover this here, but it’ll add all of the necessary services and setup to enable the routing functionality.

AddMvcCore then calls another private method – AddMvcCoreServices – This method registers each of the required core services for MVC to function. This includes things like the options framework, action discovery, action selection, controller factories, action invokers,  model binding and validation.

Finally, AddMvcCore constructs a new MvcCoreBuilder object with the services collection and the ApplicationPartManager as parameters. This class is documented as follows:

“Allows fine grained configuration of essential MVC services.”

The MvcCoreBuilder is returned as the result of the AddMvcCore extension method. It exposes the IServiceCollection and ApplicationPartManager as properties. The MvcCoreBuilder and its extension methods are referenced in a few places to allow extra configuration to occur after the initial AddMvcCore call. In fact, the more complete AddMvc extension method calls AddMvcCore first and then uses the MvcCoreBuilder to perform the extra service configuration.

Summary

Explaining the flow above in a way that is reasonably clear is not easy. So, let me wrap up by summarising what we’ve witnessed. This is all very low level MVC code that ultimately adds MVC’s required services to the IServiceCollection. Along the way it creates the ApplicationPartManager, which as we’ll look at later is used to start establishing the internal ApplicationModel that MVC uses. We haven’t seen very much direct functionality, but as I follow the startup flow we will get to the more interesting pieces now that the groundwork has been laid.

That completes my initial deep dive into the flow of AddMvcCore. In total it registers 46 additional items into the IServicesCollection. I’ll continue the journey in a future post where I’ll look at the further work that the AddMvc extension method performs.

Customising ASP.NET MVC Core Behaviour with an IApplicationModelConvention

I’m currently building a number of ASP.NET Core API applications which together form a new product we’re developing. As we defined the requirements, we came up with a few behaviours that we wanted to apply across the APIs by default. Two examples that I’ll use in this blog post are, applying a prefix to all of our routes and securing all of the endpoints by default.

In the case of the route prefix we needed this in order to support sharing a single AWS application load balancer between the APIs. We have three API microservices working together as part of our system, and each needed its own unique prefix so that the load balancer could direct the traffic to the correct service. We use attribute routing across our APIs and wanted a way to be able to apply a prefix, without hardcoding it onto every controller.

The second requirement from our security discussions, was that we wanted every endpoint to be locked down to the most restrictive authorization policy by default, even when no authorization attribute is applied. We prefer this hardened by default behaviour to avoid the possibility of unsecured access when changes are made. Using this approach we open up access on endpoints on a case-by-case basis.

After a bit of searching on the Internet, followed by a dive into the MVC source, I found a set of interfaces that allowed us to begin customising how MVC Core behaves. Enter the following interfaces…

  • IApplicationModelConvention
  • IControllerModelConvention
  • IActionModelConvention
  • IParameterModelConvention

What are Conventions?

Conventions exist throughout MVC and allow us to work with the framework without having to specify everything manually. Conventions are behaviours that MVC will apply by default and we can lean on these conventions in many cases. An example is the fact that when referencing a controller in a route or Razor view we can use its name without the “Controller” suffix each time. MVC will handle that by convention.

However, as we develop more complex applications, we may want to define our own rules and conventions, or customise the existing behaviours. The four interfaces listed above provide a mechanism to allow us to do just that.

How Can We Create Them?

To start developing a new convention, you create a class which implements one of these interfaces. When the MVC application is loading, we can register these new conventions, which will then be applied as MVC builds up a model in memory that represents the application. Each of these interfaces gets more specific in terms of what they target and they are applied in this order as well. More specific conventions will override less specific conventions set higher up in the application model.

In this post I’ll demonstrate creating a couple of conventions. In order to keep everything easy to understand I have reduced the complexity of the conventions themselves as I want to focus on the mechanics of how we create them.

Defining the API

For this example I’ve put together a very basic single Controller API that will allow me to demonstrate the conventions. Here is what the Controller looks like.
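A representative version of it (the original controller may have differed slightly, but these are the two actions referenced later in the post):

[Route("Home")]
public class HomeController : Controller
{
    [HttpGet("Index")]
    public IActionResult Index()
    {
        // No authorization attributes; our convention will secure this by default.
        return Ok("Secured by default");
    }

    [HttpGet("Open")]
    [AllowAnonymous]
    public IActionResult OpenEndpoint()
    {
        // Explicitly opted out of the default authorization policy.
        return Ok("Anonymous access allowed");
    }
}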

Route Prefix Convention

As I mentioned above, in our APIs we use attribute routing to define the routes for all of our endpoints. I prefer this to the route builder approach as it makes documenting the API more explicit in my view. It’s easier for someone working with the code to see the paths that we are using, and also means we can use tools like Swashbuckle to build a swagger interface over our endpoints.

As I also stated, we host our APIs behind an AWS application load balancer. In order to share a load balancer, we need the traffic for each API to come in from a distinct path under a single shared domain. This allows us to use the AWS load balancer path routing to direct the traffic into the correct API application. Due to the way we engineer things, we wanted to be able to potentially update this path in the future, so we didn’t want it to have to be defined as part of the route attributes.

In our case a solution we came up with was to use a convention that would inspect the routes applied to each Controller, updating them at startup, so that the correct prefix was applied.

For this convention we will implement IControllerModelConvention which will allow us to work with the Controller and below. Since the attributes are defined on the Controllers, this is perfect for our needs. The convention I ended up using looks like this…
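A simplified sketch of the convention (the prefix is hard-coded here; in our real code it comes from configuration via the constructor):

public class RoutePrefixConvention : IControllerModelConvention
{
    // The prefix to apply; hard-coded for this demo.
    private readonly AttributeRouteModel _routePrefix =
        new AttributeRouteModel(new RouteAttribute("Prefix"));

    public void Apply(ControllerModel controller)
    {
        foreach (var selector in controller.Selectors)
        {
            if (selector.AttributeRouteModel != null)
            {
                // The controller already has a route attribute; prepend our prefix to it.
                selector.AttributeRouteModel = AttributeRouteModel.CombineAttributeRouteModel(
                    _routePrefix,
                    selector.AttributeRouteModel);
            }
            else
            {
                // No route attribute; the controller operates under the prefix alone.
                selector.AttributeRouteModel = _routePrefix;
            }
        }
    }
}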

This is a simplified version of the convention we used in our application but it’s fine for demonstrating the main principles. The first thing to highlight here is the AttributeRouteModel I’m defining at the top. An AttributeRouteModel allows us to programmatically define the properties that make up a RouteAttribute that gets applied on Controllers and Actions. I’m defining one here that contains the template prefix I want to use. In a real world application, I define this template in our configuration and pass it into the RoutePrefixConvention on construction. But to keep this blog post focused I skipped those elements and hard-code it as part of the class.

The IControllerModelConvention interface defines a single void method called Apply. We use this to apply our behaviours to the Controllers in our application. The method accepts a ControllerModel parameter which gives us access to properties on each controller as the application starts up.

Now that we have the ControllerModel we use a collection on it called Selectors, which allows us to inspect the AttributeRouteModel applied to the controller. In the case where we find an AttributeRouteModel, it means the controller already has a Route applied to it. In this case we want to prepend the route defined on the Controller attribute with our prefix. We do this using the static CombineAttributeRouteModel method, which takes two AttributeRouteModels and combines them into a new model. In our example, I have used the prefix first and then applied the value that was defined explicitly on the controller. In the case of our HomeController this means that MVC now understands this to take the template “Prefix/Home” in its route definition. Where we do not find an existing AttributeRouteModel we can simply set the value of this on the selector to our defined prefix. Therefore any Controller without a route attribute will be considered to operate under the “Prefix” path.

Now when our application starts up, it gets configured using our required prefix across all controllers. In our case this means we can rely on the fact that our endpoints will all operate under a single prefix which our load balancer can use to route the traffic. Developers do not need to manually include the prefix value in the controller route and we avoid the risk that they may forget to do so. We also only need to maintain the prefix value in a single place, rather than having to update each controller if this prefix were to change in the future.

Authorization By Default Convention

Another example of a convention we wanted to apply in our APIs was that by default they would be secured to the highest level even when no AuthorizeAttribute is applied. There are various ways this might be achieved but we decided that a convention worked nicely for our requirements. By having actions secured by default it means there is no risk of a new Action being added that could inadvertently be left unsecured if the developer forgets to include an AuthorizeAttribute.

A simplified example of how we achieved this is as follows:
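A simplified sketch (the class name is illustrative, and as noted below our real code takes its policy in via the constructor rather than building one inline):

public class DefaultAuthorizationConvention : IActionModelConvention
{
    public void Apply(ActionModel action)
    {
        // Leave the action alone if access has already been controlled explicitly.
        var hasAuthorize = action.Attributes.OfType<AuthorizeAttribute>().Any()
            || action.Controller.Attributes.OfType<AuthorizeAttribute>().Any();
        var allowsAnonymous = action.Attributes.OfType<AllowAnonymousAttribute>().Any();

        if (hasAuthorize || allowsAnonymous)
        {
            return;
        }

        // Apply a restrictive "authenticated users only" policy by default.
        var policy = new AuthorizationPolicyBuilder()
            .RequireAuthenticatedUser()
            .Build();

        action.Filters.Add(new AuthorizeFilter(policy));
    }
}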

In this convention, since we want to work with Actions we have implemented the IActionModelConvention interface. Every action in our application runs through this convention, which applies the new behaviour as necessary.

Again we have an Apply method, this time accepting an ActionModel, representing the Action. First we check if we need to apply our convention to the Action. In our case, we only want to apply the AuthorizationPolicy if the Action does not have one specified or has not been explicitly marked as allowing anonymous access. In cases where these attributes have been set we assume the developer has controlled the access as required. Our convention only applies to Actions if they are missing any specific Authorize or AllowAnonymous attributes.

If we do find an action missing the attributes we create an AuthorizationPolicy and apply it with an AuthorizationFilter to the Action. Again, this example is over simplified in that our actual code defines many policies during startup, and in our case we pass a restrictive policy into the convention using its constructor. The code needed to do that is outside of what I wanted to demonstrate in this post.

For my example HomeController, the Index action does not specify any Authorization values. So by default this will now be secured by convention with a require authenticated user policy. If we access it without being authenticated we receive a 401 result back. The OpenEndpoint Action however will allow unauthenticated users since we have explicitly defined it to AllowAnonymous.

Wiring It Up

The final piece of work is to include our custom conventions so that MVC knows about them and can use them when it builds up its application model. We do this by adding the conventions to a Conventions collection on the MVC options during service registration. Here is the ConfigureServices method that I’ve used in this demo.
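Roughly like this, using the convention class names from the sketches above (MVC provides Add overloads on the Conventions collection for the specific convention interfaces):

public void ConfigureServices(IServiceCollection services)
{
    services.AddMvcCore(options =>
    {
        options.Conventions.Add(new RoutePrefixConvention());
        options.Conventions.Add(new DefaultAuthorizationConvention());
    })
    .AddJsonFormatters()
    .AddAuthorization();
}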

On the AddMvcCore extension we can work with the MvcOptions object. We simply call Add on the Conventions collection to register our new conventions.

In Summary

The convention interfaces provide a powerful and simple way to begin defining bespoke behaviour on your applications. They provide a nice single place for customisation to occur and are quite simple to work with. One consideration to keep in mind is that they might make your application confusing to new developers who expect it to work using the MVC default conventions. Developers need to be aware that custom conventions are being applied and how those may override the default behaviour of your application.

Migrating from .NET Framework to .NET Core

The journey of re-targeting an ASP.NET Core application onto .NET Core

What better way to start the new year than with some coding? In my case I’d found a bit of time during the end of my Christmas vacation to tackle an issue on allReady I’d been wanting to work on for a while. Before getting into the issue, if you want to read more about what allReady is you can read my earlier blog post where I cover contributing to it in greater detail.

allReady was first created during the early betas of ASP.NET Core (at which point it was known as ASP.NET 5). At that time, due to some dependencies which did not yet support .NET Core, the application was built on top of the full .NET Framework 4.6.

The first thing to discuss briefly is why and how we can run ASP.NET Core on the full traditional .NET framework. While it might be reasonable to assume that ASP.NET Core naturally needs to run on the new .NET Core framework, remember that .NET Core ultimately includes a subset of the larger full .NET framework API. Because of this, it is perfectly possible to run ASP.NET Core on the original 4.x version of the .NET framework. This is an advantage to those that want to make use of the new features and optimizations within ASP.NET Core, while also still utilising the libraries they know and love. This is a perfectly reasonable way to run ASP.NET Core but with an important limitation. By targeting the .NET framework you are not able to develop or host the ASP.NET application outside of Windows.

For the allReady project this had started to become a bit of a limitation. We had Mac users who were keen to contribute to the project, but were unable to do so without installing and running a Windows VM on their device. As we were getting to the end of some complex requirements for allReady’s v1 milestone, I took the opportunity to spend some time looking at retargeting the application to run on .NET Core. This turned out to be a bit more involved than I had first anticipated. However, we have succeeded and the code base has been proven to build and run on both Mac and Linux devices.

Updating the Solution

The first part of the process was to update the project.json file inside the web solution. The main update here was changing the framework specification to netcoreapp from net46.

project.json before:
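A representative fragment rather than the complete file; it originally targeted the full framework:

"frameworks": {
  "net46": { }
}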

project.json after:
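And after the change, roughly (the version numbers and imports shown here are representative):

"dependencies": {
  "Microsoft.NETCore.App": {
    "version": "1.0.3",
    "type": "platform"
  }
},
"frameworks": {
  "netcoreapp1.0": {
    "imports": [
      "dotnet5.6",
      "portable-net45+win8"
    ]
  }
}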

We had decided as a team to continue to track the .NET Core 1.0.x LTS support stream. In short, this is a stable version of the framework, under full Microsoft support. New minor patch releases are appearing to fix major bugs or security issues, but otherwise the changes are expected to be quite limited. Our project was already targeting the latest 1.0.3 SDK and latest 1.0.x libraries for each of the components.

You’ll see above that as well as changing the framework moniker I’ve added an imports section as well. This was required in our case as we have some dependencies on some libraries which don’t yet specify a dotnet Target Moniker or .NET Standard version. It’s essentially there for backwards compatibility until the ecosystem settles down.

Handling Dependencies

Upon making the required changes to project.json, it became apparent that some of the packages we depend on could no longer be restored. The first thing I did was to use Nuget Package Manager to update all of the dependencies to their latest versions to see if any of those had been updated to support .NET Core. This helped in some of the cases but still left me with a few that were not compatible. For the time being I commented those out as well as commenting out any code using those dependencies. In our case this included the following main libraries we had issues with…

  • ZXing which was being used in one place to generate a QR code image for use within the allReady phone application. This had its own dependency on System.Drawing which is not part of .NET Core.
  • LinqToTwitter which was being used in one area of the application to query for a user’s information after a sign in via Twitter.
  • GeoCoding.net which was used in a few places in order to take an address and geocode it to latitude and longitude coordinates.

With those taken care of for now (if only commenting out code was considered “fixing”), the last remaining issue for the web project was a dependency on a separate library project in our solution which contains some shared models. This library is used by both our web application and also in some Azure web jobs we have defined. As a result it needed to support both .NET Core and the .NET Framework. The solution here was to convert it to a .NET standard library. In my case I chose the lowest standard version I could find which gave me the API surface I needed, while supporting my dependent projects. This was the .NET Standard 1.3 version. The project.json for this library was changed as follows.

Before:
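A representative fragment of the original frameworks section:

"frameworks": {
  "net46": { }
}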

After:
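And after conversion to .NET Standard 1.3 (again representative; the NETStandard.Library version shown is illustrative):

"dependencies": {
  "NETStandard.Library": "1.6.0"
},
"frameworks": {
  "netstandard1.3": { }
}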

With this done, I also had to update our test library project.json in a similar way in order to support .NET Core by changing the frameworks section and updating to the latest versions of our dependencies as with the main web project. The changes in this file were very similar to what I’ve already shown, so I won’t repeat the code here.

Wrapping Missing Dependencies

With the package restore now succeeding for the web project, test project and our .NET standard project my next goal was to find solutions to the libraries we could no longer use.

ZXing was an easy decision as currently the phone application project is a bit stale and it was decided that we could simply remove the QRCode endpoint and this dependency as a result.

When I reviewed the use of LinqToTwitter it was only really abstracting one API call to Twitter behind a more fluent Linq syntax. We were using it to request a Twitter user’s profile information. The option I chose here was to write our own small service to wrap the call to the Twitter REST API using HttpClient. There are some hoops to jump through in order to establish the authentication with Twitter but this ultimately didn’t prove too difficult. I won’t go into the exact details here since this post is intended to cover the general “migration” process.

The final dependent library that wasn’t able to support .NET Core was GeoCoding.net. We were using this library to perform address to coordinate lookups via Google. On review of the usages, it was again clear that we had a whole dependency for what amounted to a single call to a Google endpoint. We were already calling another part of the Google REST API directly from a custom service class, so again, the decision was to wrap up the geocoding functionality in our own code, calling the API directly. Again, the exact details are beyond my scope here, but I may cover this in a future blog post if there’s interest.

Handling API Differences

With the problem packages removed I tried to compile the solution. I was immediately hit by a wall of errors. This was initially a little daunting, but after looking through them I could see that many were quite similar and due to changes to the .NET Core API surface.

Reflection

The first of which was where we’d been using reflection. We have a couple of places in the web app that rely on reflection for wiring up our IoC container and also in some of the test classes. In .NET Core the Type returned by Object.GetType() no longer exposes the full reflection API surface; much of that detail has moved to TypeInfo. In order to access the full type information there is a new extension method called GetTypeInfo(). It returns a TypeInfo which is similar to what Type used to provide.

Before:
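A representative example rather than the exact allReady code:

// Targeting net46 we could go straight to the Assembly from the Type.
var assembly = typeof(Startup).Assembly;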

After:
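The same hypothetical line on .NET Core:

// On .NET Core we go via GetTypeInfo() to reach the assembly.
var assembly = typeof(Startup).GetTypeInfo().Assembly;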

In this example you can see that the only change is adding in the GetTypeInfo call.

Date Formatting

We also had some code calling ToLongDateString on dates. This is no longer available so I switched to formatting the date using the ToString method instead.

Before:
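Again a representative example (the property name here is illustrative):

var displayDate = campaign.EndDate.ToLongDateString();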

After:
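Using the "D" (long date) standard format string instead:

var displayDate = campaign.EndDate.ToString("D");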

Exceptions

In a couple of places we were throwing ApplicationException which also seems to have been removed. I switched to a general Exception in this case.

Before:
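For example (the message is illustrative):

throw new ApplicationException("Unable to geocode the supplied address.");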

After:
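Which simply becomes:

throw new Exception("Unable to geocode the supplied address.");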

Final Tweaks

Once all of the compile errors were fixed we were nearly there. In order to enable the site to run directly on Kestrel (rather than via IIS) I updated the profiles in launchSettings.json.

Before:
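Originally we only had the IIS Express profile, roughly:

"profiles": {
  "IIS Express": {
    "commandName": "IISExpress",
    "launchBrowser": true,
    "environmentVariables": {
      "ASPNETCORE_ENVIRONMENT": "Development"
    }
  }
}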

After:
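After the change we had both profiles (a representative sketch; the project profile name is illustrative):

"profiles": {
  "IIS Express": {
    "commandName": "IISExpress",
    "launchBrowser": true,
    "environmentVariables": {
      "ASPNETCORE_ENVIRONMENT": "Development"
    }
  },
  "AllReady": {
    "commandName": "Project",
    "launchBrowser": true,
    "launchUrl": "http://localhost:5000",
    "environmentVariables": {
      "ASPNETCORE_ENVIRONMENT": "Development"
    }
  }
}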

This new profile basically tells VS to run the project (i.e. via dotnet.exe) – launching it on port 5000. Within Visual Studio you can choose whether to start via IIS Express or directly into the project via the dotnet.exe. New projects targeting .NET Core include these two profiles automatically, so updating it in our project felt sensible, to make the developer experience familiar.

Conclusion

With the above steps completed the project was now compiling, tests were running and the site worked as expected. Quite a few hours of work went into the process. The longest time was spent building the new service wrappers around the 3rd party REST APIs so that I could get rid of incompatible dependencies. The actual project and code changes weren’t too painful and it was just a case of working out the API changes so I knew how to modify our code.

It was an interesting experience and once we had it tested on a Mac and Linux during the NDC London code-a-thon it was very exciting to see it working. This has made it much easier for non-Windows developers to contribute to the allReady project and is just one of the great new benefits of working with .NET Core and ASP.NET Core.

A Reminder to Take Care when Registering Dependencies

I looked into a “fun” little problem yesterday where we were seeing occasional errors in some of our ASP.NET Core code which calls down into the ASP.NET Core Identity UserManager. We were getting a range of NullReferenceException and ObjectDisposedExceptions as well as various exceptions from Npgsql.NpgsqlConnection (we use Postgres rather than SQL Server in this project) stating things such as “Connection already open”.

The issue was presenting itself within some authentication and authorisation code we have which wraps and extends the ASP.NET Core Identity UserManager and SignInManager functionality. We use ASP.NET Identity over a Postgres database for our user store but include some application specific functionality with our own code. We have our own AuthenticationManager and UserManager classes, both of which take dependencies on the underlying Microsoft.AspNetCore.Identity classes.

These originally got registered in the ConfigureServices method of the Startup.cs class as follows:

services.AddSingleton<IAuthenticationManager, AuthenticationManager>();
services.AddSingleton<IUserManager, UserManager>();

The constructor for our AuthenticationManager looks a bit like this:

public AuthenticationManager(UserManager<ApplicationUser> userManager, SignInManager<ApplicationUser> signInManager)
{
   // setup our authentication manager here
}

Do you see the problem?

The issue here is the singleton registration. While our classes themselves have no state and could be shared between requests, the dependencies on Microsoft.AspNetCore.Identity.UserManager<T> and Microsoft.AspNetCore.Identity.SignInManager<T> have to be considered.

If we take a look at where these are registered within the Microsoft.AspNetCore.Identity source we can see the following:

services.TryAddScoped<UserManager<TUser>, UserManager<TUser>>();
services.TryAddScoped<SignInManager<TUser>, SignInManager<TUser>>();

They are added using the scoped lifetime which means they expect to be created once per request. They themselves depend on an IUserStore which is registered with the scoped lifetime as well.

As a result, the singleton registration of our AuthenticationManager was trying to hang onto dependencies to objects for the entire application lifetime, where those dependencies were only expected to live for the request scope. Sometimes they seemed to get a different database context and hence the “Connection already open” errors we saw. Sometimes the dependencies had been disposed of by the time they got called by our code and as such we saw the various exceptions being thrown. In some cases we seemed to still have access to the UserManager but the context underneath was null. I won’t dive into this too deeply, but it was apparent that we should really make the registration of our classes scoped as well. This way, they are created once per request, the same as their dependencies. We changed the registration code to the following:

services.AddScoped<IAuthenticationManager, AuthenticationManager>();
services.AddScoped<IUserManager, UserManager>();

This resolved the errors we were seeing immediately. It was an annoying oversight although fortunately it was fairly easy to guess at the cause. The various errors all suggested that we were having some issues with the lifetimes of the dependencies. This case has been a reminder to carefully consider the lifetimes of service registrations as it can produce some unexpected errors and behaviour.

The Microsoft documentation even includes a large warning about scoped services – “The main danger to be wary of is resolving a Scoped service from a singleton. It’s likely in such a case that the service will have incorrect state when processing subsequent requests.” – Oh how true this is!

Happy dependency injecting!

Updating an ASP.NET Core Site to the December 2016 Release

How to upgrade a site on the LTS 1.0.3 version of ASP.NET Core

I ran into an issue this week during what should have been a simple ASP.NET Core application update. I wanted to share my experience in case others run into similar problems. Also, I’m sure to be back here myself to remember this in the future!

On December 13th Microsoft released their second minor patch release for the LTS (Long Term Support) track of .NET Core. ASP.NET Core releases on two tracks depending on how cutting edge you want to be. LTS is the “safer” track, which will be supported and bug fixed during the support lifespan. The other track is FTS (Fast Track Support) which will be where new features appear. You can read more about this on the Microsoft Blog.

As you may be aware from reading my other posts, I’m contributing to an open source charity project called allReady. We’re currently using the LTS track packages and at the time of writing still targeting the full .NET framework (as opposed to .NET Core). We had applied the last patch release 1.0.1 packages in September without any major problems so I was hoping for the same experience with this patch release.

The details for the release were made available in this Microsoft blog post. If you follow the links to the release notes you will see that the ASP.NET Core updates are considered version 1.0.3. This is where the versioning starts to get a little murky in my opinion. ASP.NET Core itself has a version number (now 1.0.3) which tracks general “releases” of the framework. However, the individual packages that actually make up .NET Core and ASP.NET Core also have version numbers and revisions. Those numbers don’t track with the main release version, so it starts to get a bit confusing. You won’t for example find a package for Microsoft.AspNetCore.Mvc at version 1.0.3. The latest for that package is 1.0.2.

I’ll now step through how I upgraded our project and then discuss the issue I experienced with the EF commands for entity framework. Before starting to update the project I made sure to install the latest version of the 1.0.3 SDK from the Microsoft website.

Update project.json

This is where the first pain point came for me. The blog posts and release notes didn’t specifically list all of the packages which had been updated, nor their latest version numbers. So my initial solution was to turn to the VS Nuget Package Manager where I was hoping I could simply update all of the Microsoft packages to the latest versions. However, since the package manager lists the latest (non pre-release) versions, it was offering me the FTS 1.1.x versions. So a simple, upgrade all option was out of the question.

Next I went into the project.json manually planning to update each package by hand, allowing autocomplete to give me the latest versions. However autocomplete didn’t always seem to pick up the latest version number for me automatically and I was worried about missing something. So I reverted back to the Nuget Package Manager and went one by one through the Microsoft packages. I used the install dropdown to select the newest LTS version 1.0.x for each one. This was slow and manual but at least meant I knew what options I had and could be explicit in choosing the latest version I wanted.

Here’s a rundown of the packages from our project.json that I needed to update and the version numbers they are now on (which should be the latest LTS release). Note that our project.json may well differ from newly generated projects so you may not have all of these packages and you may even have dependencies listed that we do not.

"Microsoft.EntityFrameworkCore.SqlServer": "1.0.2",
"Microsoft.EntityFrameworkCore": "1.0.2",
"Microsoft.ApplicationInsights.AspNetCore": "1.0.2",
"Microsoft.AspNetCore.Mvc": "1.0.2",
"Microsoft.AspNetCore.Mvc.TagHelpers": "1.0.2",
"Microsoft.AspNetCore.Authentication.Cookies": "1.0.1",
"Microsoft.AspNetCore.Authentication.Facebook": "1.0.1",
"Microsoft.AspNetCore.Authentication.Google": "1.0.1",
"Microsoft.AspNetCore.Authentication.MicrosoftAccount": "1.0.1",
"Microsoft.AspNetCore.Authentication.Twitter": "1.0.1",
"Microsoft.AspNetCore.Diagnostics": "1.0.1",
"Microsoft.AspNetCore.Diagnostics.EntityFrameworkCore": "1.0.1",
"Microsoft.AspNetCore.Identity.EntityFrameworkCore": "1.0.1",
"Microsoft.AspNetCore.Server.IISIntegration": "1.0.1",
"Microsoft.AspNetCore.Server.Kestrel": "1.0.2",
"Microsoft.AspNetCore.StaticFiles": "1.0.1",
"Microsoft.AspNetCore.Cors": "1.0.1",
"Microsoft.Extensions.Configuration.Abstractions": "1.0.1",
"Microsoft.Extensions.Configuration.Json": "1.0.1",
"Microsoft.Extensions.Configuration.UserSecrets": "1.0.1",
"Microsoft.Extensions.Logging": "1.0.1",
"Microsoft.Extensions.Logging.Console": "1.0.1",
"Microsoft.Extensions.DependencyInjection.Abstractions": "1.0.1",
"Microsoft.Extensions.Options.ConfigurationExtensions": "1.0.1",
"Microsoft.VisualStudio.Web.BrowserLink.Loader": "14.0.1",
"Microsoft.AspNetCore.Mvc.WebApiCompatShim": "1.0.2",
"Microsoft.Extensions.Logging.Debug": "1.0.1",

I also had to update the dependencies for some of these in our test library, so remember to check there too.

To complete the upgrade I also adjusted the SDK version in our solution’s global.json as follows…

"sdk": {
  "version": "1.0.0-preview2-003156",
  "runtime": "clr",
  "architecture": "x86"
},

With the above changes made I was able to build and run our site via Visual Studio. Great!

It wasn’t until a couple of days later that I hit an issue. While working on a new feature, I’d updated our Entity Framework model classes and needed to create the next Entity Framework migration. I did this by running the usual command…

dotnet ef migrations add AddNotifications

After building the project I was faced with the following error:

Could not load file or assembly 'Microsoft.EntityFrameworkCore, Version=1.0.1.0, Culture=neutral, PublicKeyToken=adb9793829ddae60' or one of its dependencies. The located assembly's manifest definition does not match the assembly reference. (Exception from HRESULT: 0x80131040)

At this stage I tried a few things, none of which fixed the problem outright, although they may have contributed to the overall solution. Firstly I cleaned my solution and rebuilt, no joy. Then I wondered if Nuget had cached any incorrect versions of the package, so I cleared my local Nuget cache and tried restoring my project dependencies again. Still no joy! Finally I hopped onto the ASP.NET Core Slack channel and sought help there. It was with huge thanks to Chad Tolkien that I found the answer: he suggested a manual deletion of my bin and obj folders within the project. I did that and rebuilt the solution. Success! Finally I was able to generate a migration using the EF CLI tooling. So it seems the clean and restore steps previously hadn’t cleaned everything they needed to.

I’d love to know if there’s a better way to manage these updates currently? I’m hoping that with the final tooling release and VS 2017 things will get easier. It would be useful for example, to be able to choose which track you want to use within Nuget Package Manager. I’m not sure how that would be achieved exactly but it would distinguish the packages you really want to get the latest of within your chosen support track. It would also be handy if Microsoft blog posts about each release included specific details of each updated package and its latest version number. Having a quick reference when updating dependencies would have made my life a little easier. There are some release notes which hint at the main packages and their new version numbers, but they didn’t include all of the components that I ended up changing.

Custom ModelBinding in ASP.NET MVC Core How to HTML Encode Strings

In a previous post I showed how we could automatically HTML encode data when deserialising it from a JSON request body. In my case this was to meet some specific security requirements we had for an ASP.NET Core API we were building. This time around I will discuss a similar requirement which stated that we should also ensure we HTML encode any strings bound from the route, querystring and form data. We’ll do this by creating a custom ModelBinder.

Much like our earlier requirement, we needed to ensure that we were not storing un-encoded data containing HTML or script tags in our database. While our application escapes the data on the way out, we want to prevent anyone accidentally rendering un-escaped HTML in future applications. We have no requirement to accept HTML on our API, so we decided to ensure each value was encoded by default during binding. As before, we wanted to enable this globally so that no developer would have to take specific steps to enable it per controller or action.

Model Binding Flow

Before I jump into the solution, I’ll first explain at a high level how the model binding flow works in ASP.NET Core MVC. To understand this better I followed the steps from a previous blog post, where I discussed how we can add the MVC source solution to our code, allowing us to debug into it. With this in place I was able to step through the model binding code to watch and understand what was happening.

Model binding in principle is quite straightforward. It attempts to match up values coming in on the request with the parameters of the controller action (and, for complex types, their properties). Each value is run through the model binding flow, which looks to find a suitable binder to handle the value. To find a suitable binder, ASP.NET Core MVC uses binder providers. These providers are registered when MVC is initialised and by default there are 14 different providers. Each of these implements the IModelBinderProvider interface.

There is a provider to handle key value pairs, for example, another for complex types and another for simple types. These providers are registered in a specific order and MVC checks each provider in that order during the binding process until it finds the first one which can supply a suitable binder for the object being bound. Each binder provider has some conditions that check the binding provider context. Once those conditions are met, the provider returns a binder that can handle the binding.

Once we have a model binder available, its BindModelAsync method is called. This method is expected to handle the value of the object being bound and to return a ModelBindingResult. If the binding succeeds as expected then a Success is returned, including the final value to be bound to a property. I hope to spend more time exploring the binding process in a future post; it’s a little beyond the scope here to explain the deeper details of how everything is hooked up. For now, let’s look at how we meet the requirement above to HTML encode values during the binding process.

Let’s assume we have the following action.
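As a minimal sketch (the action name and route are placeholders of my own choosing, not the original code), it might look like so…

[HttpPost]
public IActionResult UpdateTitle(string newTitle)
{
    // We want newTitle to already be HTML encoded by this point.
    return Ok(newTitle);
}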

We want to ensure that the newTitle parameter is HTML encoded by the time we can access it within the action.

Creating a ModelBinder

The starting point for our solution was to create a model binder based on the IModelBinder interface.
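A sketch of that binder, based on the behaviour described below, might look like so (I’ve returned Task.CompletedTask so the sketch stands alone, where the original used MVC’s internal TaskCache.CompletedTask helper)…

using System;
using System.Text.Encodings.Web;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc.ModelBinding;

public class HtmlEncodeModelBinder : IModelBinder
{
    private readonly IModelBinder _fallbackBinder;

    public HtmlEncodeModelBinder(IModelBinder fallbackBinder)
    {
        if (fallbackBinder == null)
            throw new ArgumentNullException(nameof(fallbackBinder));

        _fallbackBinder = fallbackBinder;
    }

    public Task BindModelAsync(ModelBindingContext bindingContext)
    {
        // Ask the value provider for a value matching the model name.
        var valueProviderResult = bindingContext.ValueProvider.GetValue(bindingContext.ModelName);

        if (valueProviderResult == ValueProviderResult.None)
        {
            return Task.CompletedTask;
        }

        var valueAsString = valueProviderResult.FirstValue;

        // Hand null or empty strings off to the fallback binder.
        if (string.IsNullOrEmpty(valueAsString))
        {
            return _fallbackBinder.BindModelAsync(bindingContext);
        }

        // HTML encode the value and report a successful binding result.
        var encodedValue = HtmlEncoder.Default.Encode(valueAsString);
        bindingContext.Result = ModelBindingResult.Success(encodedValue);

        return Task.CompletedTask;
    }
}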

We initialize the binder by passing in an IModelBinder to act as a fallback. In my case I specifically expect to handle string values, but I don’t want to worry about the cases of null or empty strings. Those are already covered by the default SimpleTypeModelBinder provided in MVC, so I chose to pass in a binder to which I can hand off responsibility in those cases.

Within the BindModelAsync method we first call the ValueProvider.GetValue method on the value provider in the bindingContext, passing in the model name, which returns a valueProviderResult. As long as there is a value in the result, we first check whether it’s null or empty. If so, this is where we call the fallback binder’s BindModelAsync method. If we do have a suitable value, we proceed to HTML encode it before creating a success ModelBindingResult. Finally we return a completed task using the internal helper TaskCache.CompletedTask.

Creating a ModelBinderProvider

Now that we have a model binder defined we need to create a provider which will determine if the modelbinder is suited to the object being bound. Here is the code…
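A sketch of the provider, matching the description that follows, might look like so…

using System;
using Microsoft.AspNetCore.Mvc.ModelBinding;
using Microsoft.AspNetCore.Mvc.ModelBinding.Binders;

public class HtmlEncodeModelBinderProvider : IModelBinderProvider
{
    public IModelBinder GetBinder(ModelBinderProviderContext context)
    {
        if (context == null)
            throw new ArgumentNullException(nameof(context));

        // Only simple string models get our encoding binder; everything
        // else is left for the remaining providers in the list.
        if (!context.Metadata.IsComplexType && context.Metadata.ModelType == typeof(string))
        {
            // The SimpleTypeModelBinder acts as the fallback for null/empty strings.
            return new HtmlEncodeModelBinder(new SimpleTypeModelBinder(typeof(string)));
        }

        return null;
    }
}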

This provider is pretty simple. We must implement the GetBinder method on the IModelBinderProvider interface. We use the Metadata on the ModelBinderProviderContext to determine whether we can provide a suitable binder. This metadata includes some helper properties, such as the IsComplexType flag, to help us determine if we can provide binding for the object. In this case we are looking only for strings. If the object being bound is a string, then we return our custom HtmlEncodeModelBinder. Otherwise we return null. When we return null, the next binding provider is given the chance to provide a binder, and this continues until a suitable binder is found. You’ll notice that we pass in a new SimpleTypeModelBinder which will act as our fallback for any null or empty string cases we encounter during binding.

Now that we have a binder and a provider, the final step is to add this provider to the list of binding providers. Since these are executed in a set order we also need to place our provider in the right place. I’ve achieved this using the following extension to the MVC options.
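A sketch of such an extension (the method name here is my own choice) might look like so…

using System.Linq;
using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.Mvc.ModelBinding.Binders;

public static class MvcOptionsExtensions
{
    public static void UseHtmlEncodeModelBinding(this MvcOptions opts)
    {
        // Locate the built-in SimpleTypeModelBinderProvider so that we can
        // insert our provider immediately before it.
        var simpleTypeProvider = opts.ModelBinderProviders
            .FirstOrDefault(x => x.GetType() == typeof(SimpleTypeModelBinderProvider));

        if (simpleTypeProvider == null)
        {
            return;
        }

        var index = opts.ModelBinderProviders.IndexOf(simpleTypeProvider);
        opts.ModelBinderProviders.Insert(index, new HtmlEncodeModelBinderProvider());
    }
}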

What we are doing here is using LINQ to find the SimpleTypeModelBinderProvider in the list of ModelBinderProviders. Our binder provider needs to run just before the simple type binder to ensure we get the opportunity to handle string types with our HTML encoding logic. If we placed it after the SimpleTypeModelBinderProvider we would find that the binding flow never reached our code, as the binding would already have been handled. We then get the index of the SimpleTypeModelBinderProvider and, using that index, insert our binder provider into the list at that position. Now, when MVC binding occurs, our binder provider will be part of the binding process. If we inspect the ModelBinderProviders while debugging, it should now look like this:

ASP.NET Core ModelBinders List

You can see our new custom HtmlEncodeModelBinderProvider listed before the SimpleTypeModelBinderProvider.

Finally, with everything complete, we can do the final wiring up. We call our options extension when adding MVC within the ConfigureServices method in Startup.cs.
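A minimal sketch of that wiring, reusing the extension method name I chose above, looks like so…

public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc(options =>
    {
        // Insert our HTML encoding binder provider into the MVC options.
        options.UseHtmlEncodeModelBinding();
    });
}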

Now when the newTitle parameter is bound on our example action, any HTML tags will be safely encoded. As with all of my posts, I’ve taken the best approach I could think of to implement this. I welcome any comments and suggestions to improve this code or correct any mistakes!

R.I.P project.json – Out with the new, in with the old An early look at the VS 2017 csproj changes

Okay; the title of the post is a bit tongue in cheek! Since it was first announced that the new project.json format (developed by the ASP.NET Core team) was going to be retired in favour of the more traditional csproj file, the .NET community have had some very strong opinions. These have been shared on Twitter, in blog posts, on GitHub and via any other channels people could find. I will admit that when I heard about the change, I initially had some of the same views and concerns. Why would Microsoft take away my beloved project.json? However, I decided to wait it out and see what Microsoft actually produced before passing judgement.

A little bit of history

Let me take a step back here and try to explain a little bit of the history as I have understood it. In the early beta timeframe for ASP.NET Core (then called ASP.NET 5) the ASP.NET and .NET teams were working independently of one another. The .NET team were working heavily on the API surface for .NET Core, whilst the ASP.NET team were working on the ASP.NET Core application platform on top of the API the .NET team were making. The tooling to work with ASP.NET Core was in an early preview and at the time we had DNVM and DNU commands. These have since been consolidated and morphed to form the dotnet CLI.

In order to support development of ASP.NET projects, the team decided to take the opportunity to develop a brand new project file format. One which would be simpler and address many of the issues the community had experienced when working with csproj based projects. At the time, nothing existed specifically for .NET Core projects and it was felt that since ASP.NET Core represented a new beginning for the platform, it was as good a time as any to make changes. During the betas and release candidates people started to get their hands on the new project.json and xproj based solutions when building their web applications. The response was indeed very positive. project.json provided autocomplete of dependencies, ease of reading and a much simpler overall structure.

Personally, I was very pleased when project.json was born. I could easily understand it and it became a regular reference point to review my dependencies and manage my projects.

However, it was during the release candidate period that decisions started to get made around a wider .NET project format for the other application types. Customers were starting to share their concerns around project.json and the fact that it was not supported within MSBuild. What would this mean for large monolith projects, and how would they even begin to migrate them to .NET Core? And, to be fair, these were valid concerns. Starting with greenfield ASP.NET Core projects was fantastic. There was no need to concern ourselves with the past and the new project.json was easy to get to grips with. But working with an existing project would not necessarily be a simple conversion without some degree of work. So the announcements started to be made that project.json would be retired in favour of a more backwards-compatible, MSBuild-ready csproj format. It was too late in the day for this work to be completed before the RTM of ASP.NET Core 1.0, but we were warned that it would be worked on for release alongside Visual Studio 2017.

And that brings us to today: with the announcement of the release candidate for Visual Studio 2017 we can now get our hands on projects using the new (or should I say old) csproj project format. I’ve taken an early peek at the file to see what has changed.

Comparing project.json (VS 2015) with csproj (VS 2017 RC)

Microsoft have assured us while working on the change that they expected to port many of the improvements that the project.json file gave developers over to the improved csproj format.

Comparing project.json and the new csproj

In the screenshot above (hard to see due to the size) I’ve opened a default new Web Application project in VS 2015 (left) and VS 2017 (right). Side-by-side, the obvious difference is that project.json is JSON and the csproj is XML. You will notice that the number of lines in the files is not too different: project.json comes in at 65 lines and the csproj comes in at 77 lines. That is already a significant reduction from a traditional csproj file.

Here are the complete files:

The other thing which stands out to me is the readability. Despite the work Microsoft have done to reduce noise in the csproj file, I still find it much harder to scan than the project.json. The XML tags draw my eye away from the detail that I actually want to consume.

At the time of writing I’m still very new to working inside VS 2017 and indeed on my computer it’s proving quite buggy. For example, a solution that I created inside VS 2017 and which had been working fine (1 web application and 1 test class library) no longer re-opens after I saved and closed it. Suffice to say, I won’t be using Visual Studio 2017 for production work at this time and may even remove it entirely, as it seems to be affecting my experience inside VS 2015 too.

One nice advantage of the csproj file over the project.json is that we can now include references to full (non-Core) .NET projects as well. Previously that integration was not available.

Something I do miss, however (unless it’s just not working for me), is proper autocomplete of packages and versions. With the project.json I could start typing the name of a dependency and Visual Studio would show me potential autocomplete options, and I could also choose from the available versions. This seems to be absent in the csproj file; personally I only saw basic IntelliSense for some, but not all, of the XML tags.

Migrating from project.json

Migration from project.json can be achieved in one of two ways. The easiest is to open the existing xproj/project.json project inside Visual Studio 2017. The IDE will detect the project format and prompt for a migration. The other option is to run the dotnet migrate command directly from the command line. This should result in the same conversion.
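For example, run from the folder containing the xproj/project.json…

dotnet migrate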

Migration from project.json

In a quick test for me, the VS migration seemed to work quite well, although that was a single default web application project with basic dependencies. In theory it should work just as well for larger projects. Perhaps when we come to migrate the allReady project which I contribute to, we’ll see if there’s more to it.

Having done this with the project.json solution created in VS 2015 (as appeared above) we get the following csproj output.

Inside the project.json we have our main dependencies section and inside that we define the package and version. This is pretty clear and easy to read. Inside the csproj each package reference is added inside an ItemGroup element. It’s more verbose than the project.json.
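As illustrative fragments only (the package name and version are examples, not the full file contents), a dependency in project.json looks like so…

"dependencies": {
  "Microsoft.AspNetCore.Mvc": "1.0.1"
}

…and after migration the csproj equivalent looks like so (depending on the tooling build, the version may appear as an attribute, as here, or as a child Version element)…

<ItemGroup>
  <PackageReference Include="Microsoft.AspNetCore.Mvc" Version="1.0.1" />
</ItemGroup>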

Much of the content is similar and when you compare the files you can generally find where each part has been migrated. However you’ll need to remember the more complex XML schema in order to edit the file manually. Indeed, Microsoft seem to be pushing us away from this in favour of IDE tooling and the command line. I fear though, that those are never going to be as quick to work with as we’ve been used to with a quick edit of the project.json file.

Differences between the original csproj and the new csproj

There are some other notable changes between a traditional csproj file and the newer csproj format. Firstly, they have gotten rid of the use of GUIDs within the file, so that it is much more human readable. That’s a good change and one I’m happy to see.

We are no longer required to include all of the files within the csproj file in order for them to be considered part of the project. project.json gave us a much more favourable, include by default, behaviour and this has now been made available inside csproj. It’s ever so slightly different as we must use a wildcard style include line to tell the project to behave this way. New projects created inside VS 2017 have this by default. When working with teams of developers, the csproj file was a common area of merge conflicts. We should no longer have so many issues since the project file will not change simply because we’ve added new files to the project.
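In the projects I’ve looked at, that wildcard include looks roughly like so (the exact globs may vary between tooling builds)…

<ItemGroup>
  <Compile Include="**\*.cs" Exclude="bin\**;obj\**" />
  <EmbeddedResource Include="**\*.resx" Exclude="bin\**;obj\**" />
</ItemGroup>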

Package references are now integrated into csproj so this single file also includes any libraries our project is dependent on and any projects we reference. It’s nice to have everything in one place as we’ve become used to with the project.json format.

Another important change is that we can now edit the csproj file whilst the project is loaded. This was not previously possible and meant that any manual changes were slow to make as we had to first unload the project. Granted, it was rare that I wanted to manually edit the file in the full .NET framework days. To edit the csproj file we can simply right click on the project without unloading it first.

Edit csproj inside VS 2017

One gripe I experienced at this point is a warning about inconsistent line endings every time I edit the csproj file.

Inconsistent line endings

Other project differences in VS 2017

As well as the direct differences inside the file, there’s a few initial tooling differences I wanted to highlight between VS 2015 and VS 2017 that I’ve noticed.

The main one I’ve seen so far is the consolidated dependencies folder. Before, we had a “References” folder which included any dependent libraries we added via NuGet and any project references. We then had a “Dependencies” folder which included Bower dependencies. Inside Visual Studio 2017 these have been placed together inside a single “Dependencies” container. Project references have their own sub-container to separate them from the NuGet dependencies.

Dependencies folder in VS 2017

Initial Impressions

I definitely don’t find the XML as readable as the JSON for reviewing the configuration of my project. This will improve with time as I get used to it, but there’s still too much noise in the file for my liking. It’s certainly a big improvement on the traditional csproj format though and the wildcard include of files is an important enhancement.

Personally, I still much prefer the project.json format and, whilst I understand the business case that led Microsoft to their change of course, I feel it’s a shame the MSBuild system couldn’t have moved with the times rather than dropping back to the XML-based project file.

The lack of autocomplete for dependencies bothers me too. This was really handy and I could quickly build up my dependencies without having to open up the Nuget package manager each time. It feels like we’re now being forced down that route which proves to be a bit slower and less efficient.

It’s still very early days and I need to spend more time with the new version of csproj in a final stable version of VS 2017 before passing a final judgement. We also have to remember that the .NET SDK tooling is not quite fully baked yet either. At this early point I’m running into quite a few issues running VS 2017 so it’s hard to fully appreciate the final experience.

The Microsoft statements suggest that in reality the tooling and command line interface should negate the need to spend much, if any, time editing the csproj manually. Once everything is complete we will be in a better place to judge for ourselves. I can’t help but assume things will end up being slower overall and that I’ll end up missing the ease of the project.json file.

Update 23rd Nov 2016

After reviewing https://blogs.msdn.microsoft.com/dotnet/2016/10/19/net-core-tooling-in-visual-studio-15/ I can see that the format for the new csproj that I have experienced in VS 2017 RC doesn’t match the example of what I presume is the end goal. So it’s entirely possible that the structure will evolve even more before release. The example in the MSDN blog does look cleaner and much less cluttered so I will watch this space with interest.

Update 7th March 2017

Further reflection on the changes and the way it has been handled in my post – Visual Studio 2017 is Released – Reflecting on the loss of project.json and looking ahead to the new VS 2017 experience

Loading Pin on Bing Maps from ASP.NET Core MVC Data

As I’ve covered a few times previously in my blog I’m really enjoying working on the allReady project which is run by the Humanitarian Toolbox non-profit organisation. One of the great things about this project from a personal perspective is the chance to learn and develop my skills, whilst also contributing to a good cause.

Recently I picked up an issue which was not in my normal comfort zone, where the requirement was to load a Bing map showing a number of pins relating to request data coming from our MVC view model. I’m not very experienced with JavaScript and tend to avoid it whenever possible, but in this case I did need to use it to take data from my ASP.NET Core model to then populate the Bing maps SDK. As part of the requirement we needed to colour code the pins based on the status of the request.

My starting point was to have a look at the SDK documentation available at http://www.bing.com/api/maps/mapcontrol/isdk. After a bit of reading it looked possible to meet the requirement using the v8 SDK.

The first step was to update our Razor view page to include a div where we wanted to display the map. In our case I had decided to include a full-width map at the bottom of the page, so my containing div was as follows:

<div id="myMap" style="position:relative;width:100%;height:500px;"></div>

The next step was to include some JavaScript on the page to use the data from our view model to build up the pushpin locations to display on the map. Some of the existing map logic we have on the allReady project is stored inside a site.js file. This means we don’t need to include too much inline code on the page itself. In my case the final code was as follows:

@section scripts {
    <script type='text/javascript'
            src='https://www.bing.com/api/maps/mapcontrol?callback=GetMap'
            async defer></script>
    <script type='text/javascript'>
        function GetMap() {
            renderRequestsMap("myMap", getLocationsFromModelRequests());
        };

        function getLocationsFromModelRequests() {
            var requestData = [];
            @foreach (var request in Model.Requests)
            {
                @:var reqData = {lat:@request.Latitude, long:@request.Longitude, name:'@request.Name', color:'blue'};

                if (request.Status == RequestStatus.Completed)
                {
                    @:reqData.color = 'green';
                }

                if (request.Status == RequestStatus.Canceled)
                {
                    @:reqData.color = 'red';
                }

                @:requestData.push(reqData);
            }
            return requestData;
        }
    </script>
}

This renders two script blocks inside our master _layout page’s scripts section which is rendered at the bottom of the page body. The first script block simply brings in the Bing map code. The second block builds up the data to pass to our renderRequestsMap function in our site.js code (which we’ll look at later).

The main function here is my getLocationsFromModelRequests code which creates an empty array to hold our requestData. I loop over the requests in our MVC view model and first load the latitude and longitude information into a JavaScript reqData object. We set a name which will be displayed under the pin using the name of the request from our Model. We also set the color on this object to blue which will be our default colour for pending requests when we draw our map.

I then update the color property for our two other possible request statuses. Green for completed requests and red for cancelled requests. With the object now complete we push it into the requestData array. This forms the data object we need to pass into our renderRequestsMap function in our site.js.

The relevant portions of the site.js that result in producing a final map are as follows:

var BingMapKey = "ENTER_YOUR_KEY_HERE";

var renderRequestsMap = function(divIdForMap, requestData) {
    if (requestData) {
        var bingMap = createBingMap(divIdForMap);
        addRequestPins(bingMap, requestData);
    }
}

function createBingMap(divIdForMap) {
    return new Microsoft.Maps.Map(
        document.getElementById(divIdForMap), {
        credentials: BingMapKey
    });
}

function addRequestPins(bingMap, requestData) {
    var locations = [];
    $.each(requestData, function (index, data) {
        var location = new Microsoft.Maps.Location(data.lat, data.long);
        locations.push(location);
        var order = index + 1;
        var pin = new Microsoft.Maps.Pushpin(location, { title: data.name, color: data.color, text: order.toString() });
        bingMap.entities.push(pin);        
    });
    var rect = Microsoft.Maps.LocationRect.fromLocations(locations);
    bingMap.setView({ bounds: rect, padding: 80 });
}

The renderRequestsMap function takes in the id of the containing div for the map and then our requestData array we built up in the Razor view. First it calls a small helper function which creates a Bing map object targeting our supplied div id. We then pass the map object and the request data into addRequestPins.

addRequestPins creates an array to hold the location data which we build up by looping over each item in our request data. We create a Microsoft.Maps.Location object using the latitude and longitude and add that to the array (we’ll use this later). We then create a Microsoft.Maps.Pushpin which takes the location object and then a pushpin options object. In the options we define the title for the pin and the color. We also set the text for the pin to a numeric value which increments for each pin we’re adding. That way each pin has a number which corresponds to its position in our list of requests. With all of the pushpin data populated we can push the pin into the map’s entities array.

Once we’ve added all of the pins the final step is to define the view for the map so that it centers on and displays all of the pins we have added. I’ve achieved that here by defining a rectangle using the Microsoft.Maps.LocationRect.fromLocations helper function. We can then call setView on the map object, passing in that rectangle as the bounds value. We also include a padding value to ensure there is a little extra space around the outlying pins.

With these few sections of JavaScript code when we load our page the map is displayed with pins corresponding to the location of our requests. Here is what the final map looks like within our allReady application.

Resulting Bing Map