Things I’ve Learnt This Week (19th February)

Week 4 of my series, sharing things I’ve learnt, read, watched and listened to, in the pursuit of expanding my knowledge about software development. A slightly shorter set of links this week; things have been fairly busy, so I’ve had less time to keep up with all of the fantastic content.

Things I’ve Learned

That I enjoy technical speaking! This week, I delivered a talk to a room of developers about ASP.NET Core. It’s a popular topic, which I think generated the interest and high attendance. I’m quite shy and an introvert by nature, and I generally hate any form of public speaking, so presenting a talk is way out of my comfort zone. However, it’s something I’ve been keen to work on; it’s a great way to share information and helps me learn as well. I had the usual nerves leading up to the talk, but I’d practised and refined it over a number of rehearsals, to the point that I was confident in what I was delivering. The talk also included a 30 minute live demo, which thankfully worked perfectly thanks to many practice runs. As soon as I got going I started to feel better, and by the end I was on a bit of a natural high. I’ve had some very nice feedback from some of the attendees, and I’m encouraged to work on future talks as well as hopefully sharing this one with a wider audience in the future. If other developers preparing a technical talk asked me for advice, it would be: practice, practice, practice! Having gone through my slides, rehearsing the full talk at least 10 times, I knew what I was going to say, and that allowed me to present confidently.

Things I’ve Read

Things I’ve Listened To

Things I’ve Watched


A Reminder to Take Care when Registering Dependencies

I looked into a “fun” little problem yesterday where we were seeing occasional errors in some of our ASP.NET Core code which calls down into the ASP.NET Core Identity UserManager. We were getting a range of NullReferenceExceptions and ObjectDisposedExceptions, as well as various exceptions from Npgsql.NpgsqlConnection (we use Postgres rather than SQL Server in this project) stating things such as “Connection already open”.

The issue was presenting itself within some authentication and authorisation code we have which wraps and extends the ASP.NET Core Identity UserManager and SignInManager functionality. We use ASP.NET Identity over a Postgres database for our user store, but include some application-specific functionality in our own code. We have our own AuthenticationManager and UserManager classes, both of which take dependencies on the underlying Microsoft.AspNetCore.Identity classes.

These were originally registered in the ConfigureServices method of the Startup.cs class as follows:

services.AddSingleton<IAuthenticationManager, AuthenticationManager>();
services.AddSingleton<IUserManager, UserManager>();

The constructor for our AuthenticationManager looks a bit like this:

public AuthenticationManager(UserManager<ApplicationUser> userManager, SignInManager<ApplicationUser> signInManager)
{
   // setup our authentication manager here
}

Do you see the problem?

The issue here is the singleton registration. While our classes themselves have no state and could be shared between requests, the dependencies on Microsoft.AspNetCore.Identity.UserManager<T> and Microsoft.AspNetCore.Identity.SignInManager<T> have to be considered.

If we take a look at where these are registered within the Microsoft.AspNetCore.Identity source we can see the following:

services.TryAddScoped<UserManager<TUser>, UserManager<TUser>>();
services.TryAddScoped<SignInManager<TUser>, SignInManager<TUser>>();

They are added using the scoped lifetime which means they expect to be created once per request. They themselves depend on an IUserStore which is registered with the scoped lifetime as well.

As a result, the singleton registration of our AuthenticationManager was trying to hold onto its dependencies for the entire application lifetime, when those dependencies only expected to live for the request scope. Sometimes they seemed to get a different database context, hence the “Connection already open” errors we saw. Sometimes the dependencies had been disposed of by the time they got called by our code, and as such we saw the various exceptions being thrown. In some cases we seemed to still have access to the UserManager, but the context underneath was null. I won’t dive into this too deeply, but it was apparent that we should really make the registration of our classes scoped as well. This way, they are created once per request, the same as their dependencies. We changed the registration code to the following:

services.AddScoped<IAuthenticationManager, AuthenticationManager>();
services.AddScoped<IUserManager, UserManager>();

This resolved the errors we were seeing immediately. It was an annoying oversight, although fortunately it was fairly easy to guess at the cause. The various errors all suggested that we were having issues with the lifetimes of the dependencies. This case has been a reminder to carefully consider the lifetimes of service registrations, as getting them wrong can produce some unexpected errors and behaviour.

The Microsoft documentation even includes a large warning about scoped services – “The main danger to be wary of is resolving a Scoped service from a singleton. It’s likely in such a case that the service will have incorrect state when processing subsequent requests.” – Oh how true this is!

Happy dependency injecting!


Things I’ve Learnt This Week (5th February)

I’m keeping up my plan (week 2 yay!) to record and share things I’ve learned during the last week. Less from me this week as it’s been fairly hectic and I’ve been feeling a bit unwell.

Things I’ve Learned

Identity Server 4

As part of my work for Humanitarian Toolbox I’ve been actively investigating Identity Server 4 as an option to handle our authentication. The allReady application currently uses ASP.NET Core Identity within the application to support login and user management. As the product nears its v1 release, discussions have begun around the use cases for the application, including the potential for multi-tenancy and considerations around storage of user accounts. This led Richard Campbell to suggest we look into Identity Server 4 to help with this identity flow.

I was lucky enough to have an audience with Brock Allen and Dominick Baier this week to chat through some of the basics about Identity Server. It was really useful to chat with them both and I want to thank them for offering their time and support to the project. One of the key take-aways for me from the call was getting a better understanding of where Identity Server fits into the puzzle. It’s about authentication and helping with the protocol side of OAuth2 / OpenID Connect communications. It’s not a user management / user store product, although it does sit nicely on top of ASP.NET Core Identity 3 as an option.

As I continue investigating and testing Identity Server 4, I hope to put together some more detailed posts about how we’re using it and what I learn along the way.

Things I’ve Read

In no particular order, here are some of the blogs and posts that I’ve read this week.

Things I’ve Listened To

Things I’ve Watched


Things I’ve Learnt This Week (29th January)

Whether this will become a regular thing, I’m not sure. But starting this week I’ve been keeping notes on the things I’ve learned, problems I’ve faced and resources that I’ve read, watched or listened to.

I try to consume as much information as possible about ASP.NET and development in a continual drive to learn more and get better at what I do. This includes listening to a regular set of podcasts on my daily commute, reading any blog posts that I can find that relate to things I do or may be doing in the future, and watching videos online. My current focus is around ASP.NET Core so a bulk of the materials I am reading tend to be focused in that area.

I won’t go into explicit details in these posts, as realistically I won’t have time. But I hope to highlight key points of information I have found useful and to share links to things I’ve learned from, hopefully so that others sharing my passion can save some time. It’s also a shameless way to help me remember things as my brain will only hold information for so long!

Things I’ve Learned

Not an exhaustive list (as we’re always learning and that’s one of the things I love about development) but here are a few key things which came to mind after the week has ended. I’ll contain this section to small snippets of information that do not generally warrant a longer, dedicated post.

ASP.NET Core RTM SDK Tooling

I picked up on a point that Damian Edwards mentioned on the weekly ASP.NET community standup this week around the final SDK tooling, where I thought I heard him say that for RTM tooling we would have to be using VS 2017 once it’s released. I must admit I hadn’t realised this or considered the implications of the move to a refined csproj (from project.json) for ASP.NET Core.

I tweeted Damian to clarify this and he was kind enough to answer my questions. The outcome, as I’ve interpreted it, is that there will indeed be no supported RTM tooling for ASP.NET Core on Visual Studio 2015. The tooling we have now, which is preview tooling, will remain available, but unsupported. To get a supported ASP.NET Core tooling experience, developers will need to move to VS 2017 or use VS Code.

The nature of the all-new csproj format is that it cannot/will not be implemented in VS 2015. Any projects which are opened in VS 2017 will auto-migrate to the newer csproj format, and after that can no longer be developed in VS 2015. It also seems that when ASP.NET Core 2.x lands, it will support csproj and VS 2017 only.

I was a bit shocked to learn (and perhaps I was just slow on the uptake) that this was the case. I had assumed we might get RTM tooling for ASP.NET Core on VS 2015, since people have adopted this new platform and will be forced into an upgrade if they want to continue with the full IDE and proper support. Working on an open source ASP.NET Core project as I do, this means we have to think carefully about when / if we bite the bullet and force a VS 2017 / VS Code only experience by upgrading the project.

Setting cache expiry for static files using OWIN

This week I needed to ensure that JavaScript and CSS files served via our site had a suitably long cache expiry to help optimise page load times. I’ve used middleware in ASP.NET Core a fair bit, but not touched the ASP.NET 4.x OWIN code much. Our project was using the Microsoft.Owin.StaticFiles.StaticFileMiddleware for serving the static files, and it turned out that the change to add the expiry header was very similar to code I’d used in ASP.NET Core for a different, but similar requirement.

In our case, all I had to do was ensure that the StaticFileOptions passed into the StaticFileMiddleware included an OnPrepareResponse action to set the Expires header, like so.

return new StaticFileOptions
{
	OnPrepareResponse = (ctx) =>
	{
		ctx.OwinContext.Response.Expires = DateTimeOffset.UtcNow.AddYears(1);
	}
};
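
For context, here’s a minimal sketch (assuming a typical Katana Startup class; the names here aren’t from our actual project) of passing those options into the static file middleware in an OWIN pipeline:

using System;
using Microsoft.Owin.StaticFiles;
using Owin;

public class Startup
{
	public void Configuration(IAppBuilder app)
	{
		// Serve static files with the one year cache expiry configured above.
		app.UseStaticFiles(new StaticFileOptions
		{
			OnPrepareResponse = (ctx) =>
			{
				ctx.OwinContext.Response.Expires = DateTimeOffset.UtcNow.AddYears(1);
			}
		});
	}
}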

Misc

  • In VS 2017 we will be able to remote debug ASP.NET Core over SSH.

Things I’ve Read

In no particular order here’s some of the blogs and posts I read this week.

Things I’ve Listened To

Things I’ve Watched

  • Domain-Driven Design: The Good Parts – Jimmy Bogard – I’m keen to understand DDD better and this was a nice talk on some of the concepts. I’d still like to find some more code-based examples, but I felt this provided some good foundations.
  • ASP.NET Community Standup – I listen to this weekly to catch up on what’s happening with ASP.NET Core.
  • Public speaking with Scott Hanselman, Kendra Havens, Maria Naggaga Nakanwagi, Kasey Uhlenhuth, and Donovan Brown – This really came at a great time as I start to work on my public speaking. I want to be able to share my passion for ASP.NET Core with others, and while I’ve done a few smaller talks at work I want to build up the level, quality and quantity of speaking I do. I will continue building my confidence in smaller groups at work, but I would like to get to the point where I can speak to wider audiences of strangers. I have my next talk on ASP.NET Core nearly finalised for a group of developers at work.


CQRS with Mediatr and ASP.NET Core: Implementing basic CQRS with ASP.NET Core

I was first introduced to the Mediatr library when I started contributing to the allReady project. It is now being used quite extensively within that application. It has proven very useful in decoupling code and separating concerns. Contributors to the project have recently worked through a good chunk of the codebase and moved many database commands and queries over to the Mediatr request/response pattern. This has allowed us to move away from a large data access wrapper to multiple handlers that each clearly handle one function and which are much easier to maintain. This has led to smaller, more testable classes and made the code easier to read as a result.

CQRS Overview

Before going into Mediatr specifically I feel it’s worth briefly talking about Command Query Responsibility Segregation or CQRS for short. CQRS is a pattern that seeks to separate the code and models which perform query logic from the code and models which perform commands such as an insert or update. In each case the model to define the input and output usually differs. By separating the commands and queries it allows the input/output models to be more focused on the specific task they are performing. This makes testing the models simpler since they are less generalised and are therefore not bloated with additional code. Rather than returning an entire database model, a query response model will usually contain only a subset of a table’s fields and possibly data from many related objects, all needed to form a particular view. The input model for a query may be very small. Commands on the other hand will usually require larger input models which more closely map to a full database table and have slimmer response models. Commands may perform some business logic on the properties in order to validate the object before saving it into a database. By contrast the models used for a query will generally contain less business logic.
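
As a rough illustration of this split (hypothetical types, not drawn from any particular codebase), a query pairs a tiny input with a view-shaped output, while a command carries a fuller input:

// Hypothetical query: small input, view-shaped output.
public class GetUserSummaryQuery
{
	public int UserId { get; set; }
}

public class UserSummaryViewModel
{
	public string DisplayName { get; set; }
	public int OpenTaskCount { get; set; } // composed from related data, not a single table
}

// Hypothetical command: fuller input mapping closely to the table, slim result.
public class UpdateUserCommand
{
	public int UserId { get; set; }
	public string Forename { get; set; }
	public string Surname { get; set; }
	public string Email { get; set; }
}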

As with any pattern, there are pros and cons to consider. Some may feel that the complexity added by having to manage different models may outweigh the benefits of separating them. Also, as with all patterns, the concept can be taken too far and start to become a burden on productivity and readability of the code. Therefore the degree to which one uses the CQRS pattern should be governed by each use case. If it’s not providing value, then don’t use it!

Coming back to the allReady project, the approach taken there has been to separate the querying of data used to build the view models from the commands used to update the database. Queries occur far more often than commands, as each page load will need to build up a view model, often with calls to the database to pull in relevant data. By keeping the queries distinct from the commands we can manage the exact shape of the input as well as the size of the data being returned. Queries need to perform quickly since they have a direct effect on user experience and page load times. Keeping the models as slim as possible and only querying for the required database columns helps the overall performance.

Back to Mediatr

The Mediatr library provides us with a messaging solution and is a nice fit for introducing some concepts from the CQRS pattern into our code. In allReady it has allowed the team to greatly simplify the controllers; in many cases they now have a single dependency on Mediatr, which is injected by the built-in ASP.NET Core dependency injection. The MVC actions use Mediatr to send messages for the data they need to populate the view (queries) or to perform actions that update the database (commands).

Mediatr has the concept of handlers, which are responsible for dealing with a query or command message. A handler is set up to handle a particular message, which will contain the input needed for the command or query. A query message will usually need only a few properties, perhaps just an id of the object to query for. A command message may contain a more complete object with all of the model’s properties that need to be updated by the handler.

Using Mediatr with ASP.NET Core

Using Mediatr in an ASP.NET Core project is pretty straightforward. There are a couple of steps required in order to set things up.

Firstly we need to bring in the Mediatr package from NuGet. The quickest way is to use the package manager console by issuing the command “Install-Package MediatR”. At the time of writing the current version is 2.0.2.

Now that we have Mediatr added to our project we need to register its classes with the ASP.NET Core Dependency Injection (DI) container. The exact way you do this will depend on which DI container you are using. I’m going to show how I’ve got it working in ASP.NET Core with the default container. I ended up pretty much following a great Gist that I found. It got me started with registering Mediatr and its delegate factories, so all credit to the author.

Within the ConfigureServices method of the Startup.cs class, I added the following code to register Mediatr.

services.AddScoped<IMediator, Mediator>();
services.AddTransient<SingleInstanceFactory>(sp => t => sp.GetService(t));
services.AddTransient<MultiInstanceFactory>(sp => t => sp.GetServices(t));
services.AddMediatorHandlers(typeof(Startup).GetTypeInfo().Assembly);

First I add the Mediatr component itself. There are also two delegate types for the Mediatr factories which must be registered. The final line calls an extension method which will look through the assembly and ensure that any class which is a type of IRequestHandler or IAsyncRequestHandler is registered. By reflecting through the assembly in this way we avoid having to manually map each handler in DI when we create it.

public static class MediatorExtensions
{
	public static IServiceCollection AddMediatorHandlers(this IServiceCollection services, Assembly assembly)
	{
		var classTypes = assembly.ExportedTypes.Select(t => t.GetTypeInfo()).Where(t => t.IsClass && !t.IsAbstract);

		foreach (var type in classTypes)
		{
			var interfaces = type.ImplementedInterfaces.Select(i => i.GetTypeInfo());

			foreach (var handlerType in interfaces.Where(i => i.IsGenericType && i.GetGenericTypeDefinition() == typeof(IRequestHandler<,>)))
			{
				services.AddTransient(handlerType.AsType(), type.AsType());
			}

			foreach (var handlerType in interfaces.Where(i => i.IsGenericType && i.GetGenericTypeDefinition() == typeof(IAsyncRequestHandler<,>)))
			{
				services.AddTransient(handlerType.AsType(), type.AsType());
			}
		}

		return services;
	}
}

The AddMediatorHandlers method first finds all class types in the assembly. It loops through each class and gets its interfaces. If any of the interfaces are an IRequestHandler or IAsyncRequestHandler then we add a transient mapping to the services collection.

If you need further details or samples for registering Mediatr with a different DI container, I recommend you check out the wiki on GitHub, which contains some setup guidance and links to samples.

Messages and Handlers

The pattern we’ve employed in allReady is to use the Mediatr handlers to return the ViewModels needed by our actions. An action will send a message of the correct type to the Mediatr instance and expect a ViewModel in return. All of the logic to handle the DB queries which fetch the data needed to build up the view model is contained within the handler. We also use Mediatr to issue and handle commands for HTTP POST/PUT/DELETE request actions. These actions will often need to update a record in the database. We send the created/updated object in the message and a handler picks it up, processes it and returns a success or failure result back to the action.

You can also chain Mediatr handlers by having a handler send out its own message, which allows you to compose queries to get the data you need. For example, if you have a handler which reads a user record from a database, the same user model may be needed as part of multiple view models. Rather than code the same database query each time within each handler, you can place your data access query inside a single handler. This handler can then return the user data to any other handler which sends a message for the user data. This allows us to adhere to the don’t repeat yourself principle by writing the code and logic only once. We can also test that logic to ensure that it works as expected and be confident that everyone using it can expect consistent responses.
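
Here’s a minimal sketch of that kind of chaining. The profile types are hypothetical, and the UserQuery / UserViewModel referenced here are the message and view model defined a little later in this post:

// Hypothetical message and view model for a profile page.
public class ProfilePageQuery : IAsyncRequest<ProfilePageViewModel>
{
	public int UserId { get; set; }
}

public class ProfilePageViewModel
{
	public UserViewModel User { get; set; }
}

// Hypothetical handler which composes another query via Mediatr rather
// than duplicating the user lookup logic.
public class ProfilePageQueryHandlerAsync : IAsyncRequestHandler<ProfilePageQuery, ProfilePageViewModel>
{
	private readonly IMediator _mediator;

	public ProfilePageQueryHandlerAsync(IMediator mediator)
	{
		_mediator = mediator;
	}

	public async Task<ProfilePageViewModel> Handle(ProfilePageQuery message)
	{
		// Reuse the single, tested user query handler.
		var user = await _mediator.SendAsync(new UserQuery { Id = message.UserId });

		return new ProfilePageViewModel { User = user };
	}
}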

To create a request message in Mediatr you create a basic class marked as an implementation of the IRequest or IAsyncRequest interface. I try to use async methods for everything I do in ASP.NET Core so I’ll stick to async examples in this post. You can optionally specify the return type you expect from the handler. An async handler will return that object wrapped in a task which can be awaited.

Your message class will define all of the properties expected to be in the message. Here is an example of a basic message which will send an Id out and which expects the response from the handler to be a UserViewModel.

public class UserQuery : IAsyncRequest<UserViewModel>
{
	public int Id { get; set; }
}

With a request message defined we can now go ahead and create a handler that will respond to any messages of that type. We need to make our class implement the IRequestHandler or in my case IAsyncRequestHandler interface, defining the input and output types.

public class UserQueryHandlerAsync : IAsyncRequestHandler<UserQuery, UserViewModel>
{
    public async Task<UserViewModel> Handle(UserQuery message)
    {
        // Could query a db here and get the columns we need.
        
        var viewModel = new UserViewModel();
        viewModel.UserId = 100;
        viewModel.Username = "sgordon";
        viewModel.Forename = "Steve";
        viewModel.Surname = "Gordon";

        return viewModel;
    }
}

This interface defines a single method named Handle which returns a Task of your output type. It expects your request message object as its parameter.

In my example I’m simply newing up a UserViewModel object, setting its properties and returning it. In the real world this would be where I query the database using Entity Framework and build up my view model from the resulting data.

I personally have been in the habit of keeping my request message and my response handler classes together in the same physical .cs file, but you can split them if you prefer. I’m normally keen on keeping one class to one file, but in this case since the two classes are very interrelated I’ve found it quicker to work when I can see both in the same file.

We now have everything wired up so finally it’s time to send a message from our controller.

public class UsersController : Controller
{
    private readonly IMediator _mediator;

    public UsersController(IMediator mediator)
    {
        if (mediator == null)
            throw new ArgumentNullException(nameof(mediator));

        _mediator = mediator;
    }

    [HttpGet]
    [Route("users/{userId}")]
    public async Task<IActionResult> UserDetails(int userId)
    {
        UserViewModel model = await _mediator.SendAsync(new UserQuery { Id = userId });

        if (model == null)
            return HttpNotFound();

        return View(model);
    }
}

The key thing to highlight here is the controller’s constructor accepting an IMediator object. This will be injected by the ASP.NET Core DI when the application runs. What’s very useful is that we can easily mock an IMediator and its response, which makes testing a breeze.

The UserDetails action itself expects a user id when it is called. This id gets bound from the route parameter by MVC.

The key line in the code above is where we send the mediator message. We do this by calling SendAsync on the IMediator object. We send a UserQuery object with the Id property set. This message will now be managed by Mediatr. It will locate the suitable handler, pass it the request message and return the response to our action.

As you can see, this has made our controller very light. The only code left is a basic check to return an appropriate not found response if the response to our Mediatr request is null. That won’t ever happen in my example, but in a real world app, if the database doesn’t find an object with the id provided, I return null instead of a UserViewModel. This is exactly how I like a controller to be; its single responsibility is to send the client an HTTP response of some kind to the user’s request. It doesn’t and shouldn’t need to know about our database or have any concerns with building up its view model directly.

Testing

Being good citizens we should always consider the testing process. Testing when using Mediatr and a CQRS style pattern is very simple. My approach has been to ensure that each handler has appropriate unit tests around the Handle method, testing the logic within. To do this we can new up a Mediatr handler in our test class, call the Handle method directly and run tests on the returned object to verify the result.

[Fact]
public async Task HandlerReturnsCorrectUserViewModel()
{ 
    var sut = new UserQueryHandlerAsync();
    var result = await sut.Handle(new UserQuery { Id = 100 });

    Assert.NotNull(result);
    Assert.Equal("Steve", result.Forename);
}

This is a bit of a contrived example, especially as my handler example really doesn’t perform any logic. However we can test for whatever is necessary on the returned result. You can check out the allReady code on Github to see some real examples of tests around the handlers used there. In those cases we often use an in memory Entity Framework DbContext object so that we can test the handler’s EF query returns the expected data from a known set of test data.

We can also test the controllers very easily by passing in a mock of the IMediator.

[Fact]
public async Task UserDetails_SendsQueryWithTheCorrectUserId()
{
    const int userId = 1;
    var mediator = new Mock<IMediator>();
    var sut = new UsersController(mediator.Object);

    await sut.UserDetails(userId);

    mediator.Verify(x => x.SendAsync(It.Is<UserQuery>(y => y.Id == userId)), Times.Once);
}

We create a mock IMediator using Moq and pass that in when instantiating the controller. Here I’ve called the UserDetails action with an Id and verified that a query has been sent to the mediator containing that Id.

If necessary you can set up your IMediator mock so that you define the data that is returned in response to a message. This can be useful if you want to validate your action’s behaviour in response to different results. You can mock up the response object using code such as…

var user = new UserViewModel
{
    UserId = 100,
    Username = "sgordon",
    Forename = "Steve",
    Surname = "Gordon"
};

var mediator = new Mock<IMediator>();
mediator.Setup(x => x.SendAsync(It.IsAny<UserQuery>())).ReturnsAsync(user);

If your controller performs any logic based on the returned object, you can now easily specify the different scenarios to test. Something I often do is write a test that verifies that when the Mediatr response is null, the action returns an HttpNotFound result. In a simple example that can be done in the following way…

[Fact]
public async Task UserDetailsReturnsHttpNotFoundResultWhenUserIsNull()
{
    var mediator = new Mock<IMediator>();

    var sut = new UsersController(mediator.Object);

    var result = await sut.UserDetails(It.IsAny<int>());

    Assert.IsType<HttpNotFoundResult>(result);
}

Summing Up

I’ve really taken to the pattern that Mediatr allows us to easily implement. It’s a personal choice of course, but my view is that it keeps my controllers clean and allows me to create handlers that have a single responsibility. It keeps things nicely separated, as nothing is too tightly bound together. I can easily change the behaviour of a handler, and as long as it still returns the correct object type my controllers never care.

As I’ve shown, the testing process is pretty nice, and if we ensure each handler is tested as well as the controllers, then we have good coverage of the behaviours we expect from the classes. A big bonus is that it already supports ASP.NET Core and is pretty simple to set up with the built-in DI container.

Mediatr also supports a publisher/subscriber pattern which I’ve yet to need in my code. It’s worth taking a look at though if you need multiple handlers to respond when an event occurs, and it’s something I plan to look into at some point.

I highly recommend trying out the Mediatr library and reviewing the pattern being used on the allReady project. It takes little time to set up and quickly becomes a comfortable flow when writing code. It’s made me think about what my models are involved in and helped me keep them focused and more robust.

NOTE: This post was written based on RC1 of ASP.NET Core and may not be current by the time RC2 and RTM are released.


Extending the ASP.NET Core 1.0 Identity SignInManager: Adding basic user auditing to ASP.NET Core

So far I have written a couple of posts in which I dive into the code for the ASP.NET Core 1.0 Identity library. In this post I want to do something a little more practical and look at extending the default identity functionality. I’m working on a project at the moment which will be very reliant on a strong user management system. As I move forward with that and build up the requirements I will need to handle things not currently available in the Identity library. Something missing from the current Identity library is user security auditing, an important feature for many real world applications where compliance auditors may expect such information to be available.

Before going further, please note that this code is not final, production ready code. At this stage I want to prove my concept and meet some initial requirements that I have. I expect I’ll end up extending and refactoring this code as my project develops. Also, at the time of writing ASP.NET Core 1.0 is at release candidate 1. We can expect some changes in RC2 and RTM which may require this code to be adjusted. Feel free to do so, but copy and paste at your own risk!

At this stage in my project, my immediate requirement is to store successful login, failed login and logout events in an audit table within my database. I would also like to collect the visitor’s IP address. This data might be useful after some kind of security breach, for example to review who was logged into the system as well as where from. It would also allow for some analysis of who is using the application and how often / at what times of day. Such data may prove useful to plan upgrades or to encourage more use of the application. Remember that if you record this information, particularly within a public facing SaaS style application, you may well need to include details of what data you’re recording and why in your privacy policy.

I could implement this auditing functionality within my controllers. For example I could update the Login action on the Account controller to write into an audit table directly. However I don’t really like that solution. If anyone implements a new controller/action to handle login or logout then they would need to remember to also add code to update the audit records. It makes the Login action method more responsible than it should be for performing the audit logic, when really this belongs deeper in the application.

If we take a look at the Login action on the Account controller we can see that it calls into an instance of a SignInManager. In a default MVC application this is setup in the dependency injection container by the call to AddIdentity within the Startup.cs class. The SignInManager provides the default implementations of sign in and sign out logic. Therefore this is a better candidate in which to override some of those methods to include my additional auditing code. This way, any calls to the sign in manager, from any controller/action will run my custom auditing code. If I need to change or extend my audit logic I can do so in a single class which is ultimately responsible for handling that activity.

Before doing anything with the SignInManager I needed to define a database model to store my audit records. I added a UserAudit class which defines the columns I want to store:

public class UserAudit
{
	[Key]
	public int UserAuditId { get; private set; }

	[Required]
	public string UserId { get; private set; }

	[Required]
	public DateTimeOffset Timestamp { get; private set; } = DateTime.UtcNow;

	[Required]
	public UserAuditEventType AuditEvent { get; set; }

	public string IpAddress { get; private set; }   

	public static UserAudit CreateAuditEvent(string userId, UserAuditEventType auditEventType, string ipAddress)
	{
		return new UserAudit { UserId = userId, AuditEvent = auditEventType, IpAddress = ipAddress };
	}
}

public enum UserAuditEventType
{
	Login = 1,
	FailedLogin = 2,
	LogOut = 3
}

In this class I’ve defined an Id column (which will be the primary key for the record), a column which will store the user Id string, a column to store the date and time of the audit event, a column for the UserAuditEventType which is an enum of the 3 available events I will be auditing and finally a column to store the user’s IP address. Note that I’ve made the UserAuditId a basic auto-generated integer for simplicity in this post, however in my final code I’m very likely going to use fluent mappings to make a composite primary key based on user id and the timestamp instead.
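
For reference, a sketch of what that fluent mapping might look like in the ApplicationDbContext (hypothetical, since this post sticks with the auto-generated integer key):

protected override void OnModelCreating(ModelBuilder builder)
{
	base.OnModelCreating(builder); // Identity needs its own mappings applied first

	// Composite primary key: one audit row per user per timestamp.
	builder.Entity<UserAudit>()
		.HasKey(a => new { a.UserId, a.Timestamp });
}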

I’ve also included a static method within the class which creates a new audit event record by taking in the user id, event type and the IP address. For a class like this I prefer this approach versus exposing the property setters publicly.

Now that I have a class which represents the database table I can add it to the entity framework DbContext:

public class ApplicationDbContext : IdentityDbContext<ApplicationUser>
{
	public DbSet<UserAudit> UserAuditEvents { get; set; }
}

At this point, I have a new table defined in code which needs to be physically created in my database. I will do this by creating a migration and applying it to the database. As of ASP.NET Core 1.0 RC1 this can be done by opening a command prompt from my project directory and then running the following two commands:

dnx ef migrations add "UserAuditTable"

dnx ef database update

This creates a migration which will create the table within my database and then runs the migration against the database to actually create it. This leaves me ready to implement the logic which will create audit records in that new table. My first job is to create my own SignInManager which inherits from the default SignInManager. Here’s what that class looks like before we extend the functionality:

public class AuditableSignInManager<TUser> : SignInManager<TUser> where TUser : class
{
	public AuditableSignInManager(UserManager<TUser> userManager, IHttpContextAccessor contextAccessor, IUserClaimsPrincipalFactory<TUser> claimsFactory, IOptions<IdentityOptions> optionsAccessor, ILogger<SignInManager<TUser>> logger)
		: base(userManager, contextAccessor, claimsFactory, optionsAccessor, logger)
	{
	}
}

I define my own class, with its constructor inheriting from the base SignInManager class. This class is generic and requires the type representing the user to be provided. I also have to implement a constructor accepting the components which the original SignInManager needs to function, passing these objects into the base constructor.

Before I implement the logic and override some of the SignInManager’s methods, I need to register this custom SignInManager class with the dependency injection framework. After checking out a few sources I found that I could simply register this after the AddIdentity services extension in my Startup.cs class. This will then replace the SignInManager previously registered by the Identity library.

Here’s what my ConfigureServices method looks like with this code added:

public void ConfigureServices(IServiceCollection services)
{
	// Add framework services.
	services.AddEntityFramework()
		.AddSqlServer()
		.AddDbContext<ApplicationDbContext>(options =>
			options.UseSqlServer(Configuration["Data:DefaultConnection:ConnectionString"]));

	services.AddIdentity<ApplicationUser, IdentityRole>()
		.AddEntityFrameworkStores<ApplicationDbContext>()
		.AddDefaultTokenProviders();

	services.AddScoped<SignInManager<ApplicationUser>, AuditableSignInManager<ApplicationUser>>();

	services.AddMvc();

	// Add application services.
	services.AddTransient<IEmailSender, AuthMessageSender>();
	services.AddTransient<ISmsSender, AuthMessageSender>();
}

The important line is services.AddScoped<SignInManager<ApplicationUser>, AuditableSignInManager<ApplicationUser>>(); where I specify that whenever a class requires a SignInManager<ApplicationUser>, the DI container will return our custom AuditableSignInManager<ApplicationUser> class. This is where dependency injection really makes life easier, as I don’t have to update multiple classes with concrete instances of the SignInManager. This one change in my Startup.cs file will ensure that all dependent classes get my custom SignInManager.

Going back to my AuditableSignInManager I can now make some changes to implement the auditing logic I require.

public class AuditableSignInManager<TUser> : SignInManager<TUser> where TUser : class
{
	private readonly UserManager<TUser> _userManager;
	private readonly ApplicationDbContext _db;
	private readonly IHttpContextAccessor _contextAccessor;

	public AuditableSignInManager(UserManager<TUser> userManager, IHttpContextAccessor contextAccessor, IUserClaimsPrincipalFactory<TUser> claimsFactory, IOptions<IdentityOptions> optionsAccessor, ILogger<SignInManager<TUser>> logger, ApplicationDbContext dbContext)
		: base(userManager, contextAccessor, claimsFactory, optionsAccessor, logger)
	{
		if (userManager == null)
			throw new ArgumentNullException(nameof(userManager));

		if (dbContext == null)
			throw new ArgumentNullException(nameof(dbContext));

		if (contextAccessor == null)
			throw new ArgumentNullException(nameof(contextAccessor));

		_userManager = userManager;
		_contextAccessor = contextAccessor;
		_db = dbContext;
	}

	public override async Task<SignInResult> PasswordSignInAsync(TUser user, string password, bool isPersistent, bool lockoutOnFailure)
	{
		var result = await base.PasswordSignInAsync(user, password, isPersistent, lockoutOnFailure);

		var appUser = user as IdentityUser;

		if (appUser != null) // We can only log an audit record if we can access the user object and its ID
		{
			var ip = _contextAccessor.HttpContext.Connection.RemoteIpAddress.ToString();

			UserAudit auditRecord = null;

			switch (result.ToString())
			{
				case "Succeeded":
					auditRecord = UserAudit.CreateAuditEvent(appUser.Id, UserAuditEventType.Login, ip);
					break;

				case "Failed":
					auditRecord = UserAudit.CreateAuditEvent(appUser.Id, UserAuditEventType.FailedLogin, ip);
					break;
			}

			if (auditRecord != null)
			{
				_db.UserAuditEvents.Add(auditRecord);
				await _db.SaveChangesAsync();
			}
		}

		return result;
	}

	public override async Task SignOutAsync()
	{
		await base.SignOutAsync();

		var user = await _userManager.FindByIdAsync(_contextAccessor.HttpContext.User.GetUserId()) as IdentityUser;

		if (user != null)
		{
			var ip = _contextAccessor.HttpContext.Connection.RemoteIpAddress.ToString();

			var auditRecord = UserAudit.CreateAuditEvent(user.Id, UserAuditEventType.LogOut, ip);
			_db.UserAuditEvents.Add(auditRecord);
			await _db.SaveChangesAsync();
		}
	}
}

Let’s step through the changes.

Firstly I specify in the constructor that I will require an instance of the ApplicationDbContext, since we’ll directly need to work with the database to add audit records. Again, constructor injection makes this nice and simple as I can rely on the DI container to supply the appropriate object at runtime.

I’ve also added some private fields to store some of the objects the class receives when it is constructed. I need to access the UserManager, DbContext and IHttpContextAccessor objects in my overrides.

The default SignInManager defines its public methods as virtual, which means that, since I’ve inherited from it, I can supply overrides for those methods. I do exactly that to implement my auditing logic. The first method I override is PasswordSignInAsync, keeping the signature the same as the original base method. I await and store the result of the base implementation, which performs the actual sign in logic. The base method returns a SignInResult object with the result of the sign in attempt. Now that I have this result I can use it to perform some audit logging.

I cast the user object to an IdentityUser so that I can access its ID property. Assuming this cast succeeds I can go ahead and log an audit event. I get the remote IP from the context, then I inspect the result and call its ToString() method. I use a switch statement to generate an appropriate call to the CreateAuditEvent method, passing in the correct UserAuditEventType. If a UserAudit object has been created I then write it into the database via the DbContext that was injected into this class when it was constructed.

I have a very similar override for the SignOutAsync method as well. In this case though I have to get the user via the HttpContext and use the UserManager to get the IdentityUser based on their user id. I can then write a logout audit record into the database. Running my application at this stage and performing some logins, login attempts with an incorrect password, and logouts, I can check my database and see the audit records being stored.


Summing Up

Whilst not yet fully featured, the code in this post hopefully demonstrates the initial steps we can follow to quite easily extend and override the ASP.NET Core Identity SignInManager class with our own implementation. I expect to be refactoring and extending this code further as my requirements determine.

For example, while the correct place to call the auditing logic is from the SignInManager, I will likely create an AuditManager class which has the responsibility of actually creating and writing the audit records. If I do this I will still need my overridden SignInManager class, which would require an injected instance of the AuditManager. As my audit needs grow, so will my AuditManager class, and some code will likely get reused within that class.

Including an extra class at this stage would have made this post a bit more complex and have taken me away from my initial goal of showing how we can extend the functionality of the SignInManager class. I hope that this post and the code samples prove useful to others looking to do similar extensions to the default behaviour.


How to Send Emails in ASP.NET Core 1.0

ASP.NET Core 1.0 is a reboot of the ASP.NET framework which can target the traditional full .NET framework or the new .NET Core framework. Together, ASP.NET Core and .NET Core have been designed to work cross platform and have a lighter, faster footprint compared to the current full .NET framework. Many of the .NET Core APIs are the same as they are in the full framework, and the team have worked hard to try and keep things reasonably similar where it makes sense and is practical to do so. However, as a consequence of developing a smaller, more modular framework of dependent libraries and, most significantly, making the move to support cross platform development and hosting, some of the libraries have been lost. Take a look at this post from Immo Landwerth which describes the changes in more detail and discusses considerations for porting existing applications to .NET Core.

I’ve been working with ASP.NET Core for quite a few months now and generally I have enjoyed the experience. Personally I’ve hit very few issues along the way and expect to continue using the new framework going forward wherever possible. Recently though, I did hit a roadblock on a project at work where I had a requirement to send email from within my web application. In the full framework I’d have used the SmtpClient class in the System.Net.Mail namespace. However in .NET Core this is not currently available to us.

Solutions available in the cloud world include services such as SendGrid which, depending on the scenario, I can see as a very reasonable solution. For my personal projects and tests this would indeed be my preferred approach, since I don’t have to worry about maintaining and supporting an SMTP server. However at work we have SMTP systems in place and a specialised support team who manage them, so I ideally needed a solution that would allow me to send emails directly, as we do in our traditional ASP.NET 4.x applications.

As with most coding challenges I jumped straight onto Google to see who else had had this requirement and how they had solved the problem. However I didn’t find as many documented solutions as I was expecting. Eventually I landed on this issue within the corefx repo on GitHub. That led me on to the MailKit library maintained by Jeffrey Stedfast, and it turned out to be a great solution for me as it has recently been updated to work on .NET Core.

In this post I will take you through how I got this working for the two scenarios I needed to tackle. Firstly sending mail directly via an SMTP relay and secondly the possibility to save the email message into an SMTP pickup folder. Both turned out to be pretty painless to get going.

Adding MailKit to your Project

The first step is to add the reference to the NuGet package for MailKit. I now prefer to use the project.json file directly to set up my dependencies. You’ll need to add the MailKit library – which is at version 1.3.0-beta6 at the time of writing this post – to your dependencies section in the project.json file.

On a vanilla ASP.NET Core web application your dependencies should look like this:

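A minimal sketch of the relevant part of that dependencies section, showing just the added package (the version is the one noted above; the project’s existing package references remain alongside it):

"dependencies": {
  "MailKit": "1.3.0-beta6"
}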

Once you save the change VS should trigger a restore of the necessary NuGet packages and their dependencies.

Sending email via an SMTP server

I tested this solution in a default ASP.NET Core web application project which already includes an IEmailSender interface and a class AuthMessageSender which just needs implementing. It was an obvious choice for me to test the implementation using this class as DI is already hooked up for it. For this post I’ll show the bare bones code needed to get started with sending emails via an SMTP server.

To follow along, open up the MessageServices.cs file in your web application project.

We need three using statements at the top of the file.

using MailKit.Net.Smtp;
using MimeKit;
using MailKit.Security;

The SendEmailAsync method can now be updated as follows:

public async Task SendEmailAsync(string email, string subject, string message)
{
	var emailMessage = new MimeMessage();

	emailMessage.From.Add(new MailboxAddress("Joe Bloggs", "jbloggs@example.com"));
	emailMessage.To.Add(new MailboxAddress("", email));
	emailMessage.Subject = subject;
	emailMessage.Body = new TextPart("plain") { Text = message };

	using (var client = new SmtpClient())
	{
		client.LocalDomain = "some.domain.com";                
		await client.ConnectAsync("smtp.relay.uri", 25, SecureSocketOptions.None).ConfigureAwait(false);
		await client.SendAsync(emailMessage).ConfigureAwait(false);
		await client.DisconnectAsync(true).ConfigureAwait(false);
	}
}

First we declare a new MimeMessage object which will represent the email message we will be sending. We can then set some of its basic properties.

The MimeMessage has a “from” address list and a “to” address list that we can populate with our sender and recipient(s). For this example I’ve added a single new MailboxAddress for each. The basic constructor for the MailboxAddress takes in a display name and the email address for the mailbox. In my case the “to” mailbox takes the address which is passed into the SendEmailAsync method by the caller.

We then add the subject string to the email message object and then define the body. There are a couple of ways to build up the message body, but for now I’ve used a simple approach to populate the plain text part using the message passed into the SendEmailAsync method. We could also populate an HTML body for the message if required.
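
One way to do that (a minimal sketch, which assumes the plain message string is safe to embed in markup) is MimeKit’s BodyBuilder, which constructs a multipart body with both plain text and HTML alternatives:

var builder = new BodyBuilder
{
	TextBody = message,
	HtmlBody = "<p>" + message + "</p>" // assumption: message needs no HTML encoding
};

emailMessage.Body = builder.ToMessageBody();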

That leaves us with a very simple email message object, just enough to form a proof of concept here. The final step is to send the message and to do that we use a SmtpClient. Note that this isn’t the SmtpClient from system.net.mail, it is part of the MailKit library.

We create an instance of the SmtpClient wrapped with a using statement to ensure that it is disposed of when we’re done with it. We don’t want to keep connections open to the SMTP server once we’ve sent our email. You can if required (and I have done in my code) set the LocalDomain used when communicating with the SMTP server. This will be presented as the origin of the emails. In my case I needed to supply the domain so that our internal testing SMTP server would accept and relay my emails.

We then asynchronously connect to the SMTP server. The ConnectAsync method can take just the URI of the SMTP server or, as I’ve done here, be overloaded with a port and SSL option. In my case, when testing with our local test SMTP server, no SSL was required, so I specified this explicitly to make it work.

Finally we can send the message asynchronously and then close the connection. At this point the email should have been fired off via the SMTP server.

Sending email via an SMTP pickup folder

As I mentioned earlier, I also had a requirement to drop a message into an SMTP pickup folder running on the web server, rather than sending it directly through an SMTP server connection. There may well be a better way to do this (I got it working in my test so didn’t dig any deeper) but what I ended up doing was as follows:

public async Task SendEmailAsync(string email, string subject, string message)
{
	var emailMessage = new MimeMessage();

	emailMessage.From.Add(new MailboxAddress("Joe Bloggs", "jbloggs@example.com"));
	emailMessage.To.Add(new MailboxAddress("", email));
	emailMessage.Subject = subject;
	emailMessage.Body = new TextPart("plain") { Text = message };

	using (StreamWriter data = System.IO.File.CreateText("c:\\smtppickup\\email.txt"))
	{
		emailMessage.WriteTo(data.BaseStream);
	}
}

The only real difference from my earlier code is the removal of the SmtpClient usage. Instead, after generating my email message object, I create a StreamWriter which creates a text file in a local directory. I then use the MimeMessage.WriteTo method, passing in the base stream, so that the RFC822 email message file is created in my pickup directory. This is then picked up and sent via the SMTP system.

Summing Up

MailKit seems like a great library and it’s solved my immediate requirements. There are indications that the Microsoft team will be working on porting their own SmtpClient to support ASP.NET Core at some stage, but it’s great that the community has solved the problem for those adopting / testing .NET Core now.


ASP.NET Identity Core 1.0.0 (Under the Hood) Part 1. Where did my salt column go?

Let me prefix this post / series of blog posts with two important notes.

  1. I am not a security expert. This series of posts records my own dive into the ASP.NET Identity Core code, publicly available on GitHub which I’ve done for my own self-interest to try and understand how it works and what is available to me as a developer. Do not assume everything I have interpreted to be 100% accurate or any code samples as suitable copy and paste production code.
  2. This is written whilst reviewing source mostly from the 3.0.0-rc1 release tag. I may stray into more recent dev code if implementations have changed considerably, but will try to highlight when I do so. One very important point here is that at the time of writing this post, Microsoft have announced a renaming strategy for ASP.NET 5. Due to the brand new codebase, this is now being called ASP.NET Core 1.0 and the underlying framework will be .NET Core 1.0. This is going to result in namespace changes. I’ve used the anticipated new namespaces here (and will update if things change again).

Goals of this post / series

My personal reasons for digging into this code in the first place are simply to learn and further my own understanding. I’ll be investigating the sections that interest me and summarising as I do. I’m not sure where my investigations will take me, so this may be one post or many. I’m blogging about my findings as I hope they may be of interest to others wanting to know more about what happens when using this library in your projects. I’m going to try to edit this together in a readable fashion, but I do expect it may be a little jumpy as I focus on the parts of the code I personally wanted to know more about. If you’re reading this blog post and see errors (and I expect there will be some) then please comment or contact me so I can update as appropriate.

Password Basics

I feel that before I go further I should try to explain a few key terms and concepts around password security. There are many more detailed resources which discuss security that go deeper into this but this primer should help when reading the rest of the post. If you know all of this then you may want to skip down to the next section below where I’ll start looking at the ASP.NET Identity code specifically.

With any login system, at a minimum we need to store two key pieces of information. A username and a password. The username uniquely identifies the user within your application and the password is a piece of information that only they should hold, used to verify to the application that they are, who they claim to be. This is the concept of authentication within a software application. In order for our applications to work with users we need to store these pieces of information somewhere so that when the user provides them, the application can look them up and compare them. Very commonly this will involve a database store and in ASP.NET that will likely be an SQL database table.

A simple solution would be to store the username as a string value (VARCHAR) and perhaps do the same for the password. This meets the initial requirement as we can now compare these against what a user provides us and if they match, open the door to the application. But there is a lot more to be considered and the most important element is securing that password from malicious intent. The problem being faced is how to store the password in such a way that it’s not plainly readable to anyone who has, or gains access to the database in which it has been stored. It’s well accepted that people will take the path of least complexity with passwords and often use the same password in many places. They will also likely use the same username where they can (especially if that username is their email address). So the risks go way beyond just your application should a password be compromised. It could well open the door to many other places where your user has also registered.

While some might assume that the database is secure and provides all the safety one needs for their user’s data, recent and numerous data exposures have proven that there are many people out there who can probably get to the database if they want to. I’m not planning to dive into the ways and means of that now but if you want more info check out posts from the likes of Troy Hunt who cover some other security aspects to consider. Even if the database is not compromised storing the passwords in plain text would still be a very poor decision since your internal staff also should not have access to them. A password should be as a user expects and be a secret to themselves only. They should be able to expect that you will take due care and responsibility when working with such a piece of data.

So we now want to store the password in a form that is secure from anyone who is looking at the database table, but we also need to be able to verify the user at a later time by comparing their input, to the password stored for their account. A solution could be reversible encryption where we encrypt the password data when we save it to the database and then decrypt it later to compare it with user input. However that also has concerns and issues, again slightly outside the scope of my intentions for this blog post. The short version being that anything reversible still presents a fair degree of insecurity when it comes to passwords.

This is where the concept of a one way hash comes into the picture. A hash is a mathematically repeatable process whereby we can take the password, put it through hashing and then store that data, instead of a plain text password. The word “repeatable” is important here. The reason hashing works is that if you provide the same text and put it through the same hashing process, the result will always be the same. At login we take the provided password, hash it (using the same algorithm used when the password was first saved to the database) and then compare the hashed value against the stored value. If they match, we have authenticated the user.
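To make that repeatability concrete, here's a minimal standalone sketch. Note this uses a bare SHA-256 hash purely for illustration; as the rest of this section explains, a bare hash alone is not a safe way to store passwords.

using System;
using System.Security.Cryptography;
using System.Text;

public static class HashDemo
{
    // Illustration only: a bare hash with no salt or iterations is NOT
    // a safe password storage scheme (see the sections below).
    public static string Hash(string password)
    {
        using (var sha = SHA256.Create())
        {
            return Convert.ToBase64String(
                sha.ComputeHash(Encoding.UTF8.GetBytes(password)));
        }
    }

    public static void Main()
    {
        // The same input always produces the same output, which is what
        // makes the login-time comparison possible.
        Console.WriteLine(Hash("P@ssw0rd") == Hash("P@ssw0rd")); // True
        Console.WriteLine(Hash("P@ssw0rd") == Hash("p@ssw0rd")); // False
    }
}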

The advantages are that if the database is compromised, the hashed value is more secure than plain text, and the hash cannot be reversed to regenerate the original plain text password. But we're still not done. With the advent of faster processing, even hashes are breakable. For a start, users commonly choose simple and guessable passwords. Attackers can take advantage of that by pre-calculating hash values for common passwords, using the same algorithms an application might use, to build a mapping (a lookup table) between potential passwords and their hashes. They can then use this against a compromised database and quickly work out many of the passwords in use.

We therefore must go a step further and secure against this too. So finally we come to the concept of a salt, which adds complexity and uniqueness to the passwords being hashed. Think of an example where two users choose the exact same password for your system. Those passwords, when hashed, will be identical in the database, and that could be valuable information to hackers. To make things unique, we use a random set of bytes, called a salt. This salt is combined with the password so that we get a unique password, and therefore a unique hash, for each user, even if they use the same passwords. This mainly helps defend against dictionary-based attacks, as hackers would have to recalculate their lookup table once per user, per salt.
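Here's a small sketch of the salt idea, using .NET's built-in Rfc2898DeriveBytes (a PBKDF2 implementation) rather than Identity's own hasher: two users with an identical password end up with different stored hashes because each gets their own random salt.

using System;
using System.Security.Cryptography;

public static class SaltDemo
{
    public static void Main()
    {
        // Two accounts choose the identical password...
        const string password = "P@ssw0rd";

        // ...but each gets its own random 16 byte salt.
        var rng = RandomNumberGenerator.Create();
        byte[] saltA = new byte[16];
        byte[] saltB = new byte[16];
        rng.GetBytes(saltA);
        rng.GetBytes(saltB);

        // The salt is mixed into the derivation, so the stored
        // values differ even though the passwords match.
        byte[] hashA = new Rfc2898DeriveBytes(password, saltA, 10000).GetBytes(32);
        byte[] hashB = new Rfc2898DeriveBytes(password, saltB, 10000).GetBytes(32);

        Console.WriteLine(Convert.ToBase64String(hashA));
        Console.WriteLine(Convert.ToBase64String(hashB)); // different to hashA
    }
}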

Finally we have hashing iterations. Hashing once is a start, but with modern computing power it's pretty fast to generate a lookup table. Iterations help by repeating the hashing process many times, making it more computationally expensive and time consuming. The aim is to make attacking a password database too much work versus the potential reward.
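A rough way to see that cost in action (the timings are illustrative only and will vary by machine):

using System;
using System.Diagnostics;
using System.Security.Cryptography;

public static class IterationDemo
{
    public static void Main()
    {
        byte[] salt = new byte[16];
        RandomNumberGenerator.Create().GetBytes(salt);

        foreach (int iterations in new[] { 1, 1000, 100000 })
        {
            var timer = Stopwatch.StartNew();
            new Rfc2898DeriveBytes("P@ssw0rd", salt, iterations).GetBytes(32);
            timer.Stop();

            // Each extra iteration multiplies the work an attacker must
            // do per password guess.
            Console.WriteLine($"{iterations,7} iterations: {timer.ElapsedMilliseconds} ms");
        }
    }
}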

ASP.NET Identity and Password Hashing

In Identity 2.0 the hashed password and the salt were stored in separate columns in the database. It was only when I looked at a database for a new ASP.NET Core website that I noticed that there was no salt column and I wondered what was happening (hence this blog post)!

So, to understand, let's follow the flow of registering a user, with a focus on the password, so we can see what ASP.NET Core Identity 1.0.0 does to securely save it to the database (and answer my question – where did the salt column go!)

We start in the AccountController with a call to CreateAsync on the Microsoft.AspNetCore.Identity.UserManager class, passing in an IdentityUser object and the password to be hashed and saved.
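For context, the call looks roughly like this in the default project templates (names such as ApplicationUser and model come from those templates and are simplified here):

// Inside the Register POST action of the AccountController (simplified).
var user = new ApplicationUser { UserName = model.Email, Email = model.Email };

// CreateAsync runs the password validators, hashes the password and
// persists the new user via the configured user store.
var result = await _userManager.CreateAsync(user, model.Password);

if (result.Succeeded)
{
    await _signInManager.SignInAsync(user, isPersistent: false);
}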

After running through any password validators, we call on an instance of an IPasswordHasher<TUser> (which is injected into the UserManager when it is first constructed).

The PasswordHasher is constructed with an instance of PasswordHasherOptions, the class which defines the settings used for the password hashing. By default the compatibility mode is set to Identity V3 mode, but if you need to support V2 and below this can be set in the options. There is also a setting for the number of iterations to use when creating the hash, which defaults to 10,000. Finally, this class defines and creates a static instance of the RandomNumberGenerator found in mscorlib.
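If you need different values, these options can be configured in ConfigureServices like any other options class; a quick sketch (the iteration count here is just an example figure):

services.Configure<PasswordHasherOptions>(options =>
{
    // Raise the work factor above the default of 10,000 iterations.
    options.IterationCount = 50000;

    // Only needed if you must verify hashes produced by Identity V2.
    options.CompatibilityMode = PasswordHasherCompatibilityMode.IdentityV2;
});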

UserManager.CreateAsync calls the HashPassword method.

The HashPassword method checks the compatibility mode in the options and flows through to the appropriate method to handle either V2 or V3 Identity. We'll look at the V3 flow here, using HashPasswordV3.

Here is the code…

private byte[] HashPasswordV3(string password, RandomNumberGenerator rng)
{
    // Delegates to the full overload below, using the Identity V3 defaults.
    return HashPasswordV3(password, rng,
        prf: KeyDerivationPrf.HMACSHA256,
        iterCount: _iterCount,
        saltSize: 128 / 8,
        numBytesRequested: 256 / 8);
}

private static byte[] HashPasswordV3(string password, RandomNumberGenerator rng, KeyDerivationPrf prf, int iterCount, int saltSize, int numBytesRequested)
{
    // Produce a version 3 (see comment above) text hash.
    // Layout: { 0x01, prf (UInt32), iteration count (UInt32), salt size (UInt32), salt, subkey }
    byte[] salt = new byte[saltSize];
    rng.GetBytes(salt);
    byte[] subkey = KeyDerivation.Pbkdf2(password, salt, prf, iterCount, numBytesRequested);

    var outputBytes = new byte[13 + salt.Length + subkey.Length];
    outputBytes[0] = 0x01; // format marker
    WriteNetworkByteOrder(outputBytes, 1, (uint)prf);
    WriteNetworkByteOrder(outputBytes, 5, (uint)iterCount);
    WriteNetworkByteOrder(outputBytes, 9, (uint)saltSize);
    Buffer.BlockCopy(salt, 0, outputBytes, 13, salt.Length);
    Buffer.BlockCopy(subkey, 0, outputBytes, 13 + saltSize, subkey.Length);
    return outputBytes;
}

Breaking down the parameters of the main overloaded method:

  1. A string representing the password to be hashed.
  2. An instance of a RandomNumberGenerator (defined in the mscorlib assembly). I'm not going to delve into how that code works here, but the CoreCLR project on GitHub contains the code if you want to take a look for yourself.
  3. The choice of PRF (pseudorandom function family) to use for the key derivation. HMACSHA256 is the standard for Identity V3.
  4. An integer representing the iteration count to use when hashing (coming from the PasswordHasherOptions).
  5. The size in bytes of the salt: 128 bits / 8 = 16 bytes.
  6. The number of bytes for the hashed password: 256 bits / 8 = 32 bytes.

A new byte array is created to hold the 16 byte salt which is generated with a call to GetBytes(salt) on the RandomNumberGenerator. This populates the byte array with 16 random bytes. Then a hashed password is created using PBKDF2 (Password-Based Key Derivation Function 2) key derivation, taking the password, the salt, the PRF of HMACSHA256, the number of iterations to use and the number of bytes required as the length of the hashed password.
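The same derivation can be called directly via the Microsoft.AspNetCore.Cryptography.KeyDerivation package if you want to experiment with it outside of Identity; a small sketch using the V3 defaults described above:

using System;
using System.Security.Cryptography;
using Microsoft.AspNetCore.Cryptography.KeyDerivation;

public static class Pbkdf2Demo
{
    public static void Main()
    {
        // A 16 byte (128 bit) random salt, as in HashPasswordV3.
        byte[] salt = new byte[128 / 8];
        RandomNumberGenerator.Create().GetBytes(salt);

        // Derive a 32 byte (256 bit) subkey using the V3 parameters:
        // HMACSHA256 and 10,000 iterations.
        byte[] subkey = KeyDerivation.Pbkdf2(
            password: "P@ssw0rd",
            salt: salt,
            prf: KeyDerivationPrf.HMACSHA256,
            iterationCount: 10000,
            numBytesRequested: 256 / 8);

        Console.WriteLine(Convert.ToBase64String(subkey));
    }
}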

We now have a hashed password and the random salt used to generate it. In earlier versions of Identity I saw these being saved into two separate columns inside the database. However, with Identity 3.0 / ASP.NET Core Identity 1.0 there is only the PasswordHash column. To save both the salt and the hash, a new byte array is created, sized to hold the salt + the hashed password + 13 extra bytes used to store some metadata about the hashing which took place. These form a single byte array which is Base64 encoded as a string to save into the database.

Construction of the Final Byte Array

The first byte is a format marker, and for Identity 3.0 / ASP.NET Core 1.0.0 this is set to 0x01.

The next 4 bytes store the PRF that was used. This is the value of the enum available in Microsoft.AspNetCore.Cryptography.KeyDerivation, which belongs to the ASP.NET Core DataProtection assembly (source on GitHub).

The next 4 bytes store the iteration count used to generate the hash.

The next 4 bytes store the salt size.

Notice that each of these integer values is written with a call to WriteNetworkByteOrder, a helper method within the PasswordHasher:

private static void WriteNetworkByteOrder(byte[] buffer, int offset, uint value)
{
    buffer[offset + 0] = (byte)(value >> 24);
    buffer[offset + 1] = (byte)(value >> 16);
    buffer[offset + 2] = (byte)(value >> 8);
    buffer[offset + 3] = (byte)(value >> 0);
}

In short, this splits the 32-bit unsigned integer into its component bytes using bitwise shifts and stores them in the array in big-endian (network byte) order.

HashPasswordV3 then copies in the bytes from the salt and the password hash to create the final byte array, which is returned to the main HashPassword method and converted to a Base64 encoded string. This string is then written to the database.
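To see those header fields in a value from your own database, you could unpack the Base64 string along these lines (my own sketch for illustration, not the library's verification code):

using System;

public static class HashFormatDemo
{
    // The inverse of WriteNetworkByteOrder: rebuild a 32 bit value
    // from 4 bytes stored in big-endian order.
    private static uint ReadNetworkByteOrder(byte[] buffer, int offset)
    {
        return ((uint)buffer[offset + 0] << 24)
             | ((uint)buffer[offset + 1] << 16)
             | ((uint)buffer[offset + 2] << 8)
             | ((uint)buffer[offset + 3]);
    }

    public static void Describe(string storedPasswordHash)
    {
        byte[] bytes = Convert.FromBase64String(storedPasswordHash);

        Console.WriteLine("Format marker: 0x{0:X2}", bytes[0]);                // 0x01 for V3
        Console.WriteLine("PRF:           " + ReadNetworkByteOrder(bytes, 1)); // 1 = HMACSHA256
        Console.WriteLine("Iterations:    " + ReadNetworkByteOrder(bytes, 5)); // 10,000 by default
        Console.WriteLine("Salt size:     " + ReadNetworkByteOrder(bytes, 9)); // 16 bytes
    }
}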

And there we have it. That's a fairly deep dive into how passwords are hashed inside ASP.NET Core Identity 1.0.0 and then stored in the database.
