.NET South East – May Event: New Speakers Night

It’s been a while since I last got around to blogging about a .NET South East meetup event. I wanted to make sure I did this month, as the plan for this week’s event was close to my heart. I had the idea back in December / January to try and organise a night for new speakers. The motivation behind that was my own experience. I’ve blogged about that in detail, but in case you don’t have a few hours spare to read both parts, the TL;DR is as follows…

Throughout my life, I’ve always hated public speaking. The idea of giving a talk or even an introduction to a room of people filled me with a fair degree of dread. Last year I faced my fear by taking part in an ALT.NET show and tell evening. I had 20 minutes to share my experience of working on the Humanitarian Toolbox project. While terrifying beforehand, the experience itself was actually quite enjoyable, and the rush and positive feedback afterwards were a great feeling. After that first talk, I ended up speaking at two other user group events and two conferences in 2018. While I still get nervous, I’ve found that I really enjoy presenting, and this year I am increasing the number of events I submit to and speak at.

Back to the meetup; I wanted, if possible, to offer other members of the community a safe place to try out public speaking. I set aside a date and started promoting the idea to our members. My biggest concern was that no one would be interested and the idea would fail at stage one. However, after a couple of months of asking for speakers, people started to come forward. I know from experience that taking that initial step and committing is no small thing for novice speakers.

With the speakers lined up, I was feeling better about the idea. The next concern was attendee numbers. I was conscious that attending a meetup requires people to give up a free evening, so people are selective about which meetups they choose to attend. Frankly, I wasn’t sure whether an evening of new speakers giving shorter talks would draw in attendees. That concern slipped away as I saw that our RSVP numbers on meetup.com were looking healthy. I was very pleased that members of the community were coming together to support our peers and to join us for the evening.

The Event

The evening began as normal with me introducing the meetup and then going on to present some news items. I wasn’t short of content for this event, holding it the week after MS Build! One slight hitch was that the slides I’d carefully crafted at home hadn’t synced to my OneDrive, so with an hour to go I found myself rapidly recreating the content!

I won’t go into too much detail here besides sharing some links but the items we covered were…

.NET Core 3.0 roadmap announcement – The main goal of this release is to support WPF and WinForms workloads.
ASP.NET Core 2.1 RC1 released – With RTM due by end of May.
Visual Studio Live Share – Announcement of the public preview release.
Intellicode – AI assisted IntelliSense based on machine learning across 2,000 open source projects. Now available in experimental preview.
ML.NET – An open-source, cross-platform, machine learning library.
Visual Studio 15.7 released

With the news complete, we entered the main part of the evening. I’d allocated each speaker twenty minutes of presentation time, plus five minutes for questions.

First up was Dave Mateer, who presented a demo-heavy talk. Presenting demos is always daunting, but Dave’s went off without a hitch. Dave crammed a lot into his 20 minutes, demonstrating how he’d taken a legacy, poorly configured WordPress site and migrated it to run in the cloud on Azure AKS. AKS is Microsoft’s managed Kubernetes service and by all accounts looks like a great way to get started with containers in production. Dave highlighted the salient things he’d learned along the way, such as how data volumes can be used to persist data outside of the container. He also highlighted the power of scripting, which means he can quickly spin up and, when required, delete his Kubernetes environments. Within his twenty minutes, Dave had taken us on a journey from running a container locally, through to a scripted deployment to production, with the site running live on Azure. It was a great talk and really interesting to learn from Dave’s experience.

Dave Mateer presents at .NET South East

Next up was Steve Collins, who talked about a SOLID approach to ASP.NET Core. His focus was on configuration and what he’s learned as he began working with ASP.NET Core. Steve first explained the history of configuration through previous versions of ASP.NET. He then brought us up to date by showing how ASP.NET Core provides DI-friendly configuration out of the box, along with the options pattern for accessing your configuration values. One of the gotchas that Steve highlighted was the need to use the IOptions and IOptionsSnapshot interfaces to access strongly typed configuration in your dependent classes. This isn’t necessarily easy to discover for newcomers to ASP.NET Core. Steve then showed how he’d built a more intuitive pattern over the top which allows configuration to be accessed via a bridge class. It was a great talk and Steve touched on some patterns I’m finding myself using on my latest projects. Steve has documented the content that formed this talk in his great four-part blog series, which I recommend you go and check out.

Steve Collins presents at .NET South East

We then had a short break before our final talk of the evening. Alex McAuliffe took to the stage to talk about “Falling down holes for beginners”. In this talk, he shared his experience of working on and maintaining an open source project. Alex covered some of the positives of open source before also discussing some things to watch out for. One of the really salient points regarding running your own project was around scoping and focus. As Alex described, it’s very easy to get excited about lots of areas of the code and to try to make everything “perfect”, which can easily pull focus away from actually completing things. Alex also talked about an unexpected outcome of working on the project: burnout. It’s not always something you would consider when working on a personal project, but it is certainly something to watch out for. Alex also shared some thoughts on tools for source control and for maintaining code quality. Finally, he concluded with some resources, including blogs and people to follow on Twitter. Alex’s slides are available online.

Alex McAuliffe presents at .NET South East

I was extremely impressed by all three talks. Dave, Steve and Alex had clearly put a lot of effort into preparing their slides, demos and content. Despite the quite short time limit, each speaker packed a lot of great content into their allotted time. Having spoken to some of the attendees after the event, I got the sense that everyone had enjoyed listening to the talks and the format of the evening in general. I want to repeat my thanks once again to the speakers for volunteering and taking part. I know that standing up in front of a crowd can be intimidating and I hope that all three enjoyed the experience overall.

ASP.NET Core Dependency Injection – Registering Implementations Using Delegates
Using the delegate implementation factory extension methods with Microsoft.Extensions.DependencyInjection

Today I want to continue my series of posts (see links at the end of this post) focusing on the Microsoft dependency injection library included by default in new ASP.NET Core projects. In this post I want to dive into another lesser-known capability of the default DI library.

I was recently reviewing some code where the developer had brought in StructureMap as the container for an ASP.NET Core application. My colleague was using it in place of the Microsoft.Extensions.DependencyInjection package since he wanted to take advantage of some more advanced registrations. StructureMap extends the abstractions for DI provided by Microsoft and so can be easily swapped in as the DI container.

However, when reviewing the code I noticed that we could actually achieve the required registrations using the built-in DI instead, removing the extra dependency on StructureMap. Also, from some performance benchmarks I’ve seen, I believe that the Microsoft DI performs pretty well when compared to StructureMap. NOTE: I can’t seem to locate the source I’d previously read. If/when I track it down I’ll update this post with a link.

The exact example I was looking at is a bit too complicated to explain and demonstrate here. Instead I have put together a simpler example based loosely on the real-world case. In our application we were taking a dependency on an internal library that at this point we didn’t want to modify. One of the classes we wanted to use and resolve via DI looked a little like this…
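Since I can’t share the internal library code, here’s a simplified sketch of its shape. The constructor parameters match the real class described below, but the GetIndexName member, the IDateTimeProvider members and the index name format are illustrative assumptions:

```csharp
using System;
using System.Globalization;

public interface IIndexNameProvider
{
    string GetIndexName(); // illustrative member – the real interface may differ
}

public interface IDateTimeProvider
{
    DateTime UtcNow { get; } // illustrative member
}

public class WeekBasedElasticIndexNameProvider : IIndexNameProvider
{
    private readonly string _baseIndexName;
    private readonly IDateTimeProvider _dateTimeProvider;

    // The string parameter is what makes a straightforward interface-to-implementation
    // registration awkward, since the container can't supply it on its own.
    public WeekBasedElasticIndexNameProvider(string baseIndexName, IDateTimeProvider dateTimeProvider)
    {
        _baseIndexName = baseIndexName;
        _dateTimeProvider = dateTimeProvider;
    }

    public string GetIndexName()
    {
        // e.g. "logs-2018-20" – the base name plus a week-of-year suffix (illustrative)
        var now = _dateTimeProvider.UtcNow;
        var week = CultureInfo.InvariantCulture.Calendar.GetWeekOfYear(
            now, CalendarWeekRule.FirstFourDayWeek, DayOfWeek.Monday);

        return $"{_baseIndexName}-{now.Year}-{week:00}";
    }
}
```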

We wanted to be able to register the IIndexNameProvider type, with the implementation returning this external WeekBasedElasticIndexNameProvider class. However, this presented a problem, since the constructor expects a string parameter defining the baseIndexName. We weren’t in a position to adjust this class, so my colleague had used StructureMap to register it as follows:
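A sketch of that StructureMap registration looks something like this. I’m assuming a StructureMap 4 style container setup here, and the IElasticsearchConfig shape is illustrative (the real config type exposes more than just the prefix):

```csharp
using StructureMap;

public interface IElasticsearchConfig
{
    string IndexNamePrefix { get; } // illustrative – the real config type has more members
}

public static class ContainerConfiguration
{
    public static IContainer BuildContainer()
    {
        return new Container(x =>
        {
            // Pull the prefix and the date/time provider from the container and
            // use them to construct the week based index name provider.
            x.For<IIndexNameProvider>()
                .Singleton()
                .Use("week based index name provider", ctx =>
                    new WeekBasedElasticIndexNameProvider(
                        ctx.GetInstance<IElasticsearchConfig>().IndexNamePrefix,
                        ctx.GetInstance<IDateTimeProvider>()));
        });
    }
}
```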

In this registration, the IElasticsearchConfig is loaded from DI and its IndexNamePrefix property is used to supply the baseIndexName required by the WeekBasedElasticIndexNameProvider. The IDateTimeProvider is also surfaced via DI and passed into the constructor for the WeekBasedElasticIndexNameProvider instance.

It turns out that the Microsoft DI is capable of a similar behaviour. There are overloads of the Add extension methods (e.g. AddSingleton) which accept a Func<IServiceProvider, TService> as the implementation factory. This delegate is expected to return an instance of a suitable object for the registration. To achieve the above registration using the Microsoft DI we can use this code to register our service…
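Here’s a sketch of that registration in ConfigureServices, reusing the types from the earlier sketches (GetRequiredService comes from the Microsoft.Extensions.DependencyInjection namespace):

```csharp
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // IElasticsearchConfig and IDateTimeProvider are assumed to be registered elsewhere.

        services.AddSingleton<IIndexNameProvider>(serviceProvider =>
        {
            var config = serviceProvider.GetRequiredService<IElasticsearchConfig>();
            var dateTimeProvider = serviceProvider.GetRequiredService<IDateTimeProvider>();

            return new WeekBasedElasticIndexNameProvider(config.IndexNamePrefix, dateTimeProvider);
        });
    }
}
```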

The delegate that this extension method takes as a parameter expects us to return an implementation of the IIndexNameProvider interface. It expects an IIndexNameProvider since that is what has been defined in the generic parameter for the AddSingleton method. In the delegate we get access to an IServiceProvider which we can use to access other registered services.

This delegate will be called the first time that an instance of IIndexNameProvider is required as a dependency by any of our other services registered in the DI container. At that point, the delegate is executed, retrieving the IElasticsearchConfig and IDateTimeProvider from the IServiceProvider. It uses the IElasticsearchConfig to get the required IndexNamePrefix value.

Now that we have the IndexNamePrefix value and an IDateTimeProvider, we can construct and return an instance of the WeekBasedElasticIndexNameProvider which implements IIndexNameProvider. Using this pattern we were able to default back to the Microsoft DI.

It’s likely to be fairly infrequent that you’ll need to register services using delegates in this way, but should you need to do so it’s perfectly possible. Using this approach you can use other services registered in DI to configure or control the implementation that the DI container builds.

Other posts in this series

ASP.NET Core Dependency Injection – How to Register Generic Types
ASP.NET Core Dependency Injection – Registering Multiple Implementations of an Interface

Contributing to the Microsoft ASP.NET Documentation
My experience of writing for docs.microsoft.com

Back in February I spotted an issue on the ASP.NET Core Docs repository. The issue was a requirement for new documentation about the IHttpClientFactory feature being added in ASP.NET Core 2.1.

I’d been following the work on IHttpClientFactory for a while and had written a couple of posts about the functionality based on the nightly builds. I’d been keen to try my hand at helping with the docs.microsoft.com site for a while and this seemed like a great chance to contribute. In this post I want to share my experience.

What is docs.microsoft.com?

Before I go further I should explain what docs.microsoft.com is. It’s Microsoft’s new (introduced in 2016) documentation portal, built from the ground up as a replacement for a number of prior documentation sites such as TechNet and MSDN. I believe it started with documentation for .NET Core / ASP.NET Core and has quickly grown to include documentation for many other Microsoft products and services.

One of the exciting and interesting changes is that many of the product teams have made their documentation open source. This allows members of the community to submit content and correct any mistakes they find. With more rapidly evolving products, keeping up is a real challenge. Allowing the community to assist is a great move and I believe has really improved the quality and usefulness of the content.

Additionally, an amazing team of managers and content writers has been assembled at Microsoft to build and maintain the improved documentation.

Getting Involved

So, back to my experience!

The first step was to show an interest and reach out to offer my contribution. I did this by commenting on the GitHub issue for the feature.

Scott Addie, a senior content developer at Microsoft who works on the ASP.NET docs team, quickly responded to take up my offer. At this point we started to outline the content in collaboration with Ryan and Glenn from the ASP.NET team. You can follow that part of the story in the issue comments.

Once the plan and outline were agreed, I was free to start work on the content.

Documentation

To get started, all I needed to do was to fork and clone the repository on GitHub. If those concepts are new to you, check out my YouTube playlist for some introductory videos on how that process works.

With the docs content on my machine I started a branch and began outlining some content. All of the documentation is written in markdown. I prefer to use Visual Studio Code for markdown editing, so I opened it up and created a document.

I began in the same way as I do for my blog posts, by outlining some initial sections and jotting down some notes. I like to organise my plan early on as it allows me to figure out a flow for the content and how I will build up information. Once the outline was in place I started to flesh out the content. Often this required me to research the feature by trying out samples and reading the source code. Writing documentation (and blog posts) is a great way to learn more about a product or feature as you look more closely than you might normally.

I made an initial pull request to the docs repository so that I could show the team my work and get feedback as I went. Being new to contributing I wanted to make sure I was on the right track. Scott was quickly on the case and offered some valuable feedback.

One of the big differences between writing for my blog and writing for the documentation is the style. In my blog I’m often referring to my personal experience and sharing with readers in a conversational style. For the docs the style is less personal and more concise. When reading documentation, users are often looking to get going quickly so producing tight, clear content is important. At first I found the switch in styles a little difficult, but after a while I was having to correct myself less often.

From there the work continued whenever I could get time to spend on the documentation. I was making contributions in the evenings, weekends and during my lunch breaks at work.

Building a Sample Application

One of the other requirements for good documentation is a sample application. These are often made available in the docs repository and are intended to provide a quick start for developers. Scott asked if it would be possible for me to produce the sample for the feature. The sample would also be used to support code snippets in the documentation. This is a really clever part of the docfx functionality which allows code snippets to be referenced using a special syntax in the markdown file. The code from the sample is then pulled into the documentation by a build process and stays up to date as the sample changes. The syntax even supports highlighting specific lines in the code for clarity.
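As a rough illustration (the file path and snippet name here are invented for the example, so don’t copy them verbatim), a snippet reference in the markdown looks something like this:

```md
[!code-csharp[](httpclientfactory/samples/HttpClientFactorySample/Startup.cs?name=snippet_ConfigureServices&highlight=3)]
```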

Building the sample was a little complicated. We were just into the preview 1 of ASP.NET Core 2.1 at this stage, but some of the features of IHttpClientFactory hadn’t made it into that first preview. I was using nightly builds to experiment with the latest available functionality, but getting this working in a sample application proved complicated. I needed to match up the nightly SDK and ASP.NET Core builds to get things building and at times it would simply not build at all!

In the end we agreed that I would wait for the preview 2 release before putting too much time into the sample. Once preview 2 landed I was able to pick it up and build out the code.

Working out how to demonstrate things simply and accurately, while using code that would be suitable if copied and pasted, was a careful balance. IHttpClientFactory is made up of a lot of extension method calls to configure the client instances. I found that before long I had a long list of the various options in my Startup class. Breaking these into snippet regions meant I could include the relevant samples in the documentation and break up the method a little.

During my trip to Seattle for my first ever MVP summit I was lucky enough to meet Ryan Nowak, Glenn Condron and Damian Edwards from the ASP.NET team to chat about the feature and the documentation. During the MVP hack day I spent a bit of time refining the code sample and even got a live code review on one part of it from Glenn and Damian! Thanks guys!

Review and Refinement

Over a period of about 2 months I worked on the documentation and the sample application. It took longer than I expected mostly due to keeping up with the changes and fitting it into my free time. Finally though, this week I was pretty much done and ready for a review by the team.

A few changes were suggested including fixing up some grammar and spelling as well as some feedback on preferred styles and idioms for the sample. It was very valuable feedback and has turned my content into what I hope is some useful documentation.

You can follow the history and comments on the pull request.

Summary

I want to conclude this post with a huge thank you to the team at Microsoft. I was a little nervous when I first started out, but was quickly put at ease by everyone I worked with. Scott in particular was very patient and helpful. Luke Latham also offered some great feedback to tidy up the content and language of the documentation.

Glenn and Ryan helped with some questions about the IHttpClientFactory feature as well as offering advice for the sample.

Thanks also go out to Dylan from the App vNext team (who maintain Polly), who helped clarify some of the Polly functionality and offered some great last-minute suggestions to tidy up a few sentences. Having so many people cast an eye over the content really helped tune it and make it clear and concise.

I’ve really enjoyed the collaboration and I have personally learned a lot along the way. It’s a great feeling to have contributed to something which I hope will help many developers start using IHttpClientFactory in their code.

If you’re thinking that this journey sounds fun, I recommend you check out the contributing guide and take a look for an issue you can help with. Even small fixes and corrections are most welcome and the team are a great bunch of people to work with.

ASP.NET Core Dependency Injection – Registering Multiple Implementations of an Interface

In a previous post I covered registering generic types with Dependency Injection. This is one of the less common (and less documented) ways in which services could be registered with the Microsoft DI library. It turns out that you can do more with the DI available in the Microsoft.Extensions.DependencyInjection package than it may first appear.

Another “advanced” pattern that can be achieved is to register multiple concrete implementations for an interface. These can later be injected as an IEnumerable of that interface. In this post we’ll explore a quick example of how we can do that.

Let’s first discuss when and why you might want to do this. The example I have is based on a service I’ve been involved with previously. This service is responsible for reading messages from an Amazon SQS queue, enriching them and then saving them to ElasticSearch. Based on the property values in the message, it conditionally enriches the data. Initially we only had a couple of possible enrichers, but over time we’ve added more.

The way we decided to implement this was to define an IEnricher interface. That interface looks a little like this:
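This is a sketch; the Message type shown here is an illustrative stand-in for the real message class used by our service:

```csharp
using System;

// Illustrative message type – the real one has many more properties.
public class Message
{
    public string EventType { get; set; }
    public string IpAddress { get; set; }
    public DateTime? Date { get; set; }

    // Properties which may be populated by enrichers
    public string City { get; set; }
    public DayOfWeek? DayOfWeek { get; set; }
}

public interface IEnricher
{
    // Returns true if this enricher is able to enrich the given message
    bool CanEnrich(Message message);

    // Performs the enrichment, mutating the message
    void Enrich(Message message);
}
```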

There are two methods on the interface. The first, CanEnrich, takes the message object and determines whether this enricher can enrich the message. The second method, Enrich, then performs the actual enrichment of the message.

We can define zero or more implementations of this interface for the different enriching activities we may want to perform.

Here’s an example of an enricher:
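This is a sketch; the IGeoLookupService dependency and the exact property checks are illustrative assumptions:

```csharp
public interface IGeoLookupService
{
    string GetCityForIpAddress(string ipAddress); // illustrative dependency
}

public class FailedLoginCityEnricher : IEnricher
{
    private readonly IGeoLookupService _geoLookupService;

    public FailedLoginCityEnricher(IGeoLookupService geoLookupService)
    {
        _geoLookupService = geoLookupService;
    }

    // We can only enrich failed login messages which include an IP address
    public bool CanEnrich(Message message) =>
        message.EventType == "FailedLogin" && !string.IsNullOrEmpty(message.IpAddress);

    public void Enrich(Message message) =>
        message.City = _geoLookupService.GetCityForIpAddress(message.IpAddress);
}
```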

This enricher tries to look up the city for any failed login messages coming through our queue. It can only do this if the IP address is present.

And here’s another:
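Again, this is a sketch which reuses the illustrative Message type from above:

```csharp
public class DayOfWeekEnricher : IEnricher
{
    // We need a date on the message to be able to derive the day of the week
    public bool CanEnrich(Message message) => message.Date.HasValue;

    public void Enrich(Message message) =>
        message.DayOfWeek = message.Date.Value.DayOfWeek;
}
```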

This enricher populates a DayOfWeek property, which is then used for aggregating in ElasticSearch. It can only do this if the incoming message contains a Date.

Both of these are quite basic and contrived examples; the functionality isn’t that important here though.

The enrichers can now be registered with the ServiceCollection wherever that happens in your application:
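For example, in ConfigureServices (the singleton lifetimes here are an assumption; use whatever lifetime suits your enrichers):

```csharp
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // Both implementations are registered against the same IEnricher service type.
        // (The FailedLoginCityEnricher's IGeoLookupService dependency would need registering too.)
        services.AddSingleton<IEnricher, FailedLoginCityEnricher>();
        services.AddSingleton<IEnricher, DayOfWeekEnricher>();
    }
}
```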

It’s worth making it clear that the implementations will be added in the order they are registered. They will be returned in that same order when injected into calling code. Depending on your requirements, this may be useful and important. For this example we don’t really care what order we get them in.

To make use of these enrichers we can have them injected wherever we require them and DI is available. Since we’ve registered more than one instance, we ask the DI framework for an IEnumerable<IEnricher> which we can then enumerate over to access all implementations.

A simplified example of this in a caller would look like this:
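This is a sketch; the real consuming class does rather more than enrichment:

```csharp
using System.Collections.Generic;
using System.Linq;

public class MessageProcessor
{
    private readonly IEnumerable<IEnricher> _enrichers;

    // All registered IEnricher implementations are injected here, in registration order.
    public MessageProcessor(IEnumerable<IEnricher> enrichers)
    {
        _enrichers = enrichers;
    }

    public void Process(Message message)
    {
        // Apply only the enrichers which are able to enrich this particular message.
        foreach (var enricher in _enrichers.Where(e => e.CanEnrich(message)))
        {
            enricher.Enrich(message);
        }
    }
}
```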

Here I filter to only the enrichers which can enrich the message we are processing. Then I call the Enrich method on each one in turn.

Where this pattern proves particularly useful is if we imagine we now want to add another enricher. All we have to do is create a class which implements the interface and then ensure it is registered with DI. Now when our caller runs, that enricher will be included in the IEnumerable<IEnricher> which is injected. Our consuming code will make use of the new enricher without any further code changes.

The ASP.NET Core framework uses this same pattern in a number of places. One which I’ve covered in the past is the IHostedService interface. This allows you to define one or more “background” services to run whilst your application is alive. As with my enricher example, all you have to do is create a class implementing IHostedService and then register it with DI. When the application starts, it will fire up any registered IHostedService instances in order.
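As a rough sketch (the class name here is invented, and this shows the plain IHostedService registration available since ASP.NET Core 2.0):

```csharp
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;

public class QueueReaderHostedService : IHostedService
{
    public Task StartAsync(CancellationToken cancellationToken)
    {
        // Kick off the background work here
        return Task.CompletedTask;
    }

    public Task StopAsync(CancellationToken cancellationToken)
    {
        // Gracefully stop the background work here
        return Task.CompletedTask;
    }
}

// In ConfigureServices, registering against IHostedService is enough for the
// framework to start the service when the application starts:
// services.AddSingleton<IHostedService, QueueReaderHostedService>();
```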

Other posts in this series

ASP.NET Core Dependency Injection – How to Register Generic Types
ASP.NET Core Dependency Injection – Registering Implementations Using Delegates