ASP.NET Core Gotchas – No. 2 Some settings from launchSettings.json are now loaded by dotnet run

Following on from my first gotcha that we hit yesterday, here is another one which caught us out at the same time.

TL;DR: When using dotnet run (.NET Core 2.0), expect it to set some properties (including the port and environment) from launchSettings.json if it’s included in the Properties folder!

The issue we faced was that when running a new API using ASP.NET Core 2.0 inside a container, it was starting on the wrong port, not port 80 as we’d expected. The scenario where this can show itself is unlikely to be a common one, so hopefully only a few people will run into this exact gotcha.

In our case, prototyping the new API needed some front end involvement so we added a quick and dirty dockerfile and docker-compose.yml file to enable it to be spun up inside a container. Normally our production containers are all built on images which are based on the ASP.NET Core runtime image. We then copy in our published dlls and run the application inside the container. However, in this particular case we’d cheated a little and were using the SDK based image to allow us to build and then run the source inside a container.

In our case our dockerfile looked like this (note: do not use this in production!):

FROM microsoft/aspnetcore-build:2.0
ENV DOTNET_SKIP_FIRST_TIME_EXPERIENCE true
WORKDIR /app
COPY . .
RUN dotnet restore --verbosity quiet && dotnet build -c Release --verbosity quiet
EXPOSE 80
WORKDIR /app/src/OurNewApi
ENTRYPOINT dotnet run

The details of what it does are not too important. If you’re interested you can read my Docker for .NET Developers series to learn more. What is important here is that we are copying the entire solution folder contents into the container and later using dotnet run to start it. Seems safe enough – right?

As stated earlier, we noticed that for some reason, instead of using the default ASP.NET Core environment variable (ASPNETCORE_URLS=http://+:80), which defines the default port of 80, it was using a different port. We also noticed that the environment was showing as “Development” and not “Production”.

We then examined the console output and noticed the following information:

Using launch settings from C:\Projects\OurApi\OurApi\Properties\launchSettings.json…

We checked and indeed, the port being used was the one defined in launchSettings.json. This was a bit of a surprise since in the past that file has only been used by Visual Studio. We scratched our heads as we’d previously done something similar with an ASP.NET Core 1.0 project without hitting any issues. I started investigating and soon found a closed GitHub issue titled “Add support for launchSettings.json to dotnet run”. Reading through it, it seems that since 2.0, dotnet run will load up some settings from launchSettings.json if it finds one. This makes some sense from a developer tooling point of view as I guess there are cases where it could be useful. In our case, the fact we were blindly copying in our entire solution folder (including the launchSettings.json file) as well as starting our API using dotnet run, meant that we experienced this behaviour inside the container. It’s not something we normally face, but in this quick prototype, it showed itself.
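For context, a typical launchSettings.json generated by the project templates looks something like the following. This is an illustrative sketch rather than our actual file; the profile name, port and values will differ per project:

```json
{
  "profiles": {
    "OurNewApi": {
      "commandName": "Project",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      },
      "applicationUrl": "http://localhost:5050/"
    }
  }
}
```

With a file like this present, dotnet run binds to port 5050 and sets the environment to Development, regardless of any defaults baked into the image.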

The quick solution in our case was to include the Properties folder in our .dockerignore file, which specifies any folders/files that you do not want included in the build context. This then avoids them being copied into your image by the COPY command.

For the curious among you, this functionality is implemented inside ProjectLaunchSettingsProvider.cs in the .NET CLI repository. If the file is found then the properties from the environmentVariables section are used as well as the value of the applicationUrl property.

Long story short, be aware that since .NET Core 2.0 the “dotnet run” CLI command will look for and use a launchSettings.json file if it’s available. If you suspect this may be happening in your case, check the console output to see whether it has loaded settings from launchSettings.json.

ASP.NET Core Gotchas – No. 1 Using Environment Variables with ASP.NET Core 2.0 and Linux

We’ve been using ASP.NET Core 1.0 for some time as part of a microservices architecture we’ve developed. We run the services in production as Docker containers on Amazon ECS. We recently created a new API based on ASP.NET Core 2.0 and ran into some issues with configuration.

The first of the two issues we encountered is in cases where we use environment variables in our Docker containers that we expect to override ASP.NET Core configuration values. The ASP.NET Core configuration system allows many sources for configuration values to be defined. This includes loading from json files and environment variables. When loading configuration, each of the providers is checked in turn for configuration values. Any which define the same key as a previous configuration item are overridden with the new value. This works nicely as we can define common configuration in JSON files and optionally override this in production using environment variables.

This is exactly how we run our APIs currently. In ASP.NET Core 1.0 we could pass in environment variables to containers (in our case, using docker-compose files locally and AWS ECS TaskDefinitions in production). Configuration in ASP.NET Core supports a hierarchy of settings which allows us to define “sets” of values. For example, in our case we have a top level section called DistributedCacheConfig and within that there are three settings to control various things all related to caching.

When overriding these settings using environment variables we previously used the colon separator to define the layer of the hierarchy the value targets. One such environment variable would look like this…

DistributedCacheConfig:Enabled=true

When read and mapped to the ASP.NET Core configuration system this would override the enabled value for the DistributedCacheConfig even if a previous JSON file had set it to false. This even worked when deployed to production where we could configure AWS to start our containers with the necessary environment variables when launching new instances.

When setting up a new API using the latest ASP.NET Core 2.0 version we noticed an issue when deploying to AWS. The settings we had defined in the TaskDefinition (which controls the environment variables containers start with) were not being applied and the settings from the JSON files were still being used. We then tested this locally by starting up a container from the ASP.NET Core 2.0 docker image and again noted that the environment variables were not overriding the JSON values as expected.

I spent some time investigating this and was able to find a GitHub issue on the Configuration repository which mentions the possibility of using a double underscore (__) as the separator between the layers on Linux. I made the change to our environment definition and immediately it was working again. I’d personally never been aware of the option to use this separator.

With the problem hopefully solved I set about investigating what had changed. Certainly the colon separator was working fine on our older projects. Finally I noticed that by default the ASP.NET Core 2.0 Docker image returned when asking for the “2.0” tag is based on Debian Stretch. With ASP.NET Core 1.x it was based on the older Debian Jessie version. I started to wonder if this might explain the change in behaviour. I quickly modified the dockerfile we were using to target the “2.0-jessie” tag, changing the environment variable back to the colon separated version as well. When I ran that as a container, the value was once again set using the environment variable as expected.

My guess (although I’ve not dug any deeper) is that between the two Debian versions, something has changed in how the colon separator is handled for environment variables. To validate this assumption I modified my application to spit out the environment variables at startup.
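The diagnostic itself is only a few lines; something along these lines at startup produces the output below (a reconstruction of what I used, not the exact code):

```csharp
using System;
using System.Collections;

foreach (DictionaryEntry entry in Environment.GetEnvironmentVariables())
{
    Console.WriteLine($"key = {entry.Key} - value = {entry.Value}");
}
```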

When running on Stretch – Environment.GetEnvironmentVariables() returns the following console output:

web_1 | key = HOME - value = /root
web_1 | key = TestSetting101 - value = Something
web_1 | key = ASPNETCORE_PKG_VERSION - value = 2.0.3
web_1 | key = NODE_VERSION - value = 6.11.3
web_1 | key = DOTNET_SDK_DOWNLOAD_SHA - value = 74A0741D4261D6769F29A5F1BA3E8FF44C79F17BBFED5E240C59C0AA104F92E93F5E76B1A262BDFAB3769F3366E33EA47603D9D725617A75CAD839274EBC5F2B
web_1 | key = NUGET_XMLDOC_MODE - value = skip
web_1 | key = PWD - value = /app/TestingConfiguration
web_1 | key = DOTNET_SKIP_FIRST_TIME_EXPERIENCE - value = true
web_1 | key = ASPNETCORE_URLS - value = http://+:80
web_1 | key = HOSTNAME - value = 44d9a86fba25
web_1 | key = DOTNET_SDK_DOWNLOAD_URL - value = https://dotnetcli.blob.core.windows.net/dotnet/Sdk/2.0.3/dotnet-sdk-2.0.3-linux-x64.tar.gz
web_1 | key = PATH - value = /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
web_1 | key = DOTNET_SDK_VERSION - value = 2.0.3

When running on Jessie – Environment.GetEnvironmentVariables() returns

web_1 | key = PWD - value = /app/TestingConfiguration
web_1 | key = TestSetting101 - value = Something
web_1 | key = DOTNET_SDK_VERSION - value = 2.0.3
web_1 | key = PATH - value = /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
web_1 | key = NUGET_XMLDOC_MODE - value = skip
web_1 | key = DOTNET_SDK_DOWNLOAD_SHA - value = 74A0741D4261D6769F29A5F1BA3E8FF44C79F17BBFED5E240C59C0AA104F92E93F5E76B1A262BDFAB3769F3366E33EA47603D9D725617A75CAD839274EBC5F2B
web_1 | key = MySettings:Setting1 - value = From_DockerCompose
web_1 | key = HOME - value = /root
web_1 | key = ASPNETCORE_URLS - value = http://+:80
web_1 | key = HOSTNAME - value = 0262ce3069ae
web_1 | key = ASPNETCORE_PKG_VERSION - value = 2.0.3
web_1 | key = DOTNET_SDK_DOWNLOAD_URL - value = https://dotnetcli.blob.core.windows.net/dotnet/Sdk/2.0.3/dotnet-sdk-2.0.3-linux-x64.tar.gz
web_1 | key = DOTNET_SKIP_FIRST_TIME_EXPERIENCE - value = true
web_1 | key = NODE_VERSION - value = 6.11.3

You can see here that on Stretch the variable with the key MySettings:Setting1 is not even returned. So this explains why it’s not available in the configuration.

While I’d like to know what actually changed that affected the behaviour between the two OS versions, I’ll have to leave that as a mystery. However, the advice here is that if you plan to run on Linux, it’s probably safest to use the double underscore separator when defining any environment variables which works on either OS. Perhaps we were simply lucky in 1.0 that it worked and we should have been using double underscore all along.

MVP Logo

My Microsoft MVP Journey Becoming a Visual Studio and Developer Technologies MVP

On November 1st I got a very nice surprise in my inbox from Microsoft. I was being awarded an MVP in the ‘Visual Studio and Development Technologies’ category. This is an achievement that I am extremely proud of and very excited about. It’s an honour and honestly quite daunting to join such an amazing group of talented community leaders and experts.

A few people have since asked me about the process of becoming an MVP so I thought it might be helpful to share my experience here.

My Background

Before diving into the specifics of the MVP process I thought it would be worth giving a little background of my career. I’ll try to keep this short by skipping to the salient points, however I think it’s helpful to share a little of my background, which has not always been in development.

I’ve worked in the IT industry since around 2000/2001 when I got my first job as a desktop technician for the NHS in my college (high school) holidays. Even before that I was “into computers”, building my own PC from components ordered online and dabbling in programming with QuickBasic and later VB6. I continued my IT career with a job as a desktop and server engineer for a manufacturing company. Whilst doing that role I suggested building an intranet for some of the internal documentation. I had some limited self-taught experience with HTML at the time so I learned what I could from books and online. Later I taught myself classic ASP in order to build a spare parts ordering system for use within the manufacturing company and its partners. Then ASP.NET was released and I started playing around and learning to work with that from books and online articles. Around the ASP.NET 2.0 timeframe my employer needed to improve an in-house Access database system for managing hydraulic test results for their products. I rebuilt that system using SQL Server with a web (WebForms) frontend. I think at this point I was still using VB.NET!

Later my role was outsourced and I was transferred to work for the outsource provider. There I continued as a desktop and server engineer but due to the stricter role boundaries I was no longer able to perform any development work. I kept my skills up with my own side projects, focusing on the current releases of ASP.NET at the time and starting to teach myself C#. I also took on a little bit of side work producing a bespoke website and CRM for a consulting firm in my own time.

I developed my career at the outsource provider over the years there, becoming a UK engineer lead and eventually moving into a role as a Service Delivery Executive, managing support in Europe and Africa for one of our large customers. This was a move away from the technical aspects and into management and customer relations. I enjoyed the challenges but as time went on the resources were stretched and the role became extremely stressful and unrewarding. I stopped enjoying what I was doing and found the 50+ hour work week was absorbing any spare time I had to relax and dabble with development.

I decided to make a change. I realised I was enjoying the experience of building sites with ASP.NET and learning as much as possible and so I applied for a few developer roles. I was very conscious that I had no idea how my self-taught developer skills would stack up in the real world. However I interviewed for Madgex (my current employer) who clearly found my answers suitable as I was offered a job there! I started at the level of “developer”, so between junior and senior. This reflected fairly my skill level at the time. I immediately loved the role and found my days were far more enjoyable. I was no longer stressed and was able to enjoy having some proper free time outside of work. It was a change in direction that I am very pleased I made.

I have now worked at Madgex for over 2 years and in that time have been constantly learning. For sure there were some elements of my developer skill set that were lacking, having been self-taught, but I quickly worked to fill those gaps in my knowledge. The team I joined was amazing and I learned so much from the experienced developers there. In my personal time I spent a lot of the free time I had learning more, watching videos and keeping up with the developments around the new ASP.NET Core framework. About a year or so ago I was lucky enough to be able to put that into practice as we started developing a new product at Madgex. I’ve since been promoted to senior developer and within the last month taken on some additional responsibilities as a developer community lead.

Nomination

The journey to becoming an MVP begins with a nomination at https://mvp.microsoft.com/. In my case I received my first nomination in January 2017 from MVP James Chambers whom I had worked with quite closely on the Humanitarian Toolbox allReady project. James was aware of my blog and activities contributing to OSS and felt I should be put forward for the MVP.

As a nominee, the first you hear about this is via an email from Microsoft, stating that you’ve been nominated. You are given a link to complete a profile in their system.

Recent contributions

As part of the profile completion process you are asked to provide a detailed list of your community contributions. This includes things like any events you have spoken at, blogging, videos you’ve shared and OSS contributions. Retrospectively working out what I’d done and when was no easy task. Had I been expecting a nomination it would have been helpful for me to have kept a record for the last year of activities. If you’re involved in community and hope to one day be nominated I would recommend that you keep a list of key dates for your community activities.

Once you have completed your profile and contributions you submit the form and get a confirmation email that it will go into a review process at Microsoft.

The long wait

In my case, after completing my profile it was a long wait before anything happened, many months in fact. During that time I assumed that the nomination and profile must be under scrutiny within Microsoft. Every now and again I went in to update the contributions against my profile. The process is a bit of a black box and I’m not really sure if my nomination was forgotten about or that it simply takes some time to get a response.

In August I happened to be at a Microsoft community event being run by the UK Community Program Manager / MVP Lead, Claire Smyth. Since I had the opportunity I asked about the MVP process and mentioned my nomination. We followed up by email and Claire was able to locate the nomination in the system. At this point we arranged a brief call to discuss the MVP along with support Microsoft were offering to new user groups. I happened to have just set up my new user group, .NET South East, at this time so Claire had kindly offered me some support and advice.

This was a useful chat and Claire was able to explain a little more about the things that the MVP team are looking for in nominees. It’s mostly that there is a varied and comprehensive range of contributions that show a positive community benefit. I was able to discuss my community activities in a little more detail. Claire also explained that after the nomination is approved it usually also goes to the respective product teams for them to review the contributions and give their feedback. In my case I’d had a little contact with some of the ASP.NET team and I let Claire know some of the people I’d previously interacted with.

At the end of the call Claire confirmed she would look over the information and see if the nomination should go forward.

Further nominations

During the weeks after our call I was fortunate enough to receive two further nominations for the MVP award, including one from Jon Galloway at Microsoft. This last one I feel had particular weight in tipping me over to get the award. Since the review process includes some input from the product teams as to which nominees are ready for the award, having a nomination from someone on the ASP.NET team was likely very helpful! If you receive additional nominations you seem to get a whole new profile to be completed. I dropped an email to Claire who I believe was able to link the nominations together in the system. Just to be sure, I completed the profile on the most recent nomination link. By this point I’d done some more talks so I was able to add some extra contributions to my list.

After this, things went quiet again while the process continued behind the scenes. It was a busy period and I forgot mostly about the nominations for a while. Then, on the 1st of November after an evening meal with some friends I happened to glance at my phone when heading to the car. I had a few emails and I scanned over the list. One particular subject line jumped out at me – “Congratulations 2018-2019 Microsoft MVP!” My excitement mounted further as I opened the email and read the first paragraph!

MVP Award Email

I was honestly shocked and very excited to read this email. I immediately showed it to my wife who was also very excited. I’d explained the MVP and mentioned my nominations for it to her earlier in the year and she was aware of what a big thing it was to be awarded the MVP.

What do you get?

In the congratulations email you are sent details of how to start accessing some of your MVP benefits via the MVP site. There’s a lot to take in. I started by ensuring my MVP profile was correct so that it would appear on the MVP site. I then looked at things such as the MSDN Visual Studio Enterprise subscription which is a very handy subscription to get! After agreeing to a Microsoft NDA you are also able to access special mailing lists that include members of the product teams. I joined the ASP.NET mailing list and the Azure one as those are most relevant to what I do day to day. You can also sign up for a Microsoft Yammer group that gives you access to chat with other MVPs and Microsoft personnel. There are still lots of things I need to look into, as MVPs have access to various licenses and products as part of their benefits package. For me though, the access to the wider product teams and fellow MVPs is one of the best benefits.

As well as the access and licenses mentioned above the other thing you can expect as a new MVP is an award pack posted to you from Microsoft. I got an email stating that mine had been posted and would be with me in about one week. I’d seen photographs of the award pack from other MVPs via Twitter so knew roughly what to expect. Even so, on the day of its arrival to my home it was exciting to unbox the contents. Inside the award pack you get a very solid physical trophy made of glass. It’s much more substantial than I’d expected and looks really great. You also get a certificate, MVP ID card, lapel pin and some MVP stickers. It’s a really nice pack and it’s nice to have something physical to represent the award.

Microsoft MVP Award Pack

MVP Summit

The other very exciting opportunity for MVPs is the chance to register to attend the MVP Global Summit which is run annually at the Microsoft campus in Redmond, near Seattle. My MVP award was perfectly timed, just before the registration for this event opened. On the day of registration I was eager to sign up to attend. It’s a hugely valuable opportunity to visit the Microsoft campus and to meet and interact with the Microsoft product teams directly. The registration process itself was a bit of a nightmare as despite jumping on the minute it opened, the system was clearly unable to handle the load and was crashing. I spent nearly two hours trying to complete the hotel registration phase. During that time a number of MVPs were experiencing problems and tweeting about the issue. One MVP provided a couple of phone numbers we could try to get support. Not surprisingly the phone system also struggled under the load but eventually I was able to get through to someone. They were very helpful and in very little time managed to get me registered for the event and the hotel of my choice. I’m looking forward to my first trip to the US and the chance to meet many of the MVPs and Microsoft staff I follow on Twitter.

Next Steps

Being awarded the MVP is a very exciting development in my career. Over the last two years I have been very focused on blogging and sharing as much as I can with the amazing developer community. I learn huge amounts from other community contributors and it’s wonderful to play my small part in that and be recognised with this prestigious award for that work. It’s a little intimidating too, to be joining such an expert group of fellow MVPs. I can’t help but feel a degree of impostor syndrome at the thought. I hope to live up to the award by continuing to contribute within the community. I want to keep up blogging, speaking, running my user group and OSS contributions as much as possible. There are a few other projects and ideas that I hope to find the time for, which should contribute further to the ASP.NET community in particular. I owe a huge thanks to the community and various specific people along my journey who have supported what I’m doing, offered advice and nominated me for this award. A big thanks to everyone who has helped me along the way!

.NET South East November 2017 Meetup With guest Michael Newton

Last night we held our November .NET South East meetup at Madgex HQ with the amazing Michael Newton speaking. Here’s a brief summary…

Intro and news

At 7pm I opened the evening with my introduction, including thanking our fantastic sponsors. I then went on to discuss some of the news items I had gathered. I was a bit short on details for the items I’d picked due to being out sick (cough) for the week prior to the event. I managed to put together notes on two headlines an hour or two before the event started.

Visual Studio Live Share

This first item was announced at the recent Microsoft Connect() event in New York. It’s an early announcement of a new feature being worked on for Visual Studio 2017 and VS Code.

The idea with Live Share is that teams of developers will be able to interactively collaborate on code directly from their editor/IDE. A developer can start a shared session on their machine which will generate a link that provides access to a shared, private workspace. They can send this link to their collaborators who will then be able to connect and see the code in that workspace. There is no need to explicitly clone the code or install any dependencies.

Once connected, your partner will be able to view your code and even see where your cursor is. You can select code and that selection will be reflected in your partner’s editor. Any changes being made will be visible to both parties.

Taking it a step further, you can even share a debug session that your partner can participate in. Your partner will be attached and will see a live stack trace and the values of locals, and can even hover over variables to see their current value.

This looks like a really exciting way to enable remote pair programming and perhaps as a way for people to assist contributors of open source projects for example. No release date has been given but it’s certainly one to watch.

https://code.visualstudio.com/blogs/2017/11/15/live-share

https://channel9.msdn.com/Events/Connect/T254

Nullable Reference Types in C# 8.0

This item came from a recent MSDN blog post stating that a trial version of the planned C# 8.0 nullable reference types feature is now available. Microsoft want to capture feedback on the current proposed design of this language feature to help finalise something that works well for the developer community.

The ideas being proposed are to add the concept of non-nullable reference types to the language with a view to removing bugs and making it easier to express intent in the code. Nulls should not exist except in cases where the domain design makes them reasonable.

In C# 8.0 the plan is to introduce this and as a result, all current reference types will be assumed to be non-nullable as the default. Any cases where a reference type is assigned a null will then be marked as warnings by the compiler.

To enable developers to use nullable reference types, we’ll also be able to add a question mark (?), as we can currently with value types, to identify any reference types which should be considered nullable. In those cases it will likely result in many warnings from the compiler where null checks are missing before dereferencing those objects. This is expected to help catch and resolve potential bugs that could result in NullReferenceExceptions being thrown in your code.

The compiler warnings can be disabled if you do not wish to be warned of these issues, and the existing IL code produced by the compiler will not change.

You can read more about this announcement here: https://blogs.msdn.microsoft.com/dotnet/2017/11/15/nullable-reference-types-in-csharp/

Michael Newton – Making Distributed Systems in .NET Easier

Michael Newton presenting at .NET South East

This month we were joined by local consultant and trainer, Michael Newton. Michael started by talking a little about good API design. He explored a few examples where the API design is critical to providing a clear way to work with a library. Just as importantly these APIs need to aim to help protect the user from making bad / incorrect decisions.

He cited examples from the NodaTime API, which has a complex API surface but is also tackling a complex problem. However, it does ensure that it’s hard to shoot yourself in the foot, as one example showed. In that case you’d have to ignore the fact that you had made calls to three methods you did not understand in order to get things wrong. This API therefore forces a reasonable level of understanding of the problem before you can use it. Once you do have that understanding, it makes sense and gives fine control of the intent you have when working with the complexities of date and time across time zones.

Michael went on to show us EasyNetQ, a simple .NET library for working with RabbitMQ. This library was originally conceived by Mike Hadlow when he was working for 15Below in Brighton. The original .NET client library from RabbitMQ was extremely complex and required lots of knowledge before you could consume it. The EasyNetQ library sits on top of that and abstracts away the complexity with a much more simplified API. This comes at the cost of some functionality, but for the majority of general use cases it provides everything people need. It favours simplicity over a complex feature set.

EasyNetQ worked well for general scenarios, but as the need for more complex workflows began to appear it was clear that more was needed. This led to the development of an EasyNetQProcessManager library by Michael while working for 15Below. This came after Michael read Enterprise Integration Patterns by Gregor Hohpe and Bobby Woolf. Michael set about designing a library to sit over EasyNetQ.

November audience for Michael Newton at .NET South East

It was interesting to hear these thoughts and to set the scene for the next section of the talk where Michael shared his alpha code for a new version of a process manager called RouteMaster. Here Michael is working to develop a slick API that helps protect the user of the library from making mistakes, where possible making use of the .NET type system to do so. It’s very functional in nature and the main code is written in F# with a C# API having been completed in the afternoon prior to this meetup (so very, very alpha)! This library aims to make developing distributed workflows a relatively simple affair.

Along the way Michael shared some useful tools and info. For example, he showed us briefly that he uses a Docker image for a product called Adminer to provide a management UI over the underlying Postgres database. I’ll be checking this one out for sure. Another piece of advice was using FsCheck for property-based testing and how this can be a great way to fully test your code using many randomly generated inputs.

It was a really interesting evening so a huge thanks to Michael for presenting his content to us.

Prize Draws

With the end of the evening closing in, before heading off to the pub we drew the winners of the prizes from our fantastic sponsors for the event. The prizes we had to offer were:

JetBrains – One year individual subscription to any single JetBrains Toolbox product

Manning – Free eBook

elmah.io – 6 month Small Business license

PostSharp – License to PostSharp

Again I used the WPF app created by Dan Clarke, who organises the .NET Oxford meetup. This time, however, I trialled pre-drawing names just before the event started to speed up the process of prize winner selection. The rules, as with the last event, were:

a) names are added from the RSVP list (as at about 1-2 hours before the event)
b) if the name drawn is not in attendance, we redraw.

Next events

We have some great speakers lined up for 2018. Our December meetup is currently under consideration. Our planned speaker has unfortunately had to withdraw so I’ll be assessing fall-back options.

For January I am working on a plan which is a long shot but will be announced if I can align a few puzzle pieces in time.

February 2018 – Ian Cooper
https://www.meetup.com/dotnetsoutheast/events/244109542/

March 2018 – Joe Stead
https://www.meetup.com/dotnetsoutheast/events/244109560/

April 2018 – Jon Smith
https://www.meetup.com/dotnetsoutheast/events/245021956/

Links

http://mavnn.co.uk/
https://nodatime.org/
http://easynetq.com/
http://www.enterpriseintegrationpatterns.com/
https://hub.docker.com/_/adminer/
https://github.com/fscheck/FsCheck
https://blog.mavnn.co.uk/easynetq-process-management/
https://github.com/RouteMasterIntegration/RouteMaster

.NET South East October 2017 Meetup With guest Rabeb Othmani

Last night we held our third .NET South East meetup at Madgex HQ with special guest Rabeb Othmani. Here’s a brief summary of the evening…

Preparation

Planning for this event felt a lot easier than for past ones, as I have now built up a few to-do lists and gained some experience with setting everything up. As usual I did a bit of marketing for the event via Twitter, hoping to spread the word.

On the evening I finished work around 4:30pm to begin setting up the room and preparing things like the snacks and drinks. We have a pretty well-oiled process now and with the help of Ricky our IT guru we had the room prepared in about 30 minutes. The plan for the evening was to stick to the process we’d developed at our prior meetup.

We had our attendees sign in down in the foyer, with two volunteers, Chris and Jenny, very kindly helping out this month. Again we placed our food and drinks networking area in our main reception space so that there was more room for people to chat and socialise. Toby and Sally, two more Madgexians, kindly helped meet and greet people from the lifts. This month I was pleased that everything was ready well in advance, and I was actually able to spend a bit of time greeting and speaking to people as they arrived. This was something I’d been unable to do at the prior events, where I was running around getting the final things sorted.

In the end we had 20 attendees for the evening, a bit of a drop-off from our first two events. I had half expected this, since the novelty has worn off for some. We did have some new faces though, so it was nice to see more members finding their way to us.

Intro and news

At 7pm I opened the evening with my introduction, including thanking our fantastic sponsors and then went on to discuss some of the news items I had gathered for this month…

Quantum Computing

The first item I discussed was the announcement at Ignite 2017 that Microsoft expect to release a quantum computing programming language by the end of this year. Microsoft are heavily invested in research into building a working quantum computing device and would like to start skilling up developers to work in the quantum world. The new language is as yet unnamed (my guess is Q#!) and will include full Visual Studio integration, including a debugging experience. A local simulator will be available that can simulate a 30-qubit device, or an Azure-based 40-qubit simulator can be used. It’ll be interesting to watch how this develops, as quantum computing could truly change the way we think about programming.

Microsoft Quantum Computing

Ars Technica Article on MS Quantum Computing

.NET 4.7.1 built in support for .NET Standard 2.0

A smaller news item, but worth a quick mention: this story refers to the Microsoft announcement that version 4.7.1 of the .NET Framework includes all of the files necessary to consume .NET Standard 2.0 libraries. While 4.6.1 introduced compliance with .NET Standard 2.0, it required some additional files to be deployed and, in some cases, binding redirects to be used.

.NET 4.7.1 built in support for .NET Standard 2.0 Announcement

UWP Supports .NET Standard 2.0

A related story was another Microsoft announcement that a major update to UWP means it now supports .NET Standard 2.0. This introduces an additional ~20k APIs to the platform which developers can now take advantage of. It should also make sharing code between UWP and other platforms much easier. To use this update you need Visual Studio 2017 version 15.4 and need to target the Windows 10 Fall Creators Update.

UWP Supports .NET Standard 2.0 Announcement

Rabeb Othmani – Welcome to the age of conversational interfaces

Rabeb Othmani speaking about conversational interfaces

Rabeb gave us a great talk that really set my mind off thinking about building bots! She talked about the coming of age of conversational interfaces via devices like Google Home, Amazon Echo and our smartphones.

She described the history of the changing development landscape as users move to consuming content on smaller devices and via different interfaces. We moved from the mouse on desktop devices, to touch on tablets and smartphones, and we’re now entering the age of voice, where we may never physically interact with the device at all.

Recent advances in technologies such as AI and machine learning are enabling us to develop more intelligent applications, while improvements in voice recognition, language interpretation and text-to-speech have also driven the industry forward, moving us towards more and more voice-based interfaces.

Digital assistants such as Google Assistant, Siri and Cortana understand more about the context in which we are operating and can tailor responses and information to our needs.

Voice as an interface is becoming popular in part due to its convenience and speed. With text we need to locate our device, unlock it, open an app, type our request and wait for a response. With voice, we can interact very quickly without any need to physically hold the device. We can interact on the move, or in situations such as in the car when our hands are not free to use a device. Voice can be very simple when done right, as there are no UI issues in the traditional sense. However, the application or device must be able to understand and interpret the intent of the user.

Rabeb listed some key points to consider when building voice-based interfaces:

  • Make it smart
  • Use language users can understand
  • Consider the capabilities of your technology
  • Consider the structure of the information – for example dates: should you infer a year if the user doesn’t say one?
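That last point about dates is a nice, concrete example. One reasonable policy when a user says a day and month with no year is to assume they mean the next occurrence of that date. A small sketch, where the "assume the next occurrence" policy is my illustration rather than anything from the talk:

```python
from datetime import date

def infer_date(day: int, month: int, today: date) -> date:
    """Infer a full date from a day/month with no year spoken."""
    candidate = date(today.year, month, day)
    # If that date has already passed this year, assume the user means next year.
    return candidate if candidate >= today else date(today.year + 1, month, day)

print(infer_date(5, 3, today=date(2017, 10, 11)))   # 2018-03-05: March has passed
print(infer_date(25, 12, today=date(2017, 10, 11))) # 2017-12-25: still ahead
```

Even this tiny example hides edge cases (the 29th of February, time zones), which is exactly why the structure of the information deserves thought up front.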

When building for devices like Alexa you build skills, which are units of conversational intelligence. You must register a skill before you can use it from your device. Skills invoke a bot in the cloud, which does the processing for your application. Rabeb demoed the Microsoft Bot Framework SDK in Visual Studio and a simple bot which would call her phone using the Nexmo APIs.
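The flow Rabeb described boils down to: the device resolves an utterance to an intent (plus any slot values), and the bot in the cloud dispatches that intent to a handler. Here is a toy sketch of that dispatch step; the intent names and handlers are made up for illustration, and this is not the Bot Framework or Alexa SDK API:

```python
def handle_call_me(slots):
    # In Rabeb's demo, the real handler at this point called out via the Nexmo APIs.
    return f"Calling {slots.get('number', 'your phone')}..."

def handle_greeting(slots):
    return "Hello! How can I help?"

# Map each recognised intent to the code that fulfils it.
HANDLERS = {
    "CallMe": handle_call_me,
    "Greeting": handle_greeting,
}

def process_intent(intent: str, slots: dict) -> str:
    handler = HANDLERS.get(intent)
    if handler is None:
        return "Sorry, I didn't understand that."
    return handler(slots)

print(process_intent("Greeting", {}))  # "Hello! How can I help?"
```

The hard part, of course, is everything before this step: turning free-form speech into a reliable intent and slots, which is what the platform services do for you.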

Rabeb Othmani at .NET South East

It was a great introduction to the world of bots and voice interfaces. I have been inspired to add it to my list of things to try and I hope a few others will do the same. This is exactly why I believe user groups are so great. In a short evening you can quickly learn about a new technology with enough to get you excited and start you on a path of discovery. A big thank you to Rabeb for travelling down from Bristol to spend the evening with us.

As always, a big thanks too to the amazing volunteers from Madgex who helped me setup and run the evening and to all of the attendees for making time to join us. A final thanks goes to our sponsors for the evening who offered some great prizes and support of our user group.

Prize Draws

With the end of the evening closing in we drew the winners of the prizes from our fantastic sponsors for the event. The prizes we had to offer were:

JetBrains
One year individual subscription to any single JetBrains Toolbox product

Progress
DevCraft Complete License code

Manning
Free eBook

elmah.io
6 month Small Business license

PostSharp
License to PostSharp

Again we used the WPF app created by Dan Clarke, who organises the .NET Oxford meetup. The rules, as with the last event, were:

a) names are added from the RSVP list (as at about 1 hour before the event)
b) if the name drawn is not in attendance, we redraw.

Next events

We have some great speakers lined up for the next couple of months, and I’m working with a few people on plans for next year too.

.NET South East November 2017 – Michael Newton
Making Distributed Systems in .NET Easier

.NET South East December 2017 – David Arno
Roslyn Analysers

2018 events to be announced soon!

Call for speakers

I’d love to get a range of varied content and speakers to present at our user group. We have a nice pipeline for the coming months but those months will fly by very quickly. If you’d be interested in speaking at a future event we’d love to have you. Please get in touch via the contact form on this blog or ping me on Twitter and we can discuss availability and topics.

I’m really keen to draw as many speakers as possible from our local community too, so please let me know if you might be interested in speaking. Perhaps you have presented a talk internally and could open it up to a wider audience. I highly recommend speaking as a way to develop professionally. I’m happy to offer advice to new speakers and help where I can.

If you don’t like the idea of public speaking, you are not alone. Please check out my own story in my recent two-part blog series – Part 1 of How to not hate public speaking.

Links

https://www.nexmo.com
https://rabebdiaries.wordpress.com/
https://dev.botframework.com
https://developer.microsoft.com/en-us/cortana