I’m not 100% sure where this series will take me, but I would like to cover working with things like Span<T> and Memory<T>, using System.IO.Pipelines, the new JSON parser in .NET Core 3.0, and any related topics that crop up.
Why Write High-Performance Code?
I’ve always been interested in the internals of things like the MVC framework and .NET Core. By understanding how the Microsoft teams have designed and built the framework, I feel I improve my own coding ability and can make better use of the product. Performance has been a focus for the teams since .NET Core and ASP.NET Core 1.0. In the last year or so it has been given an even greater priority, so much so that new types and runtime features have been introduced specifically to support high-performance scenarios. I’ve watched some of the great names from the .NET community speaking and posting about performance, and I decided it was high time I took a deeper look too.
I follow a lot of great people from Microsoft and the .NET community on Twitter and get a daily stream of inspirational content. Seeing posts by other developers who are making ASP.NET Core faster and more efficient, and using new language and framework features in their work, is very motivating. As a self-taught developer, I often feel it’s up to me to learn more about anything which I don’t understand. Many of the new performance-related features are a mystery to me. I understand at a conceptual level that they exist and that they are improving software, but I’m often not sure why and, more importantly, how they do this.
Anything I don’t understand, I like to learn about, so I’ve spent time reading blog posts on performance topics, watching videos and attending talks at conferences. The full list is far too long to include here, but some of the key inspirations for me are:
Writing High-Performance .NET Code by Ben Watson
Pro .NET Memory Management: For Better Code, Performance, and Scalability by Konrad Kokosa
Blogs and talks by Adam Sitnik
Posts by Marc Gravell
Tweets by many members of the .NET community, including Ben Adams and David Fowler
Microsoft blogs by Stephen Toub
This is by no means an exhaustive list. I’ve consumed so many great posts and information from the wider community. I’m hugely grateful we have so much content out there!
What is Performance?
When I talk about high performance in this series, I’m referring to two main concepts. The first is making the code run faster so that operations take less time to complete. In the context of a web application, this could mean faster page load times or faster API responses. For service-worker-style applications which process some incoming data in order to generate an output, it relates to the overall throughput of processing. The second important factor, from my perspective, is reducing memory usage and allocations. These two concepts converge in my mind as “doing more, with less”.
A typical example of where I see performance being a focus for my day-to-day work is in data processing workflows. A lot of my time is spent developing functionality which supports ingesting some data, usually via an AWS queue, and performing some work based on that message. Many of our services have grown and developed over time and today are processing large volumes of data. One such example at work is a queue processor which handles around 17-20 million messages per day. After reading the message, a workflow processes the data, validating it, enriching it and shaping it ready for storage in S3 and an Elasticsearch cluster. This process isn’t hugely complex, but due to the volume, we have periods where we have to horizontally scale our container instances to ensure we continue to achieve the desired throughput.
Can we be more efficient, can we reduce the processing time and memory consumption? I’m absolutely sure we can.
Often these two pillars of performance are intrinsically linked, and affecting one can affect the other. Reducing memory allocations in code, for example, can reduce GC load and the pauses caused by the collection process, which in turn may improve overall speed.
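As a small, concrete illustration of the kind of allocation reduction I mean, here’s a minimal sketch (the method names and the CSV-style input are my own invention for this example) showing how slicing a ReadOnlySpan<char> avoids the intermediate string that Substring would allocate on every call:

```csharp
using System;

public static class AllocationExample
{
    // Allocating approach: Substring creates a new string on the heap
    // for every call, adding GC pressure in a hot path.
    public static int ParseWithSubstring(string line)
    {
        string firstField = line.Substring(0, line.IndexOf(','));
        return int.Parse(firstField);
    }

    // Low-allocation approach: AsSpan + Slice produce a view over the
    // existing string's memory; no intermediate string is created.
    public static int ParseWithSpan(string line)
    {
        ReadOnlySpan<char> span = line.AsSpan();
        ReadOnlySpan<char> firstField = span.Slice(0, span.IndexOf(','));
        return int.Parse(firstField); // int.Parse accepts spans since .NET Core 2.1
    }

    public static void Main()
    {
        const string line = "42,hello,world";
        Console.WriteLine(ParseWithSubstring(line)); // 42
        Console.WriteLine(ParseWithSpan(line));      // 42
    }
}
```

In a method called once, the difference is irrelevant; in a loop handling millions of messages, those avoided string allocations translate directly into reduced GC work.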
The Premature Optimisation Problem
It’s worth me addressing the elephant in the room that’ll likely become a point of debate for many as I progress through this series. Am I prematurely optimising and spending too much time in the depths of complex, performance-focused code? That may be the case in some of my examples. I’m conscious that the steps I am using often take longer to code, are less readable in some cases and therefore add debt should the code need changes in the future. I’m attempting to push the limits in my experiments in order to find the right place to draw the line.
I’m not a big fan of the term premature optimisation as it is sometimes thrown out too quickly in discussions and can lead to performance being ignored entirely. I strongly believe performance should be a feature of all code we write to a greater or lesser degree. Deciding how important it is, and what level to focus on should be a discussion upfront in story planning. Teams should, in my opinion, understand the requirements of each feature they are building. They should review the expected use and longer-term growth that is anticipated in order to properly discuss the areas which could impact on performance.
Sometimes this may mean that no specific actions are needed, at other times it may identify huge future scaling needs that should be considered early on so that early steps are taken to avoid a complete rewrite in the future. When I speak about performance here, it could be as simple as ensuring that queries made via Entity Framework use the AsNoTracking support to reduce overhead or it could mean custom parsing of byte data to avoid allocations. The scope of what level is appropriate should be decided by an up-front early discussion.
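To make the AsNoTracking case concrete, here’s a hedged sketch of what I mean by a low-effort performance win. The OrdersContext and Order types are hypothetical, invented purely for illustration, and a real project would need the relevant EF Core NuGet packages and a configured provider:

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

// Hypothetical context and entity, purely for illustration.
public class OrdersContext : DbContext
{
    public DbSet<Order> Orders => Set<Order>();
}

public class Order
{
    public int Id { get; set; }
    public string Status { get; set; } = "";
}

public static class OrderQueries
{
    // For read-only queries, AsNoTracking tells EF Core to skip creating
    // change-tracking snapshots for each materialised entity, reducing
    // both CPU work and memory overhead for the query.
    public static Task<List<Order>> GetPendingAsync(OrdersContext context) =>
        context.Orders
            .AsNoTracking()
            .Where(o => o.Status == "Pending")
            .ToListAsync();
}
```

The trade-off is that the returned entities are detached, so this only suits queries where you won’t modify and save the results.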
I’m not advocating that everyone immediately spend weeks of time rewriting existing code to use Span<T> and many of the other shiny .NET Core features. What I am advocating is an awareness of the tools that exist to write fast, low-allocation code in order to inform decisions on when it is appropriate to utilise them. If you have code or services which are struggling to keep up, schedule some time to review the problem. Benchmark and profile the application to learn more about what the problem is and where appropriate, begin to use some of the new tools to improve the code and stabilise the service.
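As an example of what “benchmark and profile” can look like in practice, here’s a minimal BenchmarkDotNet harness. It assumes the BenchmarkDotNet NuGet package, and the scenario of extracting a field with Substring versus AsSpan is my own illustrative example, not a recommendation for any specific codebase:

```csharp
using System;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

[MemoryDiagnoser] // also report bytes allocated per operation
public class FieldParseBenchmarks
{
    private const string Line = "42,hello,world";

    // Baseline: allocates an intermediate string per invocation.
    [Benchmark(Baseline = true)]
    public int WithSubstring() => int.Parse(Line.Substring(0, Line.IndexOf(',')));

    // Candidate: slices a span over the existing string instead.
    [Benchmark]
    public int WithSpan()
    {
        ReadOnlySpan<char> span = Line.AsSpan();
        return int.Parse(span.Slice(0, span.IndexOf(',')));
    }
}

public static class Program
{
    public static void Main() => BenchmarkRunner.Run<FieldParseBenchmarks>();
}
```

Running this in Release mode produces a table of mean timings and, thanks to [MemoryDiagnoser], allocated bytes per operation, which is exactly the evidence needed to decide whether a change is worth the added complexity.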
I cited an example of a queue processor that I maintain which handles many millions of events. When it was first written in .NET Core 1.0 over two and a half years ago, we made some choices which were valid at the time. As its use has grown and we understand the requirements more fully, it’s now something we plan to review. Our current solution of scaling out instances to deal with load works well, but I’m sure we can save money if we reduce the need to scale as often. As a business, we need to balance the cost of change vs the cost of compensating with scaling.
For new queue processors, we can learn from this experience and factor in their scaling requirements up-front. Will the new service grow rapidly? Will it need to handle greater and greater volumes of events or data? Can we do things differently and set a better precedent for future services? I firmly believe we can and I’ve spent some time prototyping new ideas in those areas. I’ll continue to focus time, both at work and at home, finding the limits of these features, learning about when it is appropriate to use them and hopefully bringing readers of this blog along for the ride.
Whilst I’ve not covered anything specific in this post, I hope it serves to explain why I’m excited to learn more about these performance features. In the coming weeks and months I’ll be sharing posts describing my experience with each of the things I investigate.
Thanks for reading! If you’d like to read more about high-performance .NET and C# code, you can see my full blog post series here.