As I write this, less than a week after the end of .NET Conf 2023 and the release of .NET 8, .NET Aspire is in preview version (8.0.0-preview.1.23557.2). Therefore, it’s possible that some aspects may have changed by the time you read this article.
During the .NET Conf 2023, Microsoft announced .NET Aspire, a new .NET workload designed to ease the development of applications and microservices in a cloud-native context. Having personally experienced difficulties with developing and orchestrating multiple microservices in a local environment, I was pleasantly surprised by this announcement.
If you haven’t yet seen the deep-dive video by Glenn Condron and David Fowler about .NET Aspire, I invite you to immediately stop reading this article and watch it. It will better equip you to understand the rest of this discussion.
This isn’t just another high-level introductory article on .NET Aspire. I’m sure many others have already done that, and done it better than I could. What I want to delve into here concerns the inner workings of .NET Aspire, beyond its open-source code.
Being very familiar with the source code of the Tye project — the experiment that inspired Microsoft’s development teams to create .NET Aspire — one of my first reactions was to try to understand the internals of .NET Aspire. Specifically, I was interested in how it orchestrates the resources developers declare in their .NET Aspire host. How does .NET Aspire compile and launch other projects? How does it manage the lifecycle of arbitrary executables? How does it interact with the Docker engine to start containers? How does service discovery work?
In the next few minutes, you will discover that .NET Aspire, as it was presented, is just the tip of the iceberg. Indeed, .NET Aspire is built on top of an undocumented orchestrator, also developed by Microsoft: the Microsoft Developer Control Plane, otherwise known by the acronym DCP. In short, DCP is a sort of miniature Kubernetes, which can be controlled with tools such as kubectl or the official C# client for Kubernetes.
Continue reading “Exploring the Microsoft Developer Control Plane at the heart of the new .NET Aspire”
As a .NET solution grows, the time spent running Roslyn analyzers during compilation increases. I have witnessed a web solution where the analyzers took up a simply absurd share of the build: roughly 70% of the time was spent on the Roslyn analyzers and only 30% on the rest of the compilation, for a total build time of about 2 to 3 minutes depending on the machine’s specifications.
Such a compilation duration has a direct impact on three points:
- The productivity of developers, who can lose a significant amount of time each day waiting for builds.
- Metrics related to delivery and continuous deployment. A slower compilation means longer validation times for pull requests and release publications.
- The satisfaction and motivation of developers, who are frustrated by waiting with each code change.
The compilation duration of a .NET solution can be attributed to several factors, including the amount of code, the dependencies between projects, the coupling between projects, and so on. In this first article, we will focus solely on the impact of Roslyn analyzers on compilation time. After reading it, you will be able to measure their impact on your own .NET solutions and take action to reduce it, without sacrificing the quality and assurance they provide.
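As a starting point, MSBuild can report per-analyzer execution times, and analyzers can be turned off for local builds while staying enabled on CI. Here is a minimal sketch of a Directory.Build.props using standard MSBuild/Roslyn properties (the `CI` variable check is an assumption; adapt it to whatever your build server sets):

```xml
<!-- Directory.Build.props: applies to every project in the solution -->
<Project>
  <PropertyGroup>
    <!-- Ask the compiler to report how long each analyzer takes;
         inspect the results in a binary log (dotnet build -bl) -->
    <ReportAnalyzer>true</ReportAnalyzer>
  </PropertyGroup>
  <!-- Skip analyzers for local Debug builds only, keeping them
       enabled on CI where the quality gate matters -->
  <PropertyGroup Condition="'$(Configuration)' == 'Debug' and '$(CI)' != 'true'">
    <RunAnalyzersDuringBuild>false</RunAnalyzersDuringBuild>
  </PropertyGroup>
</Project>
```

With `ReportAnalyzer` enabled, the binary log shows each analyzer’s elapsed time, which is exactly the data you need to identify the expensive ones.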
Continue reading “Optimizing C# code analysis for quicker .NET compilation”
In this blog post, we’re going to explore different Docker Compose setups for you to run a MongoDB replica set locally. Replica sets are a must-have for anyone wanting to leverage MongoDB’s powerful features like transactions, change streams, or accessing the oplog. Locally running a MongoDB replica set not only grants you access to these functionalities but also serves as a disposable sandbox to experiment with replication mechanics and fault tolerance in general. Let’s not wait any longer, and let’s get started!
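To give you a taste of what such a setup looks like, here is a minimal single-member sketch (service and replica-set names are illustrative; the article itself walks through several richer variants). A single member is enough to unlock transactions and change streams locally:

```yaml
# docker-compose.yml — minimal single-node MongoDB replica set
services:
  mongo:
    image: mongo:7
    command: ["--replSet", "rs0", "--bind_ip_all"]
    ports:
      - "27017:27017"
    healthcheck:
      # Initiate the replica set on first run; once rs.status()
      # succeeds, the container is reported healthy
      test: |
        mongosh --quiet --eval "try { rs.status().ok } catch (e) { rs.initiate() }"
      interval: 5s
      start_period: 10s
```

After `docker compose up -d`, connecting with `mongodb://localhost:27017/?replicaSet=rs0&directConnection=true` should give you a fully functional replica set.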
Continue reading “The only local MongoDB replica set with Docker Compose guide you’ll ever need!”
I’ve been using ChatGPT Plus for many months now. Like many others, I use it for simple tasks like spell-checking and more complex ones like brainstorming. It’s been great for my personal and work projects. But I wonder whether the US$20 monthly fee is worth it for how often I use it. I only interact with ChatGPT a few times a week; if I used the pay-as-you-go OpenAI API instead, I might only pay around $3 per month.
That’s one big reason I wanted to set up my own ChatGPT frontend. Not only would I pay for what I use, but I could also let my family use GPT-4 and keep our data private. My wife could finally experience the power of GPT-4 without us having to share a single account or pay for multiple ones.
It’s been a while since I did any serious web frontend work. I thought about brushing up on my Angular knowledge to build my own ChatGPT frontend, and I was ready for many late nights working on it. But then, in early August, I found the microsoft/azurechat repository on GitHub, which Microsoft had created on July 11. Here’s a quote from its README:
Azure Chat Solution Accelerator powered by Azure Open AI Service is a solution accelerator that allows organisations to deploy a private chat tenant in their Azure Subscription, with a familiar user experience and the added capabilities of chatting over your data and files.
I tried it right away. In just four hours, I was able to set up my own private ChatGPT using Docker, Azure, and Cloudflare. The Azure Chat docs mostly cover connecting to the Azure OpenAI Service, which is currently in preview with limited access. Even so, I managed to connect it to the OpenAI API, which anyone can use. In this blog post, I’ll show you how to do the same.
Continue reading “Your own private ChatGPT in hours? Azure Chat makes it possible!”
Have you ever felt frustrated when updating a NuGet package, only to have your build fail because the new version introduced a breaking change? Or perhaps you’re the author of a NuGet package and you’re determined to avoid introducing breaking changes? Ever wondered how Microsoft has maintained backwards compatibility in ASP.NET Core for years? There’s of course a lot of design involved, but one tool they use is the Microsoft.CodeAnalysis.PublicApiAnalyzers NuGet package. As the name suggests, it’s a set of Roslyn analyzers that keeps track of your public API. It’s used by the .NET team, the Azure SDK team, various other Microsoft projects, and numerous open-source libraries such as Dapper and Polly.
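To make the mechanism concrete, here is a sketch of wiring the analyzer into a class library (the file names and the `RS0016`/`RS0017` diagnostics follow the package’s documented conventions; the version number is simply a recent release and may need updating):

```xml
<!-- In the class library's .csproj -->
<ItemGroup>
  <PackageReference Include="Microsoft.CodeAnalysis.PublicApiAnalyzers" Version="3.3.4">
    <PrivateAssets>all</PrivateAssets>
  </PackageReference>
  <!-- Two text files that declare the library's public API surface -->
  <AdditionalFiles Include="PublicAPI.Shipped.txt" />
  <AdditionalFiles Include="PublicAPI.Unshipped.txt" />
</ItemGroup>
```

Every public type and member must then be listed in one of the two text files, one signature per line (for example, a hypothetical `MyLibrary.Calculator.Add(int a, int b) -> int`). Adding a public member without declaring it raises RS0016, and deleting a declared member raises RS0017, so any change to the public surface becomes visible in code review.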
In this post, I will guide you on designing .NET class libraries to prevent breaking changes and demonstrate how to leverage the Microsoft.CodeAnalysis.PublicApiAnalyzers package to enforce these principles.
Continue reading “Preventing breaking changes in .NET class libraries”