Today, I ran into an unusual behavior of an ASP.NET Core application deployed to Azure App Service. It could not find its connection string, even though the string was present on the Connection Strings pane of the App Service in the Azure portal.

This is how the screen looked:

Screenshot of the connection string in Azure portal

The application was accessing the connection string using the standard ASP.NET Core configuration API, as shown in the following code snippet:

services.AddNpgsqlDataSource(configuration.GetConnectionString("aisistentka-db"));

Naturally, everything worked as expected locally, but when I deployed the app to Azure, it did not start and failed with the exception “Host can’t be null.”

When diagnosing this kind of issue, it is a good idea to start with the Kudu console (located at https://your-site-name.scm.azurewebsites.net). A quick check of the environment variables usually shows what is wrong.

Every connection string should be passed to the application as an environment variable. Normally, ASP.NET Core’s GetConnectionString method looks for the ConnectionStrings:connectionStringName configuration key (which usually lives in the appsettings.json file or in User Secrets). Since environment variable names cannot contain colons, the colons can be replaced with double underscores – the .NET configuration system treats both separators as equal.
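For illustration, both of the following forms resolve to the same configuration key (the connection string value is just a placeholder):

// appsettings.json:       { "ConnectionStrings": { "aisistentka-db": "Host=...;Database=..." } }
// environment variable:   ConnectionStrings__aisistentka-db=Host=...;Database=...
var connectionString = configuration.GetConnectionString("aisistentka-db");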

However, the type field in the Azure portal (you can see it in the picture at the beginning of the article) triggers a special behavior that controls how the environment variable names are composed. In the case of PostgreSQL, the resulting variable name is POSTGRESQLCONNSTR_aisistentkadb. As you can see, instead of the ConnectionStrings__ prefix, the prefix is POSTGRESQLCONNSTR_, and the dash from the connection string name is removed.

This was a bit unexpected for me. The GetConnectionString method cannot see the variable, but when I use the type “SQL Server”, the same approach works (though the dashes in connection string names still do not). How is this possible?

I looked into the source code of .NET and found out that there is special treatment of these Azure prefixes, but not all of them are included. Only the SQL Server, SQL Azure, MySQL, and Custom types are supported. All other options will produce an environment variable name that the application will not find.

    /// <summary>
    /// Provides configuration key-value pairs that are obtained from environment variables.
    /// </summary>
    public class EnvironmentVariablesConfigurationProvider : ConfigurationProvider
    {
        private const string MySqlServerPrefix = "MYSQLCONNSTR_";
        private const string SqlAzureServerPrefix = "SQLAZURECONNSTR_";
        private const string SqlServerPrefix = "SQLCONNSTR_";
        private const string CustomConnectionStringPrefix = "CUSTOMCONNSTR_";
...

The solution was to use the Custom type and remove the dash from the connection string name.
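With the Custom type, App Service exposes the value as CUSTOMCONNSTR_aisistentkadb, and the configuration provider maps it back to the ConnectionStrings section. A minimal sketch of the fixed code, assuming the connection string was renamed to the dash-less aisistentkadb:

// environment variable created by App Service: CUSTOMCONNSTR_aisistentkadb
// mapped by the configuration provider to:     ConnectionStrings:aisistentkadb
services.AddNpgsqlDataSource(configuration.GetConnectionString("aisistentkadb"));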

My new book, “Modernizing .NET Web Applications,” is finally out - available for purchase in both printed and digital versions. If you are interested in getting a copy, see the new book’s website.

A pile of copies of my new book Modernizing .NET Web Applications

It was a challenging journey for me, but I liked every moment of it, and it is definitely not my last book. I just need to bump into another topic and get enthusiastic about it (which seems not to be that hard).

I finished the manuscript on the last day of June, but some things had already changed before the book was published. For example, the System.Data.SqlClient package became deprecated. Additionally, all samples in the book use .NET 8, but .NET 9 is just around the corner and will bring many new and interesting features and performance improvements. Although they do not primarily target the area of modernization, they are highly relevant - they constitute one of the strongest arguments for moving away from legacy frameworks. Chapter 2 is dedicated to the benefits of the new versions of .NET, and it is one of the longest chapters of the book. Why else would you modernize, if not to use the new platform features?

To ensure you can stay updated, I’ve set up a newsletter where I’ll regularly post all the news concerning the modernization of .NET applications.

The book had an official celebration and announcement at Update Conference Prague, and I’d like to thank all the people around me who helped the book to happen.

Thursday was the first day of our Update Conference Prague - the largest event for .NET developers in the Czech Republic, with more than 500 attendees. On the same day, I also had an online session at .NET Conf, a global online .NET conference organized by Microsoft.

Originally, we thought about setting up a studio at Update from which I could stream my session, but because I was speaking at another conference the previous week, there was not enough time to test the setup. Luckily, the venue where Update takes place is only about 100 meters from the RIGANTI office.

Same as last year, my .NET Conf session was about modernizing legacy applications, but this time not from the technical point of view. I focused on arguments the tech leads or developers can use to explain the risks and benefits to managers and other non-technical stakeholders.

I talked about security risks that are often seen in old libraries and frameworks, then about performance improvements of the new .NET that can translate to significantly lower infrastructure costs. I also discussed hiring and developer happiness - an important criterion in the long-term perspective. Most developers want to use the new versions to keep their knowledge up-to-date, and there is an increasingly large group of developers who started with the new .NET Core and never touched the old stuff.

Then I focused on planning and estimation, and recommended preparing a “business plan” - evaluating not only the risks of DOING the migration, but also of NOT DOING it. You can run a few experiments to get an idea of the complexity and make rough estimates based on the results.

The last important thing I mentioned was communication. In many cases, being predictable is far more valuable than delivering on time, for many legacy applications are only internal tools and their deadlines are often only virtual. Some developers complain that spending a few hours making a presentation about the current progress is a waste of time, and use those hours to write code in the hope of saving the deadline. In reality, those few hours will probably not save the day, and the stakeholders will face an unexpected surprise. Clear and intensive communication can prevent this.

You can find more information on this topic in my new book Modernizing .NET Web Applications. It contains a detailed guide for migrating from ASP.NET Web Forms, ASP.NET MVC, WCF, SignalR, Entity Framework 6, and other .NET Framework technologies to their equivalents in .NET 8 and beyond, and much more.

I got several requests to publish the slide deck. You can find it below:


Recently, we have been fighting a weird issue that was happening to one of our ASP.NET Core web apps hosted in Azure App Service.

The app was using ASP.NET Core Identity with cookie authentication. Customers reported that the application randomly signed them out.

What confused us at first was the fact that the app was being used mostly on mobile devices as a PWA (Progressive Web App), and the iOS version was powered by Cordova. Since we had already been fighting several issues with Cordova, it was our main suspect – originally we thought that Cordova somehow deleted the cookies.

Cookies were all right

Of course, first we made sure that the application issues a correct cookie when you log in. We were using the default settings of ASP.NET Core Identity (14-day cookie validity with sliding expiration enabled).
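For reference, this is a minimal sketch of how these defaults would look if configured explicitly (the values shown are the framework defaults):

services.ConfigureApplicationCookie(options =>
{
    options.ExpireTimeSpan = TimeSpan.FromDays(14); // cookie validity – the default
    options.SlidingExpiration = true;               // renew the cookie on activity – the default
});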

We got most of the problem reports from iOS users - that’s why we started suspecting Cordova. We googled many weird behaviors of cookies, some of them connected with the new WKWebView component, but most of the articles and forum posts described problems with session cookies, which normally get lost when you close the application. Our cookies were persistent with a specified expiration of 14 days, so that wasn’t the issue.

It took us some time to figure out that the issue was not present only in the Cordova app, but everywhere – later we opened the app in a browser on a PC and it signed us out too.

What was strange: the cookie was still there and had not yet expired, and I verified with Fiddler that it was actually sent to the server when I refreshed the page.

But the tokens…

Then I got an idea – maybe the cookie is still valid, but the token inside has expired. Close enough - it wasn’t the real cause, but it helped me find the real problem.

I tried to decode the authentication cookie to see whether there is some token expiration date or anything that could invalidate it after some time.

Thanks to this StackOverflow thread, I created a simple console app, loaded my token into it, and got the error message “The key {xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx} was not found in the key ring.”

using System.Net;
using System.Text;
using Microsoft.AspNetCore.Authentication;
using Microsoft.AspNetCore.DataProtection;
using Microsoft.Extensions.DependencyInjection;

// set up a minimal DI container with the Data Protection services
var services = new ServiceCollection();
services.AddDataProtection();
var serviceProvider = services.BuildServiceProvider();

var cookie = WebUtility.UrlDecode("<the token from cookie>");

// create the same protector that the cookie authentication middleware uses
var dataProtectionProvider = serviceProvider.GetRequiredService<IDataProtectionProvider>();
var dataProtector = dataProtectionProvider.CreateProtector("Microsoft.AspNetCore.Authentication.Cookies.CookieAuthenticationMiddleware", "Identity.Application", "v2");

// low-level path: unprotect the raw payload (this is the call that throws
// "The key ... was not found in the key ring" when the key is missing)
var specialUtf8Encoding = new UTF8Encoding(encoderShouldEmitUTF8Identifier: false, throwOnInvalidBytes: true);
var protectedBytes = Base64UrlTextEncoder.Decode(cookie);
var plainBytes = dataProtector.Unprotect(protectedBytes);
var plainText = specialUtf8Encoding.GetString(plainBytes);

// higher-level path: decode the whole authentication ticket
var ticketDataFormat = new TicketDataFormat(dataProtector);
var ticket = ticketDataFormat.Unprotect(cookie);

OK, it was a good thing to get this exception – of course I didn’t have the key on my PC, so I wasn’t able to decode the cookie.

So I looked in the Kudu console of my Azure App Service (just go to https://yourappservicename.scm.azurewebsites.net). This console offers many tools and there is an option to browse the filesystem of the web app.

Where are your data protection keys?

The tokens in authentication cookies are encrypted and signed using keys provided by the ASP.NET Core Data Protection API. There are many options for where you can store your keys.

We had the default configuration, which stores the keys in the filesystem. When the app runs in Azure App Service, the keys are stored at the following path:

D:\home\ASP.NET\DataProtection-Keys

As the docs say, this folder is backed by network storage and is synchronized across all machines hosting the app. Even if the application runs in multiple instances, they all see this folder and can share the keys.
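If you ever need to point Data Protection to an explicit folder (for example, a shared one), there is the PersistKeysToFileSystem extension method; the path below is just the default location, shown for illustration:

// requires the Microsoft.AspNetCore.DataProtection package
services.AddDataProtection()
    .PersistKeysToFileSystem(new DirectoryInfo(@"D:\home\ASP.NET\DataProtection-Keys"));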

When I looked into that folder, no key with the GUID from the error message was present. That was the reason why the cookie was not accepted and the app redirected to the login page.

But why was the key not in that folder? It must have been there before, otherwise the app wouldn’t have given me that cookie in the first place.

Deployment Slots

By default, the web app runs in a different directory, so there is no chance that the keys directory gets overwritten during deployment.

But suddenly I saw the problem – we were using slot deployments. First, we would deploy the app to the staging slot, and after we made sure it was running, we would swap the slots. And each slot has its own filesystem. When I opened Kudu for my staging app, the correct key was there.

Quick & Dirty Solution

Because we wanted to resolve the issue as fast as possible, I took the key from the staging slot and uploaded it to production, and also copied the production key back to the staging slot. Both slots now had the same keys, so all authentication cookies issued by the app could be decoded properly.

When I refreshed the page in my browser, I was immediately signed in (the cookie was still there).

However, this is not a permanent solution – the keys expire every 90 days and are recreated, so you’d need to do the same thing again and again.

Correct Solution

The preferred solution is to store the keys in Blob Storage and protect them with Azure Key Vault. These services are designed for such purposes, and if you use the same storage and credentials for the staging and production slots, it will work reliably.
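A minimal sketch of that setup, assuming the Azure.Extensions.AspNetCore.DataProtection.Blobs and Azure.Extensions.AspNetCore.DataProtection.Keys packages; the URIs are placeholders you would replace with your own:

using Azure.Identity;

services.AddDataProtection()
    // persist the key ring to a blob that both slots can reach
    .PersistKeysToAzureBlobStorage(new Uri("https://<account>.blob.core.windows.net/<container>/keys.xml?<sas-token>"))
    // encrypt the keys at rest with a Key Vault key
    .ProtectKeysWithAzureKeyVault(new Uri("https://<vault>.vault.azure.net/keys/<key-name>"), new DefaultAzureCredential());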

By the way, a similar issue will probably occur in Docker containers (unless the keys are stored on some shared persistent volume). The filesystem in the container is ephemeral, and any changes may be lost when the container is restarted or recreated.

So, if you are using deployment slots, make sure that both have access to the same data protection keys.

A few weeks ago, I got an idea to implement an interesting feature in DotVVM – server-side viewmodel caching. It can have a huge impact on the performance of DotVVM applications, as it can reduce the data transferred on postbacks to almost nothing.

Intro – the basic principles of DotVVM

The idea behind DotVVM was quite simple – we want to use MVVM for building web applications, and we want to write in C#.

That’s why the viewmodel in DotVVM is a C# class and lives on the server, where the .NET runtime is. In order to have client-side interactivity in the web page, the viewmodel also needs to operate on the client side. Therefore, DotVVM serializes the viewmodel into JSON and includes it in the page HTML.

When the page is loaded, the client-side library of DotVVM parses the JSON and creates a Knockout JS instance of the viewmodel. Thanks to this, the DotVVM controls can use the Knockout data-bind attributes to offer their functionality. DotVVM just translates <dot:TextBox> to <input data-bind="…" /> to make it work.

When the user clicks a button, there is a method that needs to be called. However, this method lives on the server. DotVVM has to take the Knockout JS viewmodel, serialize it, and send it to the server, where it is deserialized so the method has all the data and state it needs to run. After the method completes, the viewmodel is serialized again and sent to the client, where it is applied to the Knockout JS instance of the viewmodel and all controls in the page are updated.

An entire viewmodel is sent to the server

Changes made on the server are sent to the client

The entire process involves transferring the viewmodel from the server to the client and back. The response to the postback is efficient in general as it doesn’t need to transfer the entire viewmodel. The server compares the current viewmodel with the version received from the client, and sends only the changes.
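To make the roundtrip concrete, here is a hypothetical viewmodel sketch – the RowsViewModel and RowDto names are made up for illustration:

using System.Collections.Generic;
using DotVVM.Framework.ViewModel;

public class RowsViewModel : DotvvmViewModelBase
{
    public List<RowDto> Rows { get; set; } = new List<RowDto>();

    // invoked from markup via a command binding such as {command: Delete(item)};
    // on click, the whole viewmodel is serialized and sent to the server,
    // this method runs, and the resulting changes are sent back to the client
    public void Delete(RowDto row) => Rows.Remove(row);
}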

But because of the stateless nature of DotVVM, the client has to send the entire viewmodel to the server. Or rather had to, to be precise, because this now changes with server-side viewmodel caching.

DotVVM offers several mechanisms to prevent the entire viewmodel from being transferred:

  • The Bind attribute can specify the direction in which the data will be transferred.
  • Static commands allow calling a method, passing it arbitrary arguments, and updating the viewmodel with the result returned from the call.
  • REST API bindings can load additional data from a REST API; this data is not considered part of the viewmodel and therefore is not transferred on postbacks.

However, each of these methods has some limitations and is more difficult to use. The comfort of a command binding which triggers a full postback is very tempting.
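For example, the Bind attribute from the list above can exclude read-only data from the client-to-server direction – a minimal sketch with a made-up Rows property:

using System.Collections.Generic;
using DotVVM.Framework.ViewModel;

public class PageViewModel : DotvvmViewModelBase
{
    // sent to the client when the page renders, but never transferred
    // back to the server on postback
    [Bind(Direction.ServerToClient)]
    public List<RowDto> Rows { get; set; }
}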

What about storing the viewmodel on the server?

The reason for sending the entire viewmodel to the server is simple – the viewmodel is not stored anywhere. When the server completes the HTTP request and sends the viewmodel to the client, it forgets about it immediately.

The server-side caching feature will change this behavior – the viewmodel will be kept on the server (in a JSON-serialized form, so the live object with its dependencies on various application services can be garbage-collected) and the client will send only the diff on postback.

Only the changes are sent to the server, the rest is retrieved from viewmodel cache

Storing the viewmodel on the server introduces several challenges:

  • It will require more server resources. However, viewmodels are generally not large (the average size is 1-15 kB, depending on the complexity of the page), and they can be compressed thanks to their text-based nature.
  • It can make DoS attacks easier if an attacker finds a way to exhaust server resources.
  • When the application is hosted on a web farm, the cache must be distributed.
  • What about cache synchronization in case of multiple postbacks?
  • Is the cache worth the trouble at all?

During our use of DotVVM on customer projects, we have made several observations:

  • When DotVVM is used on public-facing websites, the viewmodels are tiny and mostly static. It is very common that all HTTP GET requests have the same viewmodel and that it changes only when the user interacts with the page (e.g. enters a value in a textbox).
  • When DotVVM is used in line-of-business applications with many GridView controls, most of the viewmodel is occupied by the contents of the GridViews. If the user doesn’t use the inline edit functionality, the GridView is read-only and there is not much value in transferring its contents back to the server – the server can retrieve the most current data from the database.

It is obvious that server-side caching will not help much in the first case; however, it will help a lot in the second one.

Imagine a page with a GridView control with many rows. Each row will contain a button that deletes that particular row.

The viewmodel will contain a collection of objects representing each row. The data are read-only and thus cannot change. When the delete button is clicked, the viewmodel doesn’t need to be transferred to the server at all – we have saved almost 100%.

There is still some metadata that needs to be transferred, like the cached viewmodel ID and the CSRF token; in addition, the encrypted and protected values are excluded from the caching mechanism and must be sent as well. But this data is relatively small in comparison to the GridView data.

Even if the user decides to use the inline editing functionality and updates a particular row, only the changes made in the viewmodel will be transferred. If there were 50 rows and one was changed, we can save about 98% of the data.

The viewmodels have 1 to 15 kB on average, so it’s not such a big deal, but still, when you multiply it by the number of concurrent users, or consider users on cellular networks, the difference can be quite significant.

Deduplication of cache entries

The observation about public-facing websites mentioned in the previous section brings another challenge – imagine there are thousands of users visiting the website. Most of them will leave immediately without making any postback, or they will just browse a few pages without any interaction that would trigger a postback.

As was mentioned before, the viewmodels in this case are mostly static. They contain a few values used by the page, but these values are the same every time the page is loaded.

Imagine a page with a contact form. The viewmodel will contain properties for the subject, the message contents, and the reply e-mail address, but they will be empty unless the user changes them.

That’s why we’ve decided to use a hash of the viewmodel as the cache key. Such pages will not exhaust the cache with thousands of identical entries, because they will all get the same key. This allows having just one cache entry per page, shared among all its users (unless they change something and make a postback).

The encrypted and protected values are excluded from the caching mechanism, so this should not bring any security issues. When a user changes the viewmodel, it will get a different hash and will be stored in a separate cache entry.
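A minimal sketch of the deduplication idea (an illustration only, not DotVVM’s actual implementation):

using System;
using System.Security.Cryptography;

static string GetViewModelCacheKey(byte[] serializedViewModel)
{
    // identical serialized viewmodels hash to the same key, so thousands of
    // users loading the same static page share a single cache entry
    using (var sha256 = SHA256.Create())
    {
        return Convert.ToBase64String(sha256.ComputeHash(serializedViewModel));
    }
}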

Can the cache entry expire?

Of course it can. Most of us have probably had issues with expired sessions. Thankfully, this will not be the case in DotVVM. We always have the most current viewmodel on the client, so when a postback is sent and the server cannot find the viewmodel in the cache, it will respond that there is a cache miss. In that case, DotVVM will automatically make an additional (full) postback, sending the entire viewmodel to the server. As long as the authentication cookie is still valid, the postback will be performed – it will just be a little slower than usual.

The problem is now reduced to fine-tuning the cache settings – choosing a good compromise between the lifetime of the cached viewmodels and the cache size (and choosing a proper storage – it may not be efficient to keep the data in memory).

It will take a lot of measurements, and probably some tools will need to be created to help make informed decisions on how to set up the cache correctly.

Can I try it now?

Not yet, but very soon. I have just added a new API for turning on experimental features. In the next preview release of DotVVM, there will be an option to turn this feature on globally, or only for specific pages.
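A sketch of how enabling it might look – the member names are tentative and may differ in the released API:

// in DotvvmStartup – enable the feature for all routes...
config.ExperimentalFeatures.ServerSideViewModelCache.EnableForAllRoutes();
// ...or only for specific ones
config.ExperimentalFeatures.ServerSideViewModelCache.EnableForRoutes("Default", "OrdersGrid");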