My new book, “Modernizing .NET Web Applications,” is finally out - available for purchase in both printed and digital versions. If you are interested in getting a copy, see the new book’s website.

A pile of copies of my new book Modernizing .NET Web Applications

It was a challenging journey for me, but I liked every moment of it, and it is definitely not my last book. I just need to bump into another topic and get enthusiastic about it (which seems not to be that hard).

I finished the manuscript on the last day of June, but some things had already changed before the book was published. For example, the System.Data.SqlClient package was deprecated. Additionally, all samples in the book use .NET 8, but .NET 9 is just around the corner and will bring many new features and performance improvements. Although they do not primarily target modernization, they are highly relevant - they constitute one of the strongest arguments for moving away from legacy frameworks. Chapter 2 is dedicated to the benefits of the new versions of .NET, and it is one of the longest chapters in the book. Why else would you modernize if not to use the new platform features?

To ensure you can stay updated, I’ve set up a newsletter where I’ll regularly post all the news concerning the modernization of .NET applications.

The book had an official celebration and announcement at Update Conference Prague, and I’d like to thank all the people around me who helped the book to happen.

Thursday was the first day of our Update Conference Prague - the largest event for .NET developers in the Czech Republic, with more than 500 attendees. On the same day, I also had an online session at .NET Conf, a global online .NET conference organized by Microsoft.

Originally, we were thinking of setting up a studio at Update from which I could stream my session, but because I was speaking at another conference the previous week, there was not enough time to test the setup. Luckily, the venue where Update takes place is only about 100 meters from the RIGANTI office.

Same as last year, my .NET Conf session was about modernizing legacy applications, but this time not from the technical point of view. I focused on arguments the tech leads or developers can use to explain the risks and benefits to managers and other non-technical stakeholders.

I talked about security risks that are often found in old libraries and frameworks, then about performance improvements in the new .NET that can translate to significantly lower infrastructure costs. I also discussed hiring and developer happiness - an important criterion in the long term. Most developers want to use the new versions to keep their knowledge up to date, and there is a growing group of developers who started with the new .NET Core and never touched the old stuff.

Then I focused on planning and estimation, and recommended preparing a “business plan” - evaluating not only the risks of DOING the migration, but also of NOT DOING it. You can run a few experiments to get an idea of the complexity and make rough estimates based on the results.

The last important thing I mentioned was communication. In many cases, being predictable is far more valuable than delivering on time, for many legacy applications are only internal tools and the deadlines are often only virtual. Some developers complain that spending a few hours on a presentation about the current progress is a waste of time, and use those hours to write code in the hope of saving the deadline. In reality, those few hours will probably not save the day, and the stakeholders will face an unexpected surprise. Clear and intensive communication can prevent this.

You can find more information on this topic in my new book Modernizing .NET Web Applications. It contains a detailed guide for migrating from ASP.NET Web Forms, ASP.NET MVC, WCF, SignalR, Entity Framework 6, and other .NET Framework technologies to their equivalents in .NET 8 and beyond, and much more.

I got several requests to publish the slide deck. You can find it below:


Recently, we have been fighting a weird issue that was happening in one of our ASP.NET Core web apps hosted in Azure App Service.

The app was using ASP.NET Core Identity with cookie authentication. Customers reported that the application randomly signed them out.

What confused us at first was the fact that the app was used mostly on mobile devices as a PWA (Progressive Web App), and the iOS version was powered by Cordova. Since we had been fighting several issues with Cordova, it was our main suspect – originally we thought that Cordova somehow deleted the cookies.

Cookies were all right

Of course, we first made sure that the application issues a correct cookie when you log in. We had been using the default settings of ASP.NET Core Identity (14-day cookie validity with sliding expiration enabled).
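For reference, the configuration we relied on corresponds roughly to this minimal sketch (the values shown are the documented defaults, not custom settings):

// in Startup.ConfigureServices, after registering ASP.NET Core Identity
services.ConfigureApplicationCookie(options =>
{
    options.ExpireTimeSpan = TimeSpan.FromDays(14); // the cookie is valid for 14 days
    options.SlidingExpiration = true;               // the validity window slides on user activity
});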

We got most of the problem reports from iOS users - that’s why we started suspecting Cordova. We googled many weird cookie behaviors, some of them connected with the new WKWebView component, but most of the articles and forum posts were about session cookies, which normally get lost when you close the application. Our cookies were persistent with a specified expiration of 14 days, so that wasn’t the issue.

It took us some time to figure out that the issue was not present only in the Cordova app, but everywhere – later we opened the app in a browser on a PC and it signed us out too.

What was strange: the cookie was still there and had not expired yet, and I verified with Fiddler that it was actually sent to the server when I refreshed the page.

But the tokens…

Then I got an idea – maybe the cookie is still valid, but the token inside has expired. Close enough - it wasn’t the real cause, but it helped me find the real problem.

I tried to decode the authentication cookie to see whether there is some token expiration date or anything that could invalidate it after some time.

Thanks to this StackOverflow thread, I created a simple console app, loaded my token into it, and got the error message “The key {xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx} was not found in the key ring.”

// NuGet packages: Microsoft.AspNetCore.DataProtection, Microsoft.AspNetCore.Authentication.Cookies
using System.Net;
using System.Text;
using Microsoft.AspNetCore.Authentication;
using Microsoft.AspNetCore.DataProtection;
using Microsoft.Extensions.DependencyInjection;

// set up a minimal service provider with Data Protection registered
var services = new ServiceCollection();
services.AddDataProtection();
var serviceProvider = services.BuildServiceProvider();

// the value of the authentication cookie, URL-decoded
var cookie = WebUtility.UrlDecode("<the token from cookie>");

var dataProtectionProvider = serviceProvider.GetRequiredService<IDataProtectionProvider>();
// the purpose strings must match the ones used by the cookie authentication handler
var dataProtector = dataProtectionProvider.CreateProtector("Microsoft.AspNetCore.Authentication.Cookies.CookieAuthenticationMiddleware", "Identity.Application", "v2");

// low-level approach: decode the base64url payload and unprotect it manually
var specialUtf8Encoding = new UTF8Encoding(encoderShouldEmitUTF8Identifier: false, throwOnInvalidBytes: true);
var protectedBytes = Base64UrlTextEncoder.Decode(cookie);
var plainBytes = dataProtector.Unprotect(protectedBytes);
var plainText = specialUtf8Encoding.GetString(plainBytes);

// higher-level approach: let TicketDataFormat decode the whole AuthenticationTicket
var ticketDataFormat = new TicketDataFormat(dataProtector);
var ticket = ticketDataFormat.Unprotect(cookie);

Actually, getting this exception was a good thing – of course I didn’t have the key on my PC, so I wasn’t able to decode the cookie.

So I looked in the Kudu console of my Azure App Service (just go to https://yourappservicename.scm.azurewebsites.net). This console offers many tools and there is an option to browse the filesystem of the web app.

Where are your data protection keys?

The tokens in authentication cookies are encrypted and signed using keys managed by the ASP.NET Core Data Protection API. There are many options for where to store these keys.

We had the default configuration, which stores the keys in the filesystem. When the app runs in Azure App Service, the keys are stored at the following path:

D:\home\ASP.NET\DataProtection-Keys

As the docs say, this folder is backed by network storage and is synchronized across all machines hosting the app. Even if the application runs in multiple instances, they all see this folder and can share the keys.

When I looked in that folder, no key with the GUID from the error message was present. That’s the reason why the cookie was not accepted and the app redirected to the login page.

But why was the key not in that folder? It must have been there before, otherwise the app wouldn’t have given me that cookie in the first place.

Deployment Slots

By default, the web app itself runs from another directory, so there is no chance that the keys directory gets overwritten during deployment.

But suddenly I saw the problem – we were using slot deployments. First, we would deploy the app to the staging slot, and after making sure it was running, we would swap the slots. And each slot has its own filesystem. When I opened Kudu for my staging app, the correct key was there.

Quick & Dirty Solution

Because we wanted to resolve the issue as fast as possible, I took the key from the staging slot and uploaded it to production, and also copied the production key back to the staging slot. Both slots now had the same keys, so all authentication cookies issued by the app could be decoded properly.

When I refreshed the page in my browser, I was immediately signed in (the cookie was still there).

However, this is not a permanent solution – the keys expire every 90 days and are recreated, so you’d need to do the same thing again and again.

Correct Solution

The preferred solution is to store the keys in Blob Storage and protect them with Azure Key Vault. These services are designed for such purposes, and if you use the same storage and credentials for the staging and production slots, it will work reliably.
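A minimal sketch of such a configuration, using the Azure.Extensions.AspNetCore.DataProtection.Blobs and Azure.Extensions.AspNetCore.DataProtection.Keys packages (the storage and Key Vault URIs are placeholders):

using System;
using Azure.Identity;
using Microsoft.AspNetCore.DataProtection;
using Microsoft.Extensions.DependencyInjection;

// in Startup.ConfigureServices – production and staging slots must point
// to the same blob and the same Key Vault key
services.AddDataProtection()
    .PersistKeysToAzureBlobStorage(
        new Uri("https://<storageaccount>.blob.core.windows.net/dataprotection/keys.xml"),
        new DefaultAzureCredential())
    .ProtectKeysWithAzureKeyVault(
        new Uri("https://<keyvault>.vault.azure.net/keys/dataprotection"),
        new DefaultAzureCredential());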

By the way, a similar issue will probably occur in Docker containers (unless the keys are stored on a shared persistent volume). The filesystem in a container is ephemeral, and any changes may be lost when the container is restarted or recreated.

So, if you are using deployment slots, make sure that both have access to the same data protection keys.

A few weeks ago, I got an idea to implement an interesting feature in DotVVM – server-side viewmodel caching. It can have a huge impact on the performance of DotVVM applications, as it can reduce the data transferred on postback to almost nothing.

Intro – the basic principles of DotVVM

The idea behind DotVVM was quite simple – we want to use MVVM for building web applications, and we want to write in C#.

That’s why the viewmodel in DotVVM is a C# class and lives on the server, where the .NET runtime is. In order to have client-side interactivity in the web page, the viewmodel needs to operate on the client side too. Therefore, DotVVM serializes the viewmodel into JSON and includes it in the page HTML.

When the page is loaded, the client-side library of DotVVM parses the JSON and creates a Knockout JS instance of the viewmodel. Thanks to this, the DotVVM controls can use the Knockout data-bind attributes to offer their functionality. DotVVM just translates <dot:TextBox> to <input data-bind="…" /> to make it work.

When the user clicks a button, there is a method that needs to be called. However, this method lives on the server. DotVVM has to take the Knockout JS viewmodel, serialize it and send it to the server, where it is deserialized so the method has all the data and state it needs to run. After the method completes, the viewmodel is serialized again and sent to the client, where it is applied to the Knockout JS instance of the viewmodel and all controls in the page are updated.

An entire viewmodel is sent to the server

Changes made on the server are sent to the client
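To make this more concrete, here is a minimal sketch of what such a viewmodel can look like (a hypothetical example, not taken from a real project):

using DotVVM.Framework.ViewModel;

public class ContactViewModel : DotvvmViewModelBase
{
    // serialized to JSON and materialized as Knockout observables on the client
    public string Name { get; set; }
    public string Message { get; set; }

    // called on the server during a postback, e.g. <dot:Button Click="{command: Submit()}" />
    public void Submit()
    {
        // all viewmodel state (Name, Message) has been deserialized and is available here
    }
}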

The entire process involves transferring the viewmodel from the server to the client and back. The response to the postback is efficient in general as it doesn’t need to transfer the entire viewmodel. The server compares the current viewmodel with the version received from the client, and sends only the changes.

But because of the stateless nature of DotVVM, the client has to send the entire viewmodel to the server. Or had, to be precise, because this now changes with the Server-side viewmodel caching.

DotVVM offers several mechanisms to prevent the entire viewmodel from being transferred:

  • The Bind attribute can specify the direction in which the data will be transferred.
  • Static commands allow calling a method, passing it any arguments, and updating the viewmodel with the result of the call.
  • REST API bindings can load additional data from a REST API; these data are not considered part of the viewmodel and therefore are not transferred on postbacks.

However, each of these methods has limitations and is more difficult to use. The comfort of a command binding that triggers a full postback is very tempting.

What about storing the viewmodel on the server?

The reason for sending the entire viewmodel to the server is simple – the viewmodel is not stored anywhere. When the server completes the HTTP request and sends the viewmodel to the client, it immediately forgets about it.

The server-side caching feature will change this behavior – the viewmodel will be kept on the server (in a JSON-serialized form, so the live object with its dependencies on various application services can be garbage-collected) and the client will send only the diff on postback.

Only the changes are sent to the server, the rest is retrieved from viewmodel cache

Storing the viewmodel on the server introduces several challenges:

  • It will require more server resources. The viewmodels are not large in general (the average size is 1-15 kB depending on the complexity of the page), and they can be compressed thanks to their text-based nature.
  • It can make DoS attacks easier if an attacker finds a way to exhaust server resources.
  • When the application is hosted on a web farm, the cache must be distributed.
  • What about cache synchronization in case of multiple postbacks?
  • Is the cache worth the troubles at all?

During our use of DotVVM on customer projects, we have made several observations:

  • When DotVVM is used on public-facing websites, the viewmodels are tiny and mostly static. Very frequently, all HTTP GET requests have the same viewmodel, and it changes only when the user interacts with the page (e.g. enters a value in a textbox).
  • When DotVVM is used in line-of-business applications with many GridView controls, most of the viewmodel is occupied by the contents of the GridViews. If the user doesn’t use the inline edit functionality, the GridView is read-only and there is not much value in transferring its contents back to the server – the server can retrieve the most current data from the database.

It is obvious that server-side caching will not help much in the first case; however, it will help a lot in the second one.

Imagine a page with a GridView control with many rows. Each row will contain a button that deletes that particular row.

The viewmodel will contain a collection of objects representing the rows. The data are read-only and thus cannot change. When the delete button is clicked, the viewmodel doesn’t need to be transferred to the server at all – we have saved almost 100%.
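A minimal sketch of such a viewmodel (hypothetical names, for illustration only):

using System.Collections.Generic;
using DotVVM.Framework.ViewModel;

public class OrderListViewModel : DotvvmViewModelBase
{
    // rendered by a GridView; read-only data, so with server-side caching
    // it never needs to travel back to the server
    public List<OrderRow> Orders { get; set; } = new List<OrderRow>();

    // a postback bound as {command: Delete(order.Id)} sends only the diff and metadata
    public void Delete(int id)
    {
        // delete the row in the database and reload Orders with the current data
    }
}

public class OrderRow
{
    public int Id { get; set; }
    public string Customer { get; set; }
}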

There are still some metadata that need to be transferred, like the cached viewmodel ID and the CSRF token; the encrypted and protected values are also excluded from the caching mechanism and thus transferred. But these data are relatively small in comparison to the GridView data.

Even if the user decides to use the inline editing functionality and updates a particular row, only the changes made to the viewmodel will be transferred. If there were 50 rows and one was changed, we save about 98% of the data.

The viewmodels are 1 to 15 kB on average, so it’s not such a big deal, but still – when you multiply it by the number of concurrent users, or consider users on cellular networks, the difference can be quite significant.

Deduplication of cache entries

The observation about public-facing websites mentioned in the previous section brings another challenge – imagine there are thousands of users visiting the website. Most of them will leave immediately without making any postback, or they will just browse a few pages without any other interaction that would trigger a postback.

As mentioned before, the viewmodels in this case can be static. They will contain a few values that are used by the page, but these values will be the same every time the page is loaded.

Imagine a page with a contact form. The viewmodel will contain properties for the subject, the message contents and the reply e-mail address, but they will be empty unless the user changes them.

That’s why we’ve decided to use a hash of the viewmodel as the cache key. These pages will not exhaust the cache with thousands of identical entries because they will all get the same key. This allows having just one cache entry per page, shared between all its users (unless they change something and make a postback).

The encrypted and protected values are excluded from the caching mechanism, so this should not bring any security issues. When the user changes the viewmodel, it will get a different hash and will be stored in a separate cache entry.
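The principle can be sketched like this (a simplified illustration, not the actual DotVVM implementation):

using System;
using System.Security.Cryptography;
using System.Text;

public static class ViewModelCacheKey
{
    // identical serialized viewmodels produce identical keys,
    // so equal cache entries are naturally deduplicated
    public static string Compute(string serializedViewModel)
    {
        using (var sha256 = SHA256.Create())
        {
            var hash = sha256.ComputeHash(Encoding.UTF8.GetBytes(serializedViewModel));
            return Convert.ToBase64String(hash);
        }
    }
}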

Can the cache entry expire?

Of course it can. Most of us have probably had issues with expired sessions. Thankfully, this will not be the case with DotVVM. We always have the most current viewmodel on the client, so when a postback is sent and the server cannot find the viewmodel in the cache, it responds that there was a cache miss. In this case, DotVVM automatically makes an additional (full) postback, sending the entire viewmodel to the server. As long as the authentication cookie is still valid, the postback will be performed – it will just be a little slower than usual.

The problem is now reduced to fine-tuning the cache settings – choosing a good compromise between the lifetime of the cached viewmodels and the cache size (and a proper storage – it may not be efficient to store the data in-memory).

It will take a lot of measurements, and we will probably need to create some tools to help make informed decisions on how to set up the cache correctly.

Can I try it now?

Not yet, but very soon. I have just added a new API for turning on experimental features. In the next preview release of DotVVM, there will be an option to turn this feature on globally, or only for specific pages.
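The API may still change before the release, but based on the new experimental features API, enabling it might look something like this (the property and method names here are an assumption):

using DotVVM.Framework.Configuration;

public class DotvvmStartup : IDotvvmStartup
{
    public void Configure(DotvvmConfiguration config, string applicationPath)
    {
        // turn the feature on for all pages...
        config.ExperimentalFeatures.ServerSideViewModelCache.EnableForAllRoutes();

        // ...or only for specific routes
        // config.ExperimentalFeatures.ServerSideViewModelCache.EnableForRoutes("OrderList");
    }
}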

Recently, I have written a series of articles on modernizing ASP.NET Web Forms apps. Now this topic has become even more important thanks to the recent announcement of .NET 5: it was made clear that ASP.NET Web Forms will not be ported to .NET Core.

TLDR: DotVVM can run side by side with ASP.NET Web Forms, and it also supports .NET Core. You can install it in your Web Forms project and start rewriting individual pages from Web Forms to DotVVM (similar syntax and controls, with the MVVM approach) while still being able to add new features or deploy the app to the server. After all the Web Forms code is gone, you can just switch the project to .NET Core. See the sample repo.

To rewrite or continuously upgrade

There are still plenty of ASP.NET Web Forms applications out in the world, and their authors now face a difficult decision:

  • Throw the old application away and rewrite it from scratch using a modern stack
  • Try to continuously modernize the app and rewrite all the pages on the fly

The first option – a total rewrite – is very time-consuming. If the original application was developed for more than 10 years, which is not uncommon, I can hardly imagine it being rewritten in less than half of that time. In addition, when the application needs to support the company’s daily tasks and workflows while responding to rapidly changing business needs, it is impossible to stop adding new features for months or even years because of the rewrite.

Of course, the company can build a new team to develop the new version while the old team keeps maintaining and extending the old app, but that means double the effort and a vast amount of time to transfer the domain knowledge from the old team to the new one. Also, many things will need to be done twice, and it will probably take years until the new version is ready for production.

And finally, management never likes to hear about rewriting software from scratch. I have seen many situations where the project leads had to fight very hard to justify such a decision.

The second option – continuous modernization – looks a little easier. Imagine you have a Web Forms application with hundreds of ASPX pages. If you can rewrite one page per day using another technology, and integrate the new pages with the old ones so the user won’t notice they are made with different stacks, after several months you can get rid of all the ASPX pages and end up with a more modern solution. It may not be perfect, as there will still be some legacy code, but it is much better than nothing, and if you are lucky and don’t use WCF or Workflow Foundation (which are also not supported on .NET Core), you will be able to move the project to .NET Core.

Two projects? Possible, but maybe more difficult than it has to be.

But how to do it? Let’s suppose we have an old app that needs to be maintained.

Shall we create a new ASP.NET Core project that would run side by side, maybe on the same domain, and make links from the old to new pages and vice versa?

It can work if the same CSS styles are used. The users should not be able to tell that they are actually using two web apps.

However, there may be issues with sign-on, as the new app may use different authentication cookies than the old one – the authentication will need to be integrated somehow. Also, if session state is used (which is not a good idea in general, but is quite frequent), it will not be shared between the two applications.

Moreover, this will require some configuration changes on the server, and the deployment model will need to be changed as you will now deploy two applications instead of one.

If the application caches some data in memory, you may also run into various concurrency issues as the caches will need to be invalidated. There will also be some duplication if the business layer is not properly separated from the UI.

What is more, if you decide to use Angular, React or another JavaScript framework, there is also a large amount of knowledge required to start working with these technologies. The business logic and data will have to be exposed through a REST API (or GraphQL), which may be an additional effort at the beginning.

DotVVM can make this simpler

What if there was a framework that can run side by side with ASP.NET Web Forms in one application, but also works with the newest versions of ASP.NET Core?

It would make so many things easier. You would have just one project to deploy. There would be no changes in the deployment model – it would still be an ASP.NET application. You wouldn’t need to take care of sharing the user identity between two apps, because there wouldn’t be two apps.

With DotVVM, this is quite easy. It was one of the initial thoughts that led us to start the project. If you haven’t heard of it – it is an open-source MVVM framework supporting both ASP.NET and ASP.NET Core. It has nice Visual Studio integration and recently joined the .NET Foundation.

How does the migration work?

You can install the DotVVM NuGet package in the Web Forms application, and it will run side by side with the ASPX pages that are in the project.
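In a System.Web project, DotVVM plugs in through OWIN, so the setup is a sketch like this (assuming the DotVVM.Owin package; the class names are placeholders):

using System.Web.Hosting;
using Owin;

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // registers the DotVVM middleware; the ASPX pages keep being served by System.Web
        app.UseDotVVM<DotvvmStartup>(HostingEnvironment.ApplicationPhysicalPath);
    }
}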

From the deployment perspective, there are no changes – it is still the same ASP.NET application that gets deployed to the server as usual.

You can start by copying the Web Forms master page and converting it to the DotVVM syntax. It is different, but not by much – most of the controls have the same names, except that you are using the MVVM approach. Use the same CSS so the users won’t notice the change.

Then, you can start rewriting the pages one by one from Web Forms to DotVVM. DotVVM contains similar controls, like GridView, Repeater, FileUpload and more. The most difficult part will be extracting the business logic from the ASPX code-behind to the DotVVM viewmodel, but it is still C#.
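For illustration, moving a typical click handler could look like this (a hypothetical example; OrderService stands for whatever business-layer class the page already uses):

using DotVVM.Framework.ViewModel;

// Web Forms code-behind (before):
//   protected void SaveButton_Click(object sender, EventArgs e)
//   {
//       new OrderService().Save(NameTextBox.Text);
//   }

// DotVVM viewmodel (after) – the same logic as a plain, testable method
public class OrderDetailViewModel : DotvvmViewModelBase
{
    public string Name { get; set; }   // bound via <dot:TextBox Text="{value: Name}" />

    public void Save()                 // bound via <dot:Button Click="{command: Save()}" />
    {
        new OrderService().Save(Name);
    }
}

// stand-in for the existing business-layer class (hypothetical)
public class OrderService
{
    public void Save(string name) { /* persist the change */ }
}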

If your business layer was properly separated, this should be trivial. If not, take it as an opportunity to refactor and get cleaner code. Thanks to the MVVM approach, your viewmodels will be testable, and the overall quality of the application will greatly improve.

DotVVM pages will share the environment with the ASP.NET ones, including the current user identity. You won’t need to expose your business logic through a REST API – you can keep the same code interacting with the database.

At each point of the process, the application works, can be extended with new features, and can be deployed. The team is not locked into the migration and can do other things simultaneously.

After a few months, when all the ASPX pages are rewritten in DotVVM, you will be able to create a new ASP.NET Core project, move all DotVVM pages and viewmodels into it and use them with .NET Core. The syntax of DotVVM is the same on both platforms.

If you have been using Forms Authentication in Web Forms, you will need to switch to ASP.NET Core cookie authentication, but that should be an easy-enough change.
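On the ASP.NET Core side, the replacement for Forms Authentication is the cookie authentication handler – a minimal sketch (the paths and lifetimes are placeholders):

using System;
using Microsoft.AspNetCore.Authentication.Cookies;
using Microsoft.Extensions.DependencyInjection;

// in Startup.ConfigureServices – takes over the role of <authentication mode="Forms"> in web.config
services.AddAuthentication(CookieAuthenticationDefaults.AuthenticationScheme)
    .AddCookie(options =>
    {
        options.LoginPath = "/login";                   // where unauthenticated users are redirected
        options.ExpireTimeSpan = TimeSpan.FromDays(14); // cookie lifetime
        options.SlidingExpiration = true;
    });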

Are there any samples?

Yes, I have created a GitHub repo which describes the process in detail. There are five branches, each showing one of the steps.

In the first one, there is a simple ASP.NET Web Forms application. In the last one, there is the same app rewritten in DotVVM and running on .NET Core.

We have used this approach to migrate several Web Forms applications. If the business layer is separated properly, rewriting one page takes about an hour on average. If the business logic is tangled up with the UI, it can take significantly more time, but it can be a way to improve the application’s testability, and I think it is worth it – even poorly written apps can be saved this way.

What if I need help?

We’ll be happy to help you. You can also contact the DotVVM team on our Gitter chat. Check out the DotVVM documentation and the cheat-sheet of differences between Web Forms and DotVVM.