
.NET Framework has always been very popular and widely used in enterprise environments. There are tons of business-oriented applications written in .NET.

From its first version, .NET enabled developers to build both desktop and web applications, and especially at the beginning, it was really easy to use. Amateur developers, students or even kids were able to create a simple Win Forms app, and Microsoft tried to make web development as simple as Win Forms or VB6. That’s how ASP.NET Web Forms was born.

Despite the fact that Microsoft moved its attention to ASP.NET MVC years ago, ASP.NET Web Forms is still very popular and there are many web apps using it. They need to be maintained and extended with new features.

Microsoft has been working on .NET Core lately, but there are no plans to bring Web Forms to this new platform. Of course, Web Forms will be supported as long as .NET Framework is supported, so there is no need to panic, but still…

Everyone knows that it’s time to move on. But where to?


Rewrite? No way!

Imagine you are a Web Forms developer taking care of a 15-year-old web app. The app probably doesn’t work perfectly in modern browsers, poses a security risk because it uses outdated libraries, and the users complain every other day about the UX because the app needs to reload the entire page on every click. And “Hey, there was no JavaScript in 2002!” is not a really good explanation.

You have suggested rewriting the app at every meeting for the last five years, but nobody listens, or the answer is always the same - there is no time or money for that. You know there is huge technical debt, but the application must survive the next 5 years, and there is a backlog full of new features and screens to be delivered – and they are needed yesterday.


Modernize!

If the application is 10 years old, there is no reasonable chance it could be rewritten in less than half of that time. And of course, no company can stop evolving its business-critical app for 5 years while it is being rewritten.

Building a new version in parallel with maintaining the old one can be a way to go, but it is very expensive. The development team needs to be doubled, and it requires massive communication between the two teams, as there is typically a huge amount of know-how to transfer. A lot of time is spent studying how (or even why) the old code works, because its original author now works somewhere else.

That’s why many companies are trying to modernize their solutions. In ASP.NET Web Forms, there are ways to migrate slowly to a more modern stack – screen by screen, module by module. The path is long and sometimes painful, but it lets you keep introducing new features and extends the lifetime of the application, at least for a couple of years.


Steps

There are several things you can do to modernize your old ASP.NET Web Forms applications. I will try to address these topics and decisions in the next parts of this series, but here is a quick overview:

You can use a modern front-end framework for new modules of the application. The choice of UI framework depends on the type of the application and on the skills of the team. The new parts of the application can use a completely different UI stack, but if they use the same CSS, the users won’t notice. In the meantime, the old screens can be rewritten to the new technology one by one.

There are many improvements that can be made to the business layer. Of course, monolithic applications cannot easily be converted to microservices; however, some parts of the business layer can often be extracted and maybe containerized. The business-layer class libraries can be converted to .NET Standard, which allows them to be consumed from .NET Core.
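To give a rough idea of what makes a class library a good candidate for .NET Standard: it should depend only on abstractions and avoid .NET Framework-only APIs such as System.Web. The types below are purely hypothetical, just a sketch of the shape such code tends to have:

    // Hypothetical business-layer code with no dependency on System.Web or other
    // .NET Framework-only APIs, so it can live in a .NET Standard 2.0 class library
    // referenced by both the old Web Forms app and new .NET Core services.
    public interface IInvoiceRepository
    {
        Invoice GetById(int id);
    }

    public class Invoice
    {
        public int Id { get; set; }
        public decimal Subtotal { get; set; }
    }

    public class InvoiceCalculator
    {
        private readonly IInvoiceRepository repository;

        public InvoiceCalculator(IInvoiceRepository repository)
        {
            this.repository = repository;
        }

        public decimal GetTotalWithVat(int invoiceId, decimal vatRate)
        {
            var invoice = repository.GetById(invoiceId);
            return invoice.Subtotal * (1 + vatRate);
        }
    }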

Moreover, SQL might not be the right store for everything. Most ASP.NET web applications store all their data in a SQL database. Sometimes that is a good idea and the relational approach is necessary; however, using another type of storage in some parts of the application might remove a lot of complexity. In addition, there are new laws and regulations concerning data privacy (GDPR, for example). You should review which personal data you store and who can access it.

And remember that modernization is an opportunity for refactoring, cleanup and introducing new concepts. If you are not using dependency injection or automated tests, you should seriously think about starting with them now, at least for the new parts of the application.
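As a minimal sketch of what the dependency injection part can look like, here is a composition root built with Microsoft.Extensions.DependencyInjection; the IOrderService and OrderService types are made up for the example:

    using Microsoft.Extensions.DependencyInjection;

    public interface IOrderService
    {
        void PlaceOrder(int productId, int quantity);
    }

    public class OrderService : IOrderService
    {
        public void PlaceOrder(int productId, int quantity)
        {
            // business logic lives here
        }
    }

    public static class CompositionRoot
    {
        // builds the container once at application startup
        public static ServiceProvider Build()
        {
            var services = new ServiceCollection();
            services.AddScoped<IOrderService, OrderService>();
            return services.BuildServiceProvider();
        }
    }

The new screens can resolve their services from the container, while unit tests construct the same classes directly with fake dependencies.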

Also, you should think about the overall architecture of the application. Maybe some parts can be moved to the cloud, replaced by something else, and so on. If there is something that doesn’t scale, it should be the first thing to think about. Create a list of priorities and try to address the ambitious plans for the next few years in the design.


Next part: Modernizing ASP.NET Web Forms Applications (Part 2)

Today, I talked about dependency injection in .NET Core at .NET Summit, a new .NET conference in Minsk, Belarus.

Here are my slides and demos

Last Friday, we organized a small conference called The Future of ASP.NET.

Michal Valasek and I had been playing with the idea of doing it for quite a long time, because since the day Microsoft announced that ASP.NET Web Forms is not going to be supported on ASP.NET Core, we have received thousands of questions from many people and companies. There are many ways to build a web application, and it is not easy to choose the right technology – especially when you are a developer with strong .NET skills but little knowledge of JavaScript.


We had more than 100 attendees at the conference and 5 sessions in total. We started with an introduction to ASP.NET Core, MVC Core and Razor Pages. Then we had sessions about Angular and React, and I had the last session about DotVVM - an open source framework I started 3 years ago that simplifies building line-of-business web apps.

Most of the attendees still have some web applications written in ASP.NET Web Forms. Many of these applications are more than 10 years old and it is almost impossible to rewrite them from scratch - the companies and businesses rely on them and rewriting these applications would take years.


We got a lot of positive feedback about the conference, but I also saw a lot of sad faces. I talked with several attendees, and the sessions made them realize how difficult it is to rewrite an application and possibly switch to Angular or React.

Not only does the dev team need to learn a lot of new things - new languages and concepts (TypeScript, how modules work, REST APIs), libraries and tools (Node.js, npm, webpack) and how to deploy these applications. It often also means a change in the architecture of the application (building a REST API that exposes the business logic) or a complete change of mindset (especially when you are switching to React, which is functional).

There are also a lot of stakeholders (customers, managers) who need to be convinced that a rewrite is worth the effort (and actually, sometimes it is not). Rewriting an entire application with 10 years of history cannot be done in a significantly shorter timeframe. And finally, it is difficult to deliver new features while the team rewrites the solution.

Of course, there are also benefits: getting rid of the technical debt, introducing a microservice architecture or CI/CD (which can lead to better quality and faster release cycles), the ability to make fundamental changes in the data model to reflect changes in the business processes, and so on. The company will also become more attractive to developers because of the modern approaches and technologies. But it is really a challenge, and there are a lot of risks to take care of.


Modernizing Legacy Applications

That’s why I decided to make a demo of integrating DotVVM with an old Web Forms application. A lot of people found this combination very interesting, and it might be the right way to slowly upgrade and modernize their old applications while keeping the old parts running and maintainable.

I took the source code of DotNetPortal, the largest Czech website about .NET development, which I created with my friend Tomas Jecha years ago. The app is written in ASP.NET Web Forms, uses Forms Authentication, hosts some WCF services and things like that.

I replaced Forms Authentication with the OWIN Security libraries, which was actually the most difficult part, and I’ll publish a blog post soon about how to make this happen. Then I just installed the DotVVM NuGet package in the project, added the DotvvmStartup class, and implemented a simple admin section. I created a master page in DotVVM, copied all the contents from the Web Forms one, and made only a few changes to get the same-looking master page in the DotVVM part of the app. Thanks to OWIN Security, I have single sign-on across both parts of the website - the old one and the new one - and because of the same CSS files and the same structure of the master page, the users won’t notice that there are multiple frameworks involved. And I can easily integrate with the old business layer without the need to expose it as a REST API, which would mean a lot of work, refactoring and changes in the deployment process of the application.
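To give an idea of what the glue code looks like, here is a rough sketch of an OWIN startup class and a DotvvmStartup living next to the Web Forms pages. The cookie options, route name and file paths are made up, and a real setup will need more configuration:

    using System.Web.Hosting;
    using DotVVM.Framework.Configuration;
    using Microsoft.Owin;
    using Microsoft.Owin.Security.Cookies;
    using Owin;

    [assembly: OwinStartup(typeof(MyLegacyApp.Startup))]

    namespace MyLegacyApp
    {
        public class Startup
        {
            public void Configuration(IAppBuilder app)
            {
                // cookie authentication shared by the Web Forms and DotVVM parts
                // (this is what replaces the old Forms Authentication)
                app.UseCookieAuthentication(new CookieAuthenticationOptions
                {
                    AuthenticationType = "Cookies",
                    LoginPath = new PathString("/login")
                });

                // register DotVVM in the same project, next to the existing .aspx pages
                app.UseDotVVM<DotvvmStartup>(HostingEnvironment.ApplicationPhysicalPath);
            }
        }

        public class DotvvmStartup : IDotvvmStartup
        {
            public void Configure(DotvvmConfiguration config, string applicationPath)
            {
                // only the new admin section is routed to DotVVM;
                // everything else is still served by Web Forms
                config.RouteTable.Add("Admin_Default", "admin", "Views/Admin/Default.dothtml");
            }
        }
    }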

In a real business application, this approach allows you to build new parts of the application in DotVVM while keeping the rest untouched. New parts can be implemented in DotVVM, which is more comfortable than writing them in Web Forms. The legacy parts can be maintained or rewritten one by one. Some of them may become obsolete over time and can be removed completely. The team can also work on refactoring and decoupling the business logic, and eventually all the old parts may be replaced and the application can be ported to .NET Core and possibly containerized.

Of course, even this process can take years and comes with a lot of challenges too, but it can be a much safer way to adapt to the new platform.

In the past few days, we have seen several samples of C# code compiled or interpreted using WebAssembly: Blazor, for example.

I really like the idea of running C# code in the browser, and I immediately got an idea of how to incorporate this mechanism into DotVVM. Running C# code on the client side could really change the way .NET web applications are developed today, and I am sure that a lot of new front-end frameworks will appear sooner or later.


Currently, there are two ways to package C# code so it can be executed in the browser. The easier (and slower) way is to interpret the actual MSIL code, which is what DotNetAnywhere (used by Blazor) does, for example. The other way, which Mono is pursuing, is to actually compile the .NET assembly to WebAssembly. It will take some time until these technologies are mature enough, but the future seems pretty clear.


Imagine for a while that your new web app has some C# code that runs in the browser. How would you access your data? Through some API, of course. And that’s where it gets uncomfortable.

No, I am not going to cry for ASP.NET Web Services, which had the greatest tooling I have ever seen. Yes, it was a question of a few clicks and there was really nothing to break, but there were a ton of disadvantages.

WCF was also great when it came to writing services and calling them. The number of features and possibilities was really impressive, but the configuration was hell on wheels.


REST

Today, REST is probably the most popular solution. If you go this way, you need to build your Web API, which is quite simple.
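For instance, a typical ASP.NET Core Web API controller is just a thin wrapper over the business layer. The IProductService and ProductDto types below are hypothetical placeholders for your own code:

    using System.Collections.Generic;
    using Microsoft.AspNetCore.Mvc;

    public class ProductDto
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    // hypothetical business-layer service
    public interface IProductService
    {
        IEnumerable<ProductDto> GetAll();
        ProductDto GetById(int id);
    }

    [Route("api/[controller]")]
    public class ProductsController : Controller
    {
        private readonly IProductService productService;

        public ProductsController(IProductService productService)
        {
            this.productService = productService;
        }

        // GET api/products
        [HttpGet]
        public IEnumerable<ProductDto> Get() => productService.GetAll();

        // GET api/products/5
        [HttpGet("{id}")]
        public IActionResult Get(int id)
        {
            var product = productService.GetById(id);
            return product == null ? (IActionResult)NotFound() : Ok(product);
        }
    }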

But then you need to configure Swagger and generate client-side code that can be used in the browser. This process is not very straightforward - there is no magic button for it in Visual Studio. You need to install some NuGet packages and configure them, and then you try to generate the API client in Visual Studio, which works only sometimes, so you might need to use NSwag or another tool to do that. If the Web API is not your own, there might not be Swagger metadata available, and you will need to find some hand-crafted library on GitHub (which will most probably be outdated and unmaintained), or you will need to write the API client yourself.

If the API is yours, things are simpler, but still - you may want to regenerate your API clients in the CI process, which is possible but not easy to set up.

Swagger itself also has a bunch of limitations. Generic types and polymorphism are difficult to express, if not impossible. API versioning works somehow, but it feels like it was forgotten about at the beginning.

And finally, there is no standard way to do paging or sorting of data - you will need to do everything yourself.
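A common workaround is to invent your own paging envelope and query parameters (page, pageSize, sortBy and so on); the shape below is arbitrary, which is exactly the point - every API ends up defining its own:

    using System.Collections.Generic;

    // an ad-hoc paging envelope returned by list endpoints,
    // e.g. GET api/orders?page=2&pageSize=20&sortBy=CreatedDate
    public class PagedResult<T>
    {
        public List<T> Items { get; set; } = new List<T>();
        public int TotalCount { get; set; }
        public int Page { get; set; }
        public int PageSize { get; set; }
    }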


GraphQL

You can try GraphQL, as it is also very popular today. If you haven’t heard about it yet, it works like this: you send a “something-like-JSON without data” with the properties or child objects you need, and the server loads the data into that shape and sends it back to you. You can do filtering, paging and includes with it, and it is strongly typed, which is nice. There are several .NET libraries which claim to support this protocol.
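To illustrate the idea, a client might post a query like the one below and get back a JSON object containing exactly those fields. The field names and the /graphql endpoint are made up for the example:

    using System.Net.Http;
    using System.Text;
    using System.Threading.Tasks;

    public static class GraphQLClientExample
    {
        // the query names only the fields the client actually needs
        private const string Query =
            @"{ ""query"": ""{ orders(first: 10) { id totalPrice customer { name } } }"" }";

        public static async Task<string> LoadOrdersAsync(HttpClient client)
        {
            var content = new StringContent(Query, Encoding.UTF8, "application/json");
            var response = await client.PostAsync("/graphql", content);
            return await response.Content.ReadAsStringAsync();
        }
    }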

However, I have found it very difficult to implement this kind of API on the server side. The user can basically ask for any set of properties and child objects. If you use a SQL database with Entity Framework to access the data, which is the most frequent case, you never know what the query will look like.

The user can ask for so many objects and generate so many Includes that you probably won’t be able to do it in a single SQL query. If the database is large, you should not permit the user to make arbitrary Includes, as it may kill the performance of the app and it is an easy way to mount a DoS attack. And there are so many combinations of things the user can do that you will spend hours and hours deciding what to allow, what to restrict, whether to make a separate SQL query for some collections, etc.


Other ways?

OData tries to solve a similar problem to GraphQL, and it looks easier to implement on the server side because it is more restrictive. But there are some issues too, and many people would tell you that it’s dead. The main issue may be the lack of good clients for non-.NET platforms, and you may run into similar issues as with GraphQL when you try to implement it on the server side.


A lot of ideas…

One of the things I like about using DotVVM to build web apps is that I don’t have to build and maintain the API myself. The framework handles the server-client communication for me, which is extremely helpful when the web app changes its UI and the structure of its data frequently. Almost every such change would require changing the API, and if this API is used by anyone else, a mobile application for example, it creates additional overhead with versioning of this API.

With DotVVM, I can just deploy a new version of the app with a different viewmodel and that’s it. If there is a mobile app, it has its own API, so changes to the web UI don’t require changes to the API. And provided the application is well structured, the API controller is a tiny class that only calls methods from the business layer. And of course, the viewmodel calls similar or the same methods, so the business logic is shared by the mobile and web apps.
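For illustration, a DotVVM viewmodel is just a C# class that calls the business layer, and the framework takes care of transferring its state between the server and the browser. The IOrderService and OrderDto types are hypothetical:

    using System.Collections.Generic;
    using System.Threading.Tasks;
    using DotVVM.Framework.ViewModel;

    public class OrderListViewModel : DotvvmViewModelBase
    {
        // the same business-layer service the mobile API would call
        private readonly IOrderService orderService;

        public List<OrderDto> Orders { get; set; } = new List<OrderDto>();

        public OrderListViewModel(IOrderService orderService)
        {
            this.orderService = orderService;
        }

        public override Task PreRender()
        {
            // no hand-written API endpoint is needed for the web UI;
            // the viewmodel itself is serialized by the framework
            Orders = orderService.GetRecentOrders();
            return base.PreRender();
        }
    }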


If we decide to create a WebAssembly version of DotVVM, we should really focus on making the client-server communication simple. I don’t want to build my own, better Swagger, because it is a lot of work, but still - there must be an easier way.

I am really looking forward to seeing what new possibilities WebAssembly unleashes. And I hope that new frameworks and tools will make things simpler, not more difficult.