A long time ago, I learned that underestimating the architecture of a software project is much worse than over-engineering it. As the project evolves, there are always new requirements that completely break some of the initial assumptions. Any robustness that helps absorb larger changes can save a lot of time and sleepless nights.

Even the tiniest single-purpose script that generates some report can grow and grow until it suddenly becomes a large system running a department of a global corporation. I have seen that so many times, and being lazy at the beginning of a project often means big trouble in the future.

But you know, we have a deadline…

Years ago, I tried to design a shared infrastructure for the projects we were doing in my company. Our typical project was a web-based line-of-business application with lots of CRUD screens, but there were always plenty of custom complex workflows and integrations.

When you look at almost any demo of an ASP.NET application with Entity Framework, you can see the DbContext being accessed right in the MVC controllers, and the EF entities being used even in the views (where they can trigger lazy loading and all sorts of nasty things).

This approach works great for a simple demo, and it is actually good for beginners because it doesn’t get too complicated. But it is a nightmare in any large-scale application, as we saw in many apps written by somebody else that we had to maintain later. It leads to a lot of performance issues (lazy loading everywhere), repetitive code (copy-paste, find & replace programming), and inconsistencies caused by the fact that you can access any table from any page in the application.

We have always strictly separated a “data access layer” containing the Entity Framework stuff, a “business layer” with a domain model mapped from the EF entities, and a “presentation layer” on top.

The business layer has always been the most fascinating one for me – there is always a lot going on inside, and everyone does it quite differently.

Repositories and Query Objects

For some people, repositories are an anti-pattern. For others, the Entity Framework DbSets are your repositories, and there is no need to implement another useless layer on top of them.

However, I have found this layer very useful, so we implement a repository over every DbSet (or any other database table or collection), for a number of reasons. This extra layer gives us great flexibility when we need to implement features like soft delete or entity versioning, and it helps greatly in integration tests because it can be mocked nicely.

Unlike a lot of people, we don’t have an all-purpose GetAll() method returning IQueryable in the repositories. Our repositories can only Insert, Update, Delete, and GetById. There are no methods that return multiple records (other than fetching a few entities by ID).
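A minimal sketch of such a repository contract could look like this (the interface and names below are illustrative, not the actual Riganti Utils Infrastructure API):

```csharp
using System.Collections.Generic;

// Illustrative contract: single-entity operations only – nothing here
// exposes IQueryable or returns arbitrary sets of records.
public interface IRepository<TEntity, TKey>
{
    TEntity GetById(TKey id);
    IList<TEntity> GetByIds(IEnumerable<TKey> ids);   // the only multi-record method
    void Insert(TEntity entity);
    void Update(TEntity entity);
    void Delete(TEntity entity);
}
```

An implementation over a DbSet can then add the soft-delete or versioning logic in exactly one place.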

To query for multiple records, we use query objects. They are great for encapsulating complex queries and for making sure that queries are not repeated across the entire application.

If you expose IQueryable, you will soon have expressions like productsRepo.GetAll().Where(p => p.CategoryId == category) scattered all over the project. If you want to add soft-delete checks later, it is very difficult to find all usages of the table.

By extracting this logic into a query object, we can have a class named ProductsByCategoryQuery that is used everywhere in the application. If we hit any performance problems with Entity Framework (which happens with complex queries), we can reimplement the query object using raw SQL while keeping the rest of the application unchanged, and so on.

Also, query objects give us a unified way to do sorting and paging, which helps with building generic CRUD mechanisms.
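As a sketch of the idea (the base class, Product entity, and ShopDbContext are hypothetical names, not the library’s real types), a query object might look roughly like this:

```csharp
using System.Collections.Generic;
using System.Linq;

// Hypothetical base class: holds the paging settings shared by all queries.
public abstract class QueryBase<TResult>
{
    public int Skip { get; set; }
    public int Take { get; set; } = 20;

    protected abstract IQueryable<TResult> GetQueryable();

    public IList<TResult> Execute()
        => GetQueryable().Skip(Skip).Take(Take).ToList();
}

// The only place in the app that knows how to query products by category,
// so a soft-delete check added later lives on exactly one line.
public class ProductsByCategoryQuery : QueryBase<Product>
{
    private readonly ShopDbContext context;

    public int CategoryId { get; set; }

    public ProductsByCategoryQuery(ShopDbContext context) => this.context = context;

    protected override IQueryable<Product> GetQueryable()
        => context.Products
                  .Where(p => p.CategoryId == CategoryId)
                  .Where(p => !p.IsDeleted)          // soft delete handled centrally
                  .OrderBy(p => p.Name);
}
```

If this query ever becomes an EF bottleneck, only GetQueryable() needs to be swapped for a raw SQL implementation.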

Facades and Services

The business logic of the application is implemented in facades and service classes. I use the term “facade” for classes that are used directly from the presentation layer – facades expose all the functions the UI needs.
All other classes with business logic are referred to as services; they can be grouped into quite complex hierarchies, depend on each other, and so on.

These facades and services use the repositories and query objects to implement everything the business layer does.
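As a rough illustration (all type and method names here are hypothetical), a facade composes repositories, query objects, and services behind methods that the UI calls directly:

```csharp
using System.Collections.Generic;

// Illustrative facade: the presentation layer only ever talks to this class,
// never to the DbContext, repositories, or query objects directly.
public class ProductFacade
{
    private readonly IRepository<Product, int> productRepository;
    private readonly ProductsByCategoryQuery productsByCategoryQuery;
    private readonly PricingService pricingService;   // an example "service" dependency

    public ProductFacade(IRepository<Product, int> productRepository,
                         ProductsByCategoryQuery productsByCategoryQuery,
                         PricingService pricingService)
    {
        this.productRepository = productRepository;
        this.productsByCategoryQuery = productsByCategoryQuery;
        this.pricingService = pricingService;
    }

    public IList<Product> GetProductsInCategory(int categoryId)
    {
        productsByCategoryQuery.CategoryId = categoryId;
        return productsByCategoryQuery.Execute();
    }

    public void DiscontinueProduct(int productId)
    {
        var product = productRepository.GetById(productId);
        pricingService.RemoveFromPriceLists(product);
        productRepository.Delete(product);
    }
}
```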


In our case, the domain model of the application is typically a bunch of DTOs (data transfer objects). Unlike what most books say, I like the anemic model approach much more. I find it very impractical to inject dependencies into domain objects.

Because most of the domain model objects are very similar to the EF entities, we have adopted AutoMapper. It is a really great tool, and I like its ProjectTo() method, which can translate the mappings into IQueryable projections.

There are some caveats, however. You should avoid nesting DTOs to prevent the queries from becoming super-complicated. We have also found that it is a very good idea to have multiple DTOs for every entity: list DTOs used in grids and lists, detail DTOs for the edit forms, basic DTOs for filling comboboxes, and many more. Thanks to AutoMapper, most of our mappings are just CreateMap<TEntity, TDTO>() with no additional configuration.
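For instance, the mapping setup might look like this (CreateMap and ProjectTo are AutoMapper’s real API; the entity and DTO names are made up for illustration):

```csharp
using AutoMapper;
using AutoMapper.QueryableExtensions;   // provides ProjectTo()

// Several purpose-built DTOs for the same Product entity.
public class ProductListDTO   { public int Id { get; set; } public string Name { get; set; } }
public class ProductDetailDTO { public int Id { get; set; } public string Name { get; set; } public string Description { get; set; } }
public class ProductBasicDTO  { public int Id { get; set; } public string Name { get; set; } }

public static class MappingConfig
{
    public static MapperConfiguration Create() => new MapperConfiguration(cfg =>
    {
        // Property names match the entity, so no extra configuration is needed.
        cfg.CreateMap<Product, ProductListDTO>();
        cfg.CreateMap<Product, ProductDetailDTO>();
        cfg.CreateMap<Product, ProductBasicDTO>();
    });
}

// ProjectTo translates the mapping into a LINQ Select, so EF only fetches
// the columns the DTO actually needs:
//   var items = context.Products.ProjectTo<ProductListDTO>(config).ToList();
```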

…and it’s Open Source

I have shown how a business layer can be designed in many of my conference and user group talks. Our Riganti Utils Infrastructure library is developed on GitHub, and anyone is welcome to use it or take inspiration from it.

The current version of the infrastructure is 2, and we are currently designing its third version.

In the next part of this article, I will try to address the issues with v2 and show how to make the infrastructure easier to use.

Last Friday, we organized a small conference called The Future of ASP.NET.

Michal Valasek and I had been playing with the idea of doing it for quite a long time, because since the day Microsoft announced that ASP.NET Web Forms was not going to be supported on ASP.NET Core, we have gotten thousands of questions from many people and companies. There are many ways to build a web application, and it is not easy to choose the right technology – especially when you are a developer with strong .NET skills but little knowledge of JavaScript.

(Photo: Dotnetcollege)


We had more than 100 attendees at the conference and 5 sessions in total. We started with an introduction to ASP.NET Core, MVC Core, and Razor Pages. Then we had sessions about Angular and React, and I gave the last session about DotVVM – an open source framework that I started 3 years ago and that simplifies building line-of-business web apps.

Most of the attendees still have some web applications written in ASP.NET Web Forms. Many of these applications are more than 10 years old, and it is almost impossible to rewrite them from scratch – the companies and businesses rely on them, and rewriting these applications would take years.


We got a lot of positive feedback about the conference, but I also saw a lot of sad faces. I talked with several attendees, and the sessions made them realize how difficult it is to rewrite their application and possibly switch to Angular or React.

It is not only that the dev team needs to learn a lot of new things – new languages and concepts (TypeScript, how modules work, REST APIs), libraries and tools (Node.js, npm, webpack), and things like how to deploy such applications. It often also means a change in the architecture of the application (building a REST API that exposes the business logic) or a complete change of mindset (especially when you are switching to React, which is functional).

There are also a lot of stakeholders (customers, managers) who need to be convinced that the rewrite is worth the effort (and actually, sometimes it is not). Rewriting an entire application with 10 years of history cannot be done in a significantly shorter timeframe. And finally, it is difficult to deliver new features while the team rewrites the solution.

Of course, there are also benefits: getting rid of the technological debt; introducing a microservice architecture or CI/CD, which can lead to better quality and faster release cycles; the ability to make fundamental changes in the data model to reflect changes in the business processes; and so on. The company will also become more attractive to developers because of the modern approaches and technologies. But it is really a challenge, and there are a lot of risks to take care of.


Modernizing Legacy Applications

That’s why I decided to make a demo of integrating DotVVM into an old Web Forms application. A lot of people found this combination very interesting – it might be the right way to slowly upgrade and modernize their old applications while keeping the old parts running and maintainable.

I took the source code of DotNetPortal, the largest Czech website about .NET development, which I created with my friend Tomas Jecha years ago. The app is written in ASP.NET Web Forms, uses Forms Authentication, hosts some WCF services, and things like that.

I replaced Forms Authentication with the OWIN Security libraries, which was actually the most difficult part – I’ll publish a blog post soon about how to make this happen. Then I just installed the DotVVM NuGet package in the project, added the DotvvmStartup class, and implemented a simple admin section. I created a master page in DotVVM, copied all the contents from the Web Forms one, and made only a few changes to get the same-looking master page in the DotVVM part of the app.

Thanks to OWIN Security, I have single sign-on across both parts of the website – the old one and the new one – and thanks to the same CSS files and the same structure of the master page, the user won’t notice that multiple frameworks are involved. And I can easily integrate with the old business layer without exposing it as a REST API, which would mean a lot of work, refactoring, and changes in the deployment process of the application.
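The DotVVM side of such a hybrid setup is small. A minimal startup class registering one new route might look roughly like this (the route name and file paths are made up for illustration; check the DotVVM docs for the exact API of your version):

```csharp
using DotVVM.Framework.Configuration;

public class DotvvmStartup : IDotvvmStartup
{
    public void Configure(DotvvmConfiguration config, string applicationPath)
    {
        // Only the new admin section is served by DotVVM;
        // all other URLs still go through the Web Forms pipeline.
        config.RouteTable.Add("AdminDashboard", "admin", "Views/Admin/Dashboard.dothtml");
    }
}
```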

In the case of a real business application, this approach allows building new parts in DotVVM, which is more comfortable than writing them in Web Forms, while keeping the rest of the application untouched. The legacy parts can be maintained or rewritten one by one; some of them may become obsolete over time and can be removed completely. The team can also work on refactoring and decoupling the business logic, and eventually all the old parts may be replaced and the application can be ported to .NET Core and possibly containerized.

Of course, even this process can take years and includes a lot of challenges too, but it can be a much safer way to adapt to the new platform.

In the past few days, we have seen several samples of C# code compiled to or interpreted in WebAssembly – Blazor, for example.

I really like the idea of running C# code in the browser, and I immediately got an idea for how to incorporate this mechanism into DotVVM. Running C# code on the client side could really change the way .NET web applications are developed today, and I am sure that a lot of new front-end frameworks will appear sooner or later.


Currently, there are two ways to package C# code so it can be executed in the browser. The easier (and slower) way is to interpret the actual MSIL code, which is what DotNetAnywhere (used by Blazor) does, for example. The other way, which Mono is pursuing, is to actually compile the .NET assembly to WebAssembly. It will take some time until these technologies are mature enough, but the future seems pretty clear.


Imagine for a while that your new web app has some C# code that runs in the browser. How would you access your data? Through some API, of course. And that’s where it gets uncomfortable.

No, I am not going to cry for ASP.NET Web Services, which had the greatest tooling I have ever seen. Yes, it was a matter of a few clicks and there was really nothing to break, but there were a ton of disadvantages.

WCF was also great when it came to writing services and calling them. The number of features and possibilities was really impressive, but the configuration was hell on wheels.



Today, REST is probably the most popular solution. If you go this way, you need to build your Web API, which is quite simple.

But then you need to configure Swagger and generate client-side code that can be used in the browser. This process is not very straightforward – there is no magic button for it in Visual Studio. You need to install some NuGet packages and configure them, and then you try to generate the API client in Visual Studio, which works only sometimes, so you might need NSwag or another tool to do it. If the Web API is not your own, there might be no Swagger metadata available, and you will need to find some hand-crafted library on GitHub (which will most probably be outdated and unmaintained), or you will just have to write the API client yourself.

If the API is yours, things are simpler, but still – you may want to regenerate your API clients in the CI process, which is possible but not easy to set up.

And then, Swagger itself has a bunch of limitations. Generic types and polymorphism are difficult to express, if not impossible. API versioning works somehow, but it feels like it was an afterthought.

And finally, there is no standard way to do paging or sorting of data – you have to do everything yourself.


GraphQL

You can also try GraphQL, which is very popular today. If you haven’t heard about it yet, it works like this: you send “something like JSON, but without the data” – just the properties and child objects you need – and the server fills in the data and sends the object back to you. You can do filtering, paging, and includes, and it is strongly typed, which is nice. There are several .NET libraries that claim to support this protocol.
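For example, a client asking only for product names, prices, and category titles would send a query shaped like this (a generic GraphQL sketch with made-up field names, not tied to any particular .NET library):

```graphql
# The client lists exactly the fields it wants back – nothing more.
query {
  products(categoryId: 5, first: 20) {
    name
    price
    category {
      title
    }
  }
}
```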

However, I have found it very difficult to implement this kind of API on the server side. The user can basically ask for any set of properties and child objects. If you use a SQL database with Entity Framework to access the data, which is the most frequent case, you never know what the query will look like.

The user can ask for so many objects and generate so many Includes that you probably won’t be able to serve it with a single SQL query. If the database is large, you should not permit the user to make arbitrary Includes, as it may kill the performance of the app and it is an easy vector for a DoS attack. And there are so many combinations of things the user can do that you will spend hours and hours deciding what to allow, what to restrict, whether to make a separate SQL query for some collections, and so on.


Other ways?

OData tries to solve a similar problem to GraphQL, and it looks easier to implement on the server side because it is more restrictive. But there are some issues too, and many people will tell you that it’s dead. The main issue may be the lack of good clients for non-.NET platforms, and you may run into similar issues as with GraphQL when you try to implement it on the server side.


Lots of ideas…

One of the things I like about using DotVVM to build web apps is that I don’t have to build and maintain the API myself. The framework handles the client-server communication for me, which is extremely helpful when the web app changes its UI and the structure of its data frequently. Almost every such change would otherwise require changing the API, and if this API is used by anyone else – a mobile application, for example – it creates additional overhead with versioning of this API.

With DotVVM, I can just deploy a new version of the app with a different viewmodel, and that’s it. If there is a mobile app, it has its own API, so changes to the web UI don’t require changes to the API. And provided the application is well structured, the API controller is a very thin class that only calls methods from the business layer. The viewmodel calls similar or the same methods, so the business logic is shared by the mobile and web apps.


If we decide to create a WebAssembly version of DotVVM, we should really focus on making the client-server communication simple. I don’t want to build my own, better Swagger, because that is a lot of work, but still – there must be an easier way.

I am really looking forward to the new possibilities WebAssembly unleashes. And I hope that new frameworks and tools will make things simpler, not more difficult.

I have been blogging since 2007 on a Czech website called DotNetPortal. I have written a lot of articles about .NET and web development, as well as many articles with personal opinions on various tech topics.

Recently, I started speaking at conferences abroad and met many people who started following me on Twitter. That’s why I started tweeting in English, and now I’d like to write something occasionally, so I have deployed Mads Kristensen’s MiniBlog (which is a very nice project, by the way).


Who am I?

I am a 29-year-old guy from the Czech Republic. Six years ago, I started RIGANTI, a small software company that builds custom line-of-business apps for customers from the Czech Republic, Great Britain, and the US. I have also been publishing articles, speaking at many user groups and local conferences, and teaching programming and in-house courses in various companies in the Czech Republic.

I got the MVP award 8 years ago, and recently I became a Microsoft Regional Director.

Three years ago, I started working on my biggest project – an open source MVVM framework called DotVVM. Soon enough, the project became too big for one person to maintain, so I involved other people from my company. Now DotVVM is used by dozens of companies in the Czech Republic and other countries.

If I had free time, I would play the piano or golf, or finish my model train landscape. But that’s not going to happen anytime soon, because I am super busy with so many interesting things at work.


The Goal

I will write about various .NET stuff I run into.

You can also follow me on Twitter. I don’t produce many tweets of my own, but I watch a lot of accounts and aggregate interesting stuff from the .NET and Microsoft world.