A few weeks ago, I got an idea to implement an interesting feature in DotVVM – server-side viewmodel caching. It can have a huge impact on the performance of DotVVM applications, as it can reduce the data transferred on postbacks to almost nothing.

Intro – the basic principles of DotVVM

The idea behind DotVVM was quite simple – we want to use MVVM for building web applications, and we want to write it in C#.

That’s why the viewmodel in DotVVM is a C# class and lives on the server, where the .NET runtime is. In order to have client-side interactivity in the web page, the viewmodel needs to operate on the client side as well. Therefore, DotVVM serializes the viewmodel into JSON and includes it in the page HTML.

When the page is loaded, the client-side library of DotVVM parses the JSON and creates a Knockout JS instance of the viewmodel. Thanks to this, the DotVVM controls can use the Knockout data-bind attributes to offer their functionality. DotVVM just translates <dot:TextBox> to <input data-bind="…" /> to make it work.
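
For illustration, a minimal viewmodel and the markup that uses it might look like this (the class and property names are made up for this example):

    using DotVVM.Framework.ViewModel;

    public class ContactViewModel : DotvvmViewModelBase
    {
        // serialized to JSON and turned into a Knockout observable on the client
        public string Name { get; set; }
    }

    <!-- in the .dothtml page; renders roughly as <input data-bind="value: Name" /> -->
    <dot:TextBox Text="{value: Name}" />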

When the user clicks a button, there is a method that needs to be called. However, this method lives on the server. DotVVM has to take the Knockout JS viewmodel, serialize it and send it to the server, where it is deserialized so the method has all the data and state it needs to run. After the method completes, the viewmodel is serialized again and sent to the client, where it is applied to the Knockout JS instance of the viewmodel and all controls in the page are updated.
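
Continuing the sketch above, a button with a command binding and the server method it calls might look like this:

    <dot:Button Text="Save" Click="{command: Save()}" />

    public class ContactViewModel : DotvvmViewModelBase
    {
        public string Name { get; set; }

        // runs on the server during the postback,
        // with the viewmodel state deserialized from the request
        public void Save()
        {
            Name = Name.Trim();
        }
    }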

An entire viewmodel is sent to the server

Changes made on the server are sent to the client

The entire process involves transferring the viewmodel from the server to the client and back. The response to the postback is generally efficient, as it doesn't need to transfer the entire viewmodel: the server compares the current viewmodel with the version received from the client and sends only the changes.
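
The principle of that diff can be sketched like this (an illustration of the idea using Newtonsoft.Json, not DotVVM's actual implementation):

    using Newtonsoft.Json.Linq;

    public static class ViewModelDiff
    {
        // Returns an object containing only the top-level properties
        // whose values differ between the two viewmodel snapshots.
        public static JObject Diff(JObject original, JObject current)
        {
            var diff = new JObject();
            foreach (var property in current.Properties())
            {
                var before = original[property.Name];
                if (before == null || !JToken.DeepEquals(before, property.Value))
                    diff[property.Name] = property.Value.DeepClone();
            }
            return diff;
        }
    }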

But because of the stateless nature of DotVVM, the client has to send the entire viewmodel to the server. Or rather, it had to – this is exactly what changes with server-side viewmodel caching.

DotVVM already offers several mechanisms to prevent the entire viewmodel from being transferred:

  • The Bind attribute can specify the direction in which the data will be transferred (see the sketch below).
  • Static commands allow you to call a method, pass it any arguments, and update the viewmodel with the result returned from the call.
  • REST API bindings can load additional data from a REST API; these data are not considered part of the viewmodel and therefore are not transferred on postbacks.
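
For illustration, the first two mechanisms might be used like this (a sketch with made-up names – ProductCatalog stands in for your own service; consult the DotVVM documentation for the exact API):

    using System.Collections.Generic;
    using DotVVM.Framework.ViewModel;

    public class ProductsViewModel : DotvvmViewModelBase
    {
        // transferred only from the server to the client, never sent back
        [Bind(Direction.ServerToClient)]
        public List<string> ProductNames { get; set; }

        public string Filter { get; set; }

        // can be called from a static command without posting the viewmodel back
        [AllowStaticCommand]
        public static List<string> LoadProducts(string filter)
        {
            return ProductCatalog.Search(filter);   // ProductCatalog is a made-up service
        }
    }

    <!-- only the Filter value is sent; the result is assigned on the client -->
    <dot:Button Text="Load" Click="{staticCommand: ProductNames = ProductsViewModel.LoadProducts(Filter)}" />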

However, each of these mechanisms has its limitations and is more difficult to use. The comfort of a command binding that triggers a full postback is very tempting.

What about storing the viewmodel on the server?

The reason for sending the entire viewmodel to the server is simple – the viewmodel is not stored anywhere. When the server completes the HTTP request and sends the viewmodel to the client, it forgets about it immediately.

The server-side caching feature changes this behavior – the viewmodel is kept on the server (in a JSON-serialized form, so the live object with its dependencies on various application services can be garbage-collected) and the client sends only the diff on postback.

Only the changes are sent to the server; the rest is retrieved from the viewmodel cache

Storing the viewmodel on the server introduces several challenges:

  • It will require more server resources. The viewmodels are not large in general (the average size is 1-15 kB depending on the complexity of the page) and they can be compressed thanks to their text-based nature.
  • It can make DoS attacks easier if an attacker finds a way to exhaust server resources.
  • When the application is hosted on a web farm, the cache must be distributed.
  • What about cache synchronization in case of multiple concurrent postbacks?
  • Is the cache worth the trouble at all?

During our use of DotVVM on customer projects, we have made several observations:

  • When DotVVM is used on public-facing websites, the viewmodels are tiny and mostly static. Very often, all HTTP GET requests produce the same viewmodel, and it changes only when the user interacts with the page (e.g. enters a value in a textbox).
  • When DotVVM is used in line-of-business applications with many GridView controls, most of the viewmodel is occupied by the contents of the GridViews. If the user doesn't use the inline edit functionality, the GridView is read-only and there is not much value in transferring its contents back to the server – the server can retrieve the most current data from the database.

It is obvious that server-side caching will not help much in the first case; however, it will help a lot in the second one.

Imagine a page with a GridView control with many rows. Each row contains a button that deletes that particular row.

The viewmodel will contain a collection of objects representing the rows. The data are read-only, so they cannot change on the client. When the delete button is clicked, the viewmodel doesn't need to be transferred to the server at all – we have saved almost 100% of the data.
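
A viewmodel for such a page might look like this (a sketch – OrderDto and OrderRepository are made-up stand-ins for your data access code):

    using System.Collections.Generic;
    using System.Threading.Tasks;
    using DotVVM.Framework.ViewModel;

    public class OrderDto
    {
        public int Id { get; set; }
        public string Customer { get; set; }
    }

    public class OrdersViewModel : DotvvmViewModelBase
    {
        public List<OrderDto> Orders { get; set; }

        public override Task PreRender()
        {
            // loaded fresh on every request; the grid itself is read-only
            Orders = OrderRepository.GetAll();
            return base.PreRender();
        }

        public void Delete(int id)
        {
            // with server-side caching, clicking Delete sends only the diff
            // (essentially nothing), not the whole Orders collection
            OrderRepository.Delete(id);
        }
    }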

There is still some metadata that needs to be transferred, like the cached viewmodel ID and the CSRF token; also, the encrypted and protected values are excluded from the caching mechanism. But these data are relatively small in comparison to the GridView data.

Even if the user decides to use the inline editing functionality and updates a particular row, only the changes made in the viewmodel will be transferred. If there were 50 rows and one was changed, we save about 98% of the data.

The viewmodels are 1 to 15 kB on average, so a single request is not such a big deal, but multiply it by the number of concurrent users, or consider users on cellular networks, and the difference can be quite significant.

Deduplication of cache entries

The observation about public-facing websites mentioned in the previous section brings another challenge – imagine there are thousands of users visiting the website. Most of them will leave immediately without making any postback, or they will just browse a few pages without any interaction that would trigger a postback.

As mentioned before, the viewmodels in this case can be static. They will contain a few values that are used by the page, and those values will be the same every time the page is loaded.

Imagine a page with a contact form. The viewmodel will contain properties for the subject, the message contents and the reply e-mail address, but they will be empty unless the user changes them.

That’s why we’ve decided to use a hash of the viewmodel as the cache key. These pages will not exhaust the cache with thousands of identical entries because they will get the same key. This allows having just one cache entry per page, shared between all its users (unless they change something and make a postback).

The encrypted and protected values are excluded from the caching mechanism, so this should not introduce any security issues. When the user changes the viewmodel, it will get a different hash and will be stored in a separate cache entry.
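
The deduplication principle can be sketched like this (an illustration of the idea, not DotVVM's actual code):

    using System;
    using System.Security.Cryptography;
    using System.Text;

    public static class ViewModelCacheKey
    {
        // Two users whose viewmodels serialize to byte-for-byte identical JSON
        // get the same key, so the entry is stored in the cache only once.
        public static string Compute(string viewModelJson)
        {
            using (var sha = SHA256.Create())
            {
                var hash = sha.ComputeHash(Encoding.UTF8.GetBytes(viewModelJson));
                return Convert.ToBase64String(hash);
            }
        }
    }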

Can the cache entry expire?

Of course it can. Most of us have probably had issues with expired sessions. Thankfully, this will not be the case with DotVVM. We always have the most current viewmodel on the client, so when a postback is sent and the server cannot find the viewmodel in the cache, it responds that there was a cache miss. In this case, DotVVM automatically makes an additional (full) postback, sending the entire viewmodel to the server. As long as the authentication cookie is still valid, the postback will be performed – it will just be a little slower than usual.

The problem is now reduced to fine-tuning the cache settings – choosing a good compromise between the lifetime of the cached viewmodels and the cache size (and a proper storage – it may not be efficient to keep the data in memory).

It will take a lot of measurement, and probably some tooling, to help make informed decisions on how to set up the cache correctly.

Can I try it now?

Not yet, but very soon. I have just added a new API for turning on experimental features. In the next preview release of DotVVM, there will be an option to turn this feature on globally, or only for specific pages.
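
The exact API may still change before the release, but based on the experimental features mechanism mentioned above, enabling it could look something like this:

    // in DotvvmStartup.Configure – a sketch, the final API may differ
    config.ExperimentalFeatures.ServerSideViewModelCache.EnableForAllRoutes();

    // or only for specific routes:
    // config.ExperimentalFeatures.ServerSideViewModelCache.EnableForRoutes("OrdersPage");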

Recently, I have written a series of articles on modernizing ASP.NET Web Forms apps. This topic has become even more important after the recent announcement of .NET 5, which made it clear that ASP.NET Web Forms will not be ported to .NET Core.

TL;DR: DotVVM can run side by side with ASP.NET Web Forms, and it also supports .NET Core. You can install it in your Web Forms project and start rewriting individual pages from Web Forms to DotVVM (similar syntax and controls, with an MVVM approach) while still being able to add new features or deploy the app to the server. After all the Web Forms code is gone, you can just switch the project to .NET Core. See the sample repo.

To rewrite or continuously upgrade

There are still plenty of ASP.NET Web Forms applications out in the world, and their authors now face a difficult decision:

  • Throw the old application away and rewrite it from scratch on a modern stack
  • Try to continuously modernize the app, rewriting the pages one by one

The first option – a total rewrite – is very time-consuming. If the original application was developed for more than 10 years, which is not uncommon, I can hardly imagine it being rewritten in less than half of that time. In addition, when the application needs to support the company's daily tasks and workflows while responding to rapidly changing business needs, it is impossible to stop adding new features for months or even years because of the rewrite.

Of course, the company can build a new team to develop the new version while the old team keeps maintaining and extending the old app, but that means double the effort and a vast amount of time spent transferring the domain knowledge from the old team to the new one. Also, many things will need to be done twice, and it will probably take years until the new version is ready for production.

And finally, management never likes to hear about rewriting software from scratch. I have seen many situations where the project leads had to fight very hard to justify such a decision.

The second option – continuous modernization – looks a little easier. Imagine you have a Web Forms application with hundreds of ASPX pages. If you can rewrite one page per day using another technology, and integrate the new pages with the old ones so the user won't notice they are built on different stacks, after several months you can get rid of all the ASPX pages and end up with a more modern solution. It may not be perfect, as there will still be some legacy code, but it is much better than nothing. And if you are lucky and don't use WCF or Workflow Foundation, which are also not supported on .NET Core, you will be able to move the project to .NET Core.

Two projects? Possible, but maybe more difficult than it has to be.

But how to do it? Let’s suppose we have an old app that needs to be maintained.

Shall we create a new ASP.NET Core project that would run side by side, maybe on the same domain, and make links from the old to new pages and vice versa?

It can work if the same CSS styles are used. The user should not be able to tell that they are actually using two web apps.

However, there may be some issues with sign-on, as the new app may use different authentication cookies than the old one – the authentication will need to be integrated somehow. Also, if session state is used (which is not a good idea in general, but is quite frequent), it will not be shared between the two applications.

Moreover, this will require some configuration changes on the server, and the deployment model will need to be changed as you will now deploy two applications instead of one.

If the application caches some data in memory, you may also run into various concurrency issues, as the caches will need to be invalidated across the two apps. There will also be some duplication if the business layer is not properly separated from the UI.

What is more, if you decide to use Angular, React or another JavaScript framework, there is also a large amount of knowledge required to start working with these technologies. The business logic and data will have to be exposed through a REST API (or GraphQL), which may be an additional effort at the beginning.

DotVVM can make this simpler

What if there was a framework that could run side by side with ASP.NET Web Forms in one application, but also worked with the newest versions of ASP.NET Core?

It would make so many things easier. You would have just one project to deploy. There would be no changes in the deployment model – it would still be an ASP.NET application. You wouldn't need to worry about sharing the user identity between two apps, because there wouldn't be two apps.

With DotVVM, it is quite easy. This was one of the initial thoughts that led us to start the project. If you haven't heard of it – it is an open-source MVVM framework supporting both ASP.NET and ASP.NET Core. It has a nice Visual Studio integration and recently joined the .NET Foundation.

How does the migration work?

You can install the DotVVM NuGet package in the Web Forms application, and it will run side by side with the ASPX pages that are already in the project.

From the deployment perspective, there are no changes – it is still the same ASP.NET application that gets deployed to the server as usual.

You can start by copying the Web Forms master page and converting it to the DotVVM syntax. It is different, but not by much – most of the controls have the same names; the main difference is the MVVM approach. Use the same CSS so the users won't notice the change.

Then, you can start rewriting the pages one by one from Web Forms to DotVVM. DotVVM contains similar controls – GridView, Repeater, FileUpload and more. The most difficult part will be extracting the business logic from the ASPX code-behind into the DotVVM viewmodel, but it is still C#.
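
As a rough illustration of how close the two dialects are, here is a Web Forms snippet and its DotVVM counterpart (the names are made up):

    <%-- ASP.NET Web Forms: state lives in controls, the handler in code-behind --%>
    <asp:TextBox ID="SearchText" runat="server" />
    <asp:Button Text="Search" OnClick="SearchButton_Click" runat="server" />

    <%-- DotVVM: the state and the Search() method live in the viewmodel --%>
    <dot:TextBox Text="{value: SearchText}" />
    <dot:Button Text="Search" Click="{command: Search()}" />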

If your business layer was properly separated, this should be trivial. If not, take it as an opportunity to do the refactoring and get cleaner code. Thanks to the MVVM approach, your viewmodels will be testable, and the overall quality of the application will greatly improve.
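
For instance, a viewmodel method can be unit-tested without spinning up a web server at all (a sketch using xUnit and a hypothetical SearchViewModel):

    using System.Collections.Generic;
    using Xunit;

    public class SearchViewModelTests
    {
        [Fact]
        public void Search_FiltersOutNonMatchingItems()
        {
            // plain C# objects – no HTTP context or page lifecycle needed
            var viewModel = new SearchViewModel
            {
                Items = new List<string> { "apple", "banana" },
                SearchText = "app"
            };

            viewModel.Search();

            Assert.Single(viewModel.Results);
            Assert.Equal("apple", viewModel.Results[0]);
        }
    }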

DotVVM pages will share the environment with the ASP.NET ones, including the current user identity. You won't need to expose your business logic through a REST API; you can keep the same code interacting with the database.

At each point of the process, the application works, can be extended with new features, and can be deployed. The team is not locked into the migration and can do other things simultaneously.

After a few months, when all the ASPX pages are rewritten in DotVVM, you will be able to create a new ASP.NET Core project, move all DotVVM pages and viewmodels into it and use them with .NET Core. The syntax of DotVVM is the same on both platforms.

If you have been using Forms Authentication in Web Forms, you will need to switch to ASP.NET Core cookie authentication, but that should be an easy enough change.
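
The cookie authentication setup in ASP.NET Core is just a few lines in the Startup class (a standard snippet – the login path is an example, adjust it to your app), plus a call to app.UseAuthentication() in the request pipeline:

    using Microsoft.AspNetCore.Authentication.Cookies;
    using Microsoft.Extensions.DependencyInjection;

    public void ConfigureServices(IServiceCollection services)
    {
        services.AddAuthentication(CookieAuthenticationDefaults.AuthenticationScheme)
            .AddCookie(options =>
            {
                // replaces the loginUrl attribute from web.config
                options.LoginPath = "/login";
            });
    }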

Are there any samples?

Yes, I have created a GitHub repo which describes the process in detail. There are five branches, each one showing one step of the process.

In the first one, there is a simple ASP.NET Web Forms application. In the last one, there is the same app rewritten in DotVVM and running on .NET Core.

We have used this approach to migrate several Web Forms applications. If the business layer is separated properly, rewriting one page takes about an hour on average. If the business logic is tangled up with the UI, it can take significantly more time, but it can also be a way to improve the application's testability, and I think it is worth it – even poorly written apps can be saved this way.

What if I need help?

We’ll be happy to help you. You can also contact the DotVVM team on our Gitter chat. Check out the DotVVM documentation and the cheat sheet of differences between Web Forms and DotVVM.

Recently, I have been doing a few live streams with Michal Altair Valasek, a fellow MVP from the Czech Republic. We took his AskMe demo app, which shows how to build a non-trivial web app in ASP.NET Core, and made a version built with DotVVM.

Those streams were in Czech, but I got some requests to do live streams in English. So this time, I will be streaming in English, and I will try to fix some issues in DotVVM and bring a few new features to the framework.

The stream will be on my personal Twitch channel on Thursday, April 4, 2019, at 7:30 PM CEST.


It has been a long time since I discovered this nice Reddit thread about the Metric vs. Imperial system. There was an amazing comment listing all kinds of ounces, barrels, gallons and other crazy stuff, but recently it got deleted.

Thanks to Removeddit, a site that keeps deleted Reddit comments, I succeeded in recovering it. I believe the author had a reason for deleting the comment, and I hope they won't mind – I just have to post it here so I can read it and laugh again and again:

There are four different ounces in use:

  • A Troy ounce is about 31.1 grams.

  • An Avoirdupois ounce is about 28.3 grams.

  • An Imperial fluid ounce is about 28.4 ml.

  • A US fluid ounce is about 29.57 ml.

This is related to the fact that a US fluid ounce is 1/16 of a US pint, while an Imperial fluid ounce is 1/20 of an Imperial pint, and an Imperial pint and a US pint are different. There are in fact three pints:

  • An Imperial pint is about 568 ml, or 20 Imperial fluid ounces.

  • A US pint is about 473 ml, or 16 US fluid ounces.

  • A dry pint is about 551 ml, or XXX dry fluid ounces... no, wait a "dry fluid ounce" doesn't exist, I wonder why.

This is in turn related to the fact that there are three gallons:

  • The Imperial gallon is defined in metric terms as exactly 4.54609 litres. It contains 8 Imperial pints.

  • The US gallon is defined as 231 cubic inches. It contains 8 US pints.

  • The dry gallon is defined as 1/8 of a US bushel. It contains 8 dry pints.

However, there are only two types of bushels:

  • The imperial bushel, equal to 8 imperial gallons.

  • The US dry bushel, equal to 8 US dry gallons.

There is no such thing as a US (non-dry) bushel, so if you want to convert a US gallon into the next higher US unit of volume, you have to use the beautiful correspondence:

  • 1 US bushel = 9.30918 US gallons

The next unit higher up for measuring volume is the barrel (thanks /u/AML86 for the reminder). There are at least ten different units called a barrel. Among these we will mention:

  • A dry barrel is 7056 cubic inches, which converts to a convenient ~3.28 dry bushels.

  • A barrel for cranberries (yes, really) is 5826 cubic inches, more or less ~2.71 bushels.

  • An oil barrel is 42 US gallons.

I have been speaking at various conferences and events for more than 10 years. Over the years, I tried several ways of recording my sessions. Most of them were just OK, but they were not 100% reliable, and it always bothered me when I ran a conference with amazing sessions and some of them failed to record.

Recently, I have built a custom device that can do the entire recording or live-streaming job without interfering with anything I want to present, and can be used at conferences where devices change frequently and each one has a completely different setup.

But first, let me briefly go over the methods I have used so far, and point out their pros and cons.

Camtasia

Camtasia is probably the best software for screen recording. It comes with a simple editor where you can do basic processing of the video, and export it to common formats.

I was using Camtasia successfully for a couple of years; however, there are some things you'll want to change in the Camtasia settings, otherwise your recording can easily be ruined:

  1. The default keyboard shortcuts in Camtasia collide with some shortcuts in Visual Studio. For example, when you debug something during your presentation and press F10, Camtasia will stop the recording. If you don't notice, you have just lost the rest of your session.
  2. Camtasia stores the recordings on your C drive by default. If you have multiple drives (C for the system and D for data), you may want to put them in some other location so you won't run out of disk space. It is a good idea to place the Camtasia temp folder on a USB 3.0 stick so your main drive keeps its full performance for Visual Studio builds, virtualization, or anything else you use in your presentation.
  3. Double-check your microphone settings so you don't lose your audio. Recording the sound on a separate backup device is always a good idea. On some PCs, I have not been able to record sound from Camtasia at all – sometimes I hit various driver issues, or there was a problem when two audio devices on the PC had the same name.

The main issue with Camtasia is that it drains the performance of your machine while you are presenting. That is not a problem if you only have PowerPoint slides, but if you need to use Visual Studio, Docker and other tools during the session, your machine will be much slower than without recording. Two or three times, my laptop overheated and simply turned off in the middle of my presentation. The recording was, of course, lost completely, as the temp file was corrupted.

If you organize a conference and want to use Camtasia, you will need to convince all the speakers to install it, and I totally understand the speakers who won't do it. It is always risky to install anything before a presentation, especially software you don't know. It can interfere with the things you want to show, and finally, there is never enough time to install and test it properly before the presentation – everyone wants to focus on the session, not spend time installing something.

AverMedia Frame Grabbers

Over the years, I have also tried several frame grabbers from AverMedia, namely AverMedia ExtremeCap 910 and AverMedia Game Capture HD II.

The first one can record VGA or HDMI to an SD card; the second one records HDMI to its internal hard drive. Be careful about the SD card you plug in – not all SD cards worked for me.

The advantage is that your device is not affected by the recording at all, and it is simple to use – just plug the frame grabber in between your laptop and the projector.

On the other hand, there are several problems with this approach:

  1. You don't know whether the recording succeeded until the presentation ends and you check what has been recorded. Sometimes you'll find an empty file, an hour-long video of an entirely black screen, or just a part of the session.
  2. When the speaker changes the screen resolution during the presentation, or switches from Duplicate mode to Extend, the recording probably won't survive it – it will end or produce a corrupted file.
  3. The quality of the audio recorded by the frame grabbers is terrible. You need a separate audio recording device, or a good external microphone connected to the grabber.
  4. You'll need to do post-processing, as the grabbers often don't produce a video in a format suitable for direct upload.

All in all, this approach failed me many times. I can hardly remember a conference where 100% of the recordings succeeded.

On the other hand, my colleagues from the Windows User Group have been using this method for years and have successfully recorded hundreds of sessions.


My Custom Device

A few years ago, one of my colleagues was doing live streams with OBS, and that inspired me to build my own device for recording and streaming.

Of course, I could install OBS directly on my laptop, but that wouldn't work at the conferences I organize (I'd have to convince the speakers to install and configure OBS on their machines).

Instead, I built my device from the following parts:

  • An Intel NUC that runs Windows 10 and OBS. I got the version with a Core i5 processor and put in a 256 GB SSD so I have enough space for the recordings.
  • An Elgato Game Capture HD60 that grabs the HDMI signal and behaves like a USB 3.0 webcam.
  • A Zoom H1n for recording the sound. It is a good-quality dictaphone that can record audio to a micro-SD card, or behave like a USB microphone (which is what I need).
  • A Logitech C922 Pro Stream USB webcam to record the speaker.
  • An Asus VT168H 15.6’’ LCD touch screen.

I have mounted the Intel NUC on the back of the LCD display. Since it is a touch screen, I don't need a mouse and keyboard connected.

I can then just plug the webcam, the grabber and the Zoom recorder into the USB ports (there are enough of them) and use OBS to record or stream.

I have verified that the Elgato grabber survives screen resolution changes and switching from Duplicate mode to Extend and vice versa. It even survives disconnecting and re-connecting any device, and thanks to the display, I can always check in real time what's being recorded or streamed.

The entire setup costs about $900, but it is the most reliable and flexible solution I have found so far.

Elgato grabber

Logitech webcam

Zoom recorder

Intel Nuc mounted on the LCD

Since the webcam can be placed a few meters away from the speaker's spot, I also recommend connecting it with an active USB 3.0 extension cable.

Here is a photo of the studio I have built in our offices. It is ready for a speaker to just sit down, plug the HDMI cable into their laptop, and start presenting.


The software

As I already mentioned, I am using OBS. It is an awesome open source project, and it proved to be very reliable.

You can define as many scenes as you want, and add any kinds of video or audio sources in every scene.

I have three scenes – just the screen, just the speaker, and a combined view (which I use most often). For live streams, I also have static scenes for the intro, intermission and outro.

OBS with Camera Only scene

OBS with Combo scene

You can switch between the scenes using the touch screen. One of my friends is building a simple device with several buttons to switch the scenes and start/stop recording – it will be more accurate than the touch screen.

For streaming, you just need to enter the stream key in the OBS settings. OBS supports many streaming services. I stream on Twitch and then export the videos to YouTube (I often edit them a little).

When I just record, the videos are stored on the internal SSD I put in the Intel NUC.

Post-processing

The videos recorded by OBS come in the FLV format. This can be changed, but the reason for FLV is its tolerance of dropped frames, which can happen, especially in streaming scenarios.

I use Adobe Premiere to edit my recordings, and it doesn't support FLV, but it is simple to convert FLV to MP4 using ffmpeg:

ffmpeg -i myvideo.flv -codec copy myvideo.mp4

The conversion is very quick and lossless – the stream itself is not affected, only the container is changed from FLV to MP4.

Conclusion

My friend Michal Altair Valasek and I have recently started coding live, and we have had great fun. This device helped us start streaming right away without spending much time on preparation, and I have also used it to record the sessions at our latest conference. We had some issues, and one of the sessions failed to record because of a human error, but next time we'll be more careful.

I believe that I have finally found a reliable and still affordable way to record or stream sessions from conferences I organize.