Last Friday, we organized a small conference called The Future of ASP.NET.

Michal Valasek and I had been toying with the idea for quite a long time. Since the day Microsoft announced that ASP.NET Web Forms would not be supported on ASP.NET Core, we have received thousands of questions from many people and companies. There are many ways to build a web application, and it is not easy to choose the right technology - especially when you are a developer with strong .NET skills but little knowledge of JavaScript.


We had more than 100 attendees at the conference and 5 sessions in total. We started with an introduction to ASP.NET Core, MVC Core and Razor Pages. Then we had sessions about Angular and React, and I gave the last session about DotVVM - an open source framework I started 3 years ago which simplifies building line-of-business web apps.

Most of the attendees still have some web applications written in ASP.NET Web Forms. Many of these applications are more than 10 years old, and it is almost impossible to rewrite them from scratch - the companies and businesses rely on them, and rewriting them would take years.

 

We got a lot of positive feedback about the conference, but I also saw a lot of sad faces. I talked with several attendees, and the sessions made them realize how difficult it would be to rewrite their applications and possibly switch to Angular or React.

Not only does the dev team need to learn a lot of new things - new languages and concepts (TypeScript, how modules work, REST APIs), libraries and tools (Node.js, npm, webpack), and how to deploy such applications. It often also means a change in the application's architecture (building a REST API which exposes the business logic) or a complete change of mindset (especially when switching to React, which embraces functional programming).

There are also a lot of stakeholders (customers, managers) who need to be convinced that the rewrite is worth the effort (and sometimes it is not). Rewriting an entire application with 10 years of history cannot be done in a significantly shorter timeframe. And finally, it is difficult to deliver new features while the team rewrites the solution.

Of course, there are also benefits: getting rid of technical debt, introducing a microservice architecture or CI/CD (which can lead to better quality and faster release cycles), the ability to make fundamental changes in the data model to reflect changes in the business processes, and so on. The company also becomes more attractive to developers thanks to modern approaches and technologies. But it is a real challenge, and there are a lot of risks to manage.

 

Modernizing Legacy Applications

That’s why I decided to make a demo of integrating DotVVM into an old Web Forms application. A lot of people found this combination very interesting - it might be the right way to slowly upgrade and modernize their old applications while keeping the old parts running and maintainable.

I took the source code of DotNetPortal, the largest Czech website about .NET development, which I created with my friend Tomas Jecha years ago. The app is written in ASP.NET Web Forms, uses Forms Authentication, hosts some WCF services, and so on.

I replaced Forms Authentication with the OWIN Security libraries, which was actually the most difficult part - I'll publish a blog post soon about how to make this happen. Then I just installed the DotVVM NuGet package in the project, added the DotvvmStartup class, and implemented a simple admin section. I created a master page in DotVVM, copied all the contents from the Web Forms one, and made only a few changes to get the same-looking master page in the DotVVM part of the app. Thanks to OWIN Security, I have single sign-on across both parts of the website - the old one and the new one - and because the CSS files and the master page structure are the same, the user won't notice that there are multiple frameworks involved. And I can easily integrate with the old business layer without exposing it as a REST API, which would require a lot of work, refactoring, and changes to the deployment process of the application.
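For illustration, wiring DotVVM into such a project is mostly a matter of the DotvvmStartup class registering routes for the new pages. A minimal sketch (the route and file names below are made up for this example, not the actual DotNetPortal ones):

```csharp
public class DotvvmStartup : IDotvvmStartup
{
    // called by DotVVM when the application starts
    public void Configure(DotvvmConfiguration config, string applicationPath)
    {
        // only routes registered here are handled by DotVVM;
        // requests for .aspx pages keep going to the Web Forms part of the app
        config.RouteTable.Add("AdminHome", "admin", "Admin/AdminHome.dothtml");
    }
}
```

Because DotVVM plugs into the OWIN pipeline next to Web Forms, both frameworks can serve pages from the same project side by side.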

In a real business application, this approach allows the team to build new parts of the application in DotVVM while keeping the rest untouched. New parts can be implemented in DotVVM, which is more comfortable than writing them in Web Forms. The legacy parts can be maintained or rewritten one by one. Some of them may become obsolete over time and can be removed completely. The team can also work on refactoring and decoupling the business logic, and eventually, all the old parts may be replaced and the application ported to .NET Core and possibly containerized.

Of course, even this process can take years and includes a lot of challenges too, but it can be a much safer way to adopt the new platform.

In the past few days, we have seen several samples of C# code compiled or interpreted using WebAssembly - Blazor, for example.

I really like the idea of running C# code in the browser, and I immediately got an idea of how to incorporate this mechanism into DotVVM. Running C# code on the client side could really change the way .NET web applications are developed today, and I am sure that a lot of new front-end frameworks will appear sooner or later.

 

Currently, there are two ways to pack C# code so it can be executed in the browser. The easier (and slower) way is to interpret the actual MSIL code, which is what DotNetAnywhere (used by Blazor) does, for example. The other way, which Mono is pursuing, is to actually compile the .NET assembly to WebAssembly. It will take some time until these technologies are mature enough, but the future seems pretty clear.

 

Imagine for a while that your new web app has some C# code that runs in the browser. How would you access your data? Through some API, of course. And that's where it gets uncomfortable.

No, I am not going to cry for ASP.NET Web Services, which had the greatest tooling I have ever seen. Yes, it was a matter of a few clicks and there was really nothing to break, but there was also a ton of disadvantages.

WCF was also great when it came to writing services and calling them. The number of features and possibilities was really impressive, but the configuration was hell on wheels.

 

REST

Today, REST is probably the most popular solution. If you go this way, you need to build a Web API, which is quite simple.

But then you need to configure Swagger and generate client-side code that can be used in the browser. This process is not very straightforward - there is no magic button for it in Visual Studio. You need to install some NuGet packages and configure them; then you try to generate the API client in Visual Studio, which works only sometimes, so you might need to use NSwag or another tool instead. If the Web API is not your own, there might not be Swagger metadata available, and you will need to find some hand-crafted library on GitHub (most probably outdated and unmaintained), or just write the API client yourself.

If the API is yours, things are simpler, but still - you may want to regenerate your API clients in the CI process, which is possible but not easy to set up.

Moreover, Swagger itself has a bunch of limitations. Generic types and polymorphism are difficult to express, if not impossible. API versioning works somehow, but it feels like an afterthought.

And finally, there is no standard way to do paging or sorting of data - you need to build everything yourself.

 

GraphQL

You can try GraphQL, as it is also very popular today. If you haven't heard about it yet, it works like this: you send a query - something like JSON without the data - listing the properties and child objects you need, and the server loads the data into that shape and sends it back to you. You can do filtering, paging and includes, and it is strongly typed, which is nice. There are several .NET libraries which claim to support this protocol.
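For illustration, a query looks something like this (the field names are invented for this example); the server responds with a JSON object of exactly the same shape, with the data filled in:

```graphql
{
  orders(first: 10) {
    id
    totalPrice
    customer {
      name
      email
    }
  }
}
```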

However, I have found it very difficult to implement this kind of API on the server side. The user can basically ask for any set of properties and child objects, and if you use a SQL database with Entity Framework to access the data, which is the most frequent case, you never know what the query will look like.

The user can ask for so many objects and generate so many Includes that you probably won't be able to do it in a single SQL query. If the database is large, you should not permit the user to make arbitrary Includes, as it may kill the performance of the app and is an easy vector for a DoS attack. And there are so many combinations of things the user can do that you will spend hours and hours deciding what to allow, what to restrict, whether to make a separate SQL query for some collections, and so on.

 

Other ways?

OData tries to solve a similar problem to GraphQL, and it looks easier to implement on the server side because it is more restrictive. But there are some issues too, and many people will tell you that it's dead. The main issue may be the lack of good clients for non-.NET platforms, and you may run into similar server-side issues as with GraphQL.

 

A lot of ideas…

One of the things I like about using DotVVM to build web apps is that I don't have to build and maintain the API myself. The framework handles the client-server communication for me, which is extremely helpful when the web app changes its UI and the structure of its data frequently. Almost every such change would require changing the API, and if that API is used by anyone else - a mobile application, for example - it creates additional overhead with versioning.

With DotVVM, I can just deploy a new version of the app with a different viewmodel and that's it. If there is a mobile app, it has its own API, so changes to the web UI don't require changes to the API. And provided the application is well structured, the API controller is a very tiny class that only calls methods from the business layer. The viewmodel calls similar or the same methods, so the business logic is shared by the mobile and web apps.
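A sketch of that structure, with made-up class names - the viewmodel and the API controller are both thin wrappers over the same business layer:

```csharp
public class OrderDto
{
    public int Id { get; set; }
    public decimal TotalPrice { get; set; }
}

// shared business layer - the only place that knows how to load the data
public class OrderService
{
    public List<OrderDto> GetRecentOrders(int count)
    {
        // query the database (Entity Framework, Dapper, ...) and return DTOs
        return new List<OrderDto>();
    }
}

// DotVVM viewmodel for the web UI - no API layer in between
public class OrdersViewModel : DotvvmViewModelBase
{
    private readonly OrderService orderService = new OrderService();

    public List<OrderDto> Orders { get; set; }

    public override Task PreRender()
    {
        Orders = orderService.GetRecentOrders(20);
        return base.PreRender();
    }
}

// Web API controller for the mobile app - a thin wrapper over the same service
public class OrdersController : ApiController
{
    private readonly OrderService orderService = new OrderService();

    [HttpGet]
    public List<OrderDto> GetRecent() => orderService.GetRecentOrders(20);
}
```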

 

If we decide to create a WebAssembly version of DotVVM, we should really focus on making the client-server communication simple. I don't want to build my own, better Swagger - that is a lot of work - but still, there must be an easier way.

I am really looking forward to the new possibilities WebAssembly will unleash. And I hope that new frameworks and tools will make things simpler, not more difficult.

We have started using Microsoft Teams in my company. Even though the product is not yet mature in some areas, I like it more than Slack, which we used before - mostly because of the Office integrations: files and OneNote sheets in tabs in each channel are just a great idea. But that's another story.

Recently I was at a hackathon and got an idea. Every day at 11 AM, we have the same discussion at work - it goes something like this:

“Hey guys, what about having a lunch?”

“OK, but who else is coming?”

“I don’t know, AB is not here yet, CD won’t come today…”

“OK, so ask AB if he is on the way.”

“And which restaurant are we going to?”

“I don’t know, what do they have today at EF’s?”

So I decided to build a chat bot that posts a message to a specific Teams channel every day at 11 AM. It grabs the menus of several nearby restaurants and posts them in the channel (sorry, the image is in Czech - it shows the lunch menus of two restaurants near our office):

Lunch bot

 

And because I hadn't had time to play with Azure Functions yet, I decided to build the bot using them. Well, that wasn't the only reason, actually. Since the bot doesn't need to run all the time and is basically a function that needs to be executed every day at a specific time, Azure Functions is the right technology for this task. And it should be very cheap because I can use the consumption-based plan.

 

Bot Builder SDK and Azure Functions

Unfortunately, there are not many samples of using the Bot Builder SDK with Azure Functions.

I installed Visual Studio 2017 Preview and the Azure Function Tools for Visual Studio 2017 extension, so I can create a classic C# project and publish it as an Azure Functions application.

 

To build a bot, you first need to register it and generate an application ID and password. You can enable connectors for various chat providers; I added the Microsoft Teams channel.

On the Settings page, do not specify the Messaging endpoint yet - just generate your App ID and password.

 

Then, create an Azure Functions project in Visual Studio.


 

You will need to install several NuGet packages:

  • Microsoft.Bot.Builder.Azure
  • Microsoft.Bot.Connector.Teams

 

Every Azure Function is a class with a static method called Run. First, we need a function which accepts messages from the Bot Framework endpoints.

[FunctionName("message")]
public static async Task<object> Run([HttpTrigger] HttpRequestMessage req, TraceWriter log)
{
    // Initialize the azure bot
    using (BotService.Initialize())
    {
        // Deserialize the incoming activity
        string jsonContent = await req.Content.ReadAsStringAsync();
        var activity = JsonConvert.DeserializeObject<Activity>(jsonContent);
        
        // authenticate incoming request and add activity.ServiceUrl to MicrosoftAppCredentials.TrustedHostNames
        // if request is authenticated
        if (!await BotService.Authenticator.TryAuthenticateAsync(req, new[] { activity }, CancellationToken.None))
        {
            return BotAuthenticator.GenerateUnauthorizedResponse(req);
        }
        
        if (activity != null)
        {
            // one of these will have an interface and process it
            switch (activity.GetActivityType())
            {
                case ActivityTypes.Message:
                    await Conversation.SendAsync(activity, () => new SystemCommandDialog()
                    {
                        Log = log
                    });
                    break;
                case ActivityTypes.ConversationUpdate:
                case ActivityTypes.ContactRelationUpdate:
                case ActivityTypes.Typing:
                case ActivityTypes.DeleteUserData:
                case ActivityTypes.Ping:
                default:
                    log.Error($"Unknown activity type ignored: {activity.GetActivityType()}");
                    break;
            }
        }
        return req.CreateResponse(HttpStatusCode.Accepted);
    }
}

The HttpTrigger attribute tells the Azure Functions runtime to bind the method to an HTTP endpoint (by default, /api/ followed by the function name). The Bot Framework will send an HTTP POST request to this endpoint when someone writes a message to the bot or mentions it, when the conversation is updated, and so on - you can see the activity types in the switch block.

To be able to test the bot locally, you need to add the application ID and password to the local.settings.json file. You can read the values using Environment.GetEnvironmentVariable("MicrosoftAppId"):

    {
      "IsEncrypted": false,
      "Values": {
        "MicrosoftAppId": "APP_ID_HERE",
        "MicrosoftAppPassword": "PASSWORD_HERE"
      }
    }

The SystemCommandDialog is a class which handles the entire conversation with the bot. I have created several configuration commands, so I can use /register and /unregister to enable the bot in a specific Teams channel. You can look at the Bot Builder samples to see what the communication looks like.

 

Creating Teams Conversation

I have another function in my project which downloads the lunch menus and posts a message every day at 11 AM.

[FunctionName("AskEveryDay")]
public static async Task Run([TimerTrigger("0 0 11 * * 1-5")]TimerInfo myTimer, TraceWriter log)
{
    try
    {
        string message = ComposeDailyMessage(...);
        await SendMessageToChannel(message, log);
        log.Info("Message sent!");
    }
    catch (Exception ex)
    {
        log.Error($"Error! {ex}", ex);
    }
}

private static async Task SendMessageToChannel(string message, TraceWriter log)
{
    var channelData = new TeamsChannelData
    {
        Channel = new ChannelInfo(CHANNEL_ID_HERE),
        Team = new TeamInfo(TEAM_ID_HERE),
        Tenant = new TenantInfo(TENANT_ID_HERE)
    };

    var newMessage = Activity.CreateMessageActivity();
    newMessage.Type = ActivityTypes.Message;
    newMessage.Text = message;
    var conversationParams = new ConversationParameters(
        isGroup: true,
        bot: null,
        members: null,
        activity: (Activity)newMessage,
        channelData: channelData);

    // create the connection; SERVICE_URL_HERE is the service URL captured
    // when the bot was registered in the channel (e.g. by the /register command)
    var connector = new ConnectorClient(new Uri(SERVICE_URL_HERE),
        Environment.GetEnvironmentVariable("MicrosoftAppId"), Environment.GetEnvironmentVariable("MicrosoftAppPassword"));
    MicrosoftAppCredentials.TrustServiceUrl(SERVICE_URL_HERE, DateTime.MaxValue);

    // create a new conversation
    var result = await connector.Conversations.CreateConversationAsync(conversationParams);
    log.Info($"Activity {result.ActivityId} ({result.Id}) started.");
}

The function is called every workday at 11 AM thanks to the TimerTrigger. The six-field CRON expression 0 0 11 * * 1-5 reads as: second 0, minute 0, hour 11, every day of the month, every month, Monday through Friday.

The conversation in Teams is created by the CreateConversationAsync method, which needs a TeamsChannelData object. To specify the channel which should contain the message, you need the channel ID, the team ID and the tenant ID.

There are several ways to get these values - for example, you can create a special command that displays them. When you mention the bot in a Teams channel, it can read these IDs from the incoming message and post them back or write them to a log file:

public virtual async Task MessageReceivedAsync(IDialogContext context, IAwaitable<IMessageActivity> argument)
{
    var message = await argument;
    
    var channelId = message.ChannelData.channel.id;
    var teamId = message.ChannelData.team.id;
    var tenantId = message.ChannelData.tenant.id;
    
    await context.PostAsync($"Channel ID: {channelId}\r\n\r\nTeam ID: {teamId}\r\n\r\nTenant ID: {tenantId}");
    context.Wait(MessageReceivedAsync);
}

 

Publishing the App

You can right-click the project in Visual Studio and choose Publish. The wizard will let you create a new Azure Functions application in your subscription.

 

After the application is published, navigate to the Azure portal. First, add Application Settings for MicrosoftAppId and MicrosoftAppPassword.


 

Then obtain the function URL and paste it into the Messaging endpoint field of the Bot App you registered at the beginning - it's on the Settings page.


 

Sideloading the Bot App in Teams

To use the bot in Microsoft Teams, you need to create a manifest and sideload it into your team. You only need to do this once.

Create the manifest, add the bot icons, and pack everything into a ZIP archive which you then upload in the Teams app. The manifest can look like this:

{
    "$schema": "https://statics.teams.microsoft.com/sdk/v1.0/manifest/MicrosoftTeams.schema.json", 
    "manifestVersion": "1.0",
    "version": "1.0.0",
    "id": "MICROSOFT_APP_ID",
    "packageName": "UNIQUE_PACKAGE_NAME",
    "developer": {
        "name": "SOMETHING",
        "websiteUrl": "SOMETHING",
        "privacyUrl": "SOMETHING",
        "termsOfUseUrl": "SOMETHING"
    },
    "name": {
        "short": "BOT_NAME"
    },
    "description": {
        "short": "BOT_DESCRIPTION",
        "full": "BOT_DESCRIPTION"
    },
    "icons": {
        "outline": "icon20x20.png",
        "color": "icon96x96.png"
    },
    "accentColor": "#60A18E",
    "bots": [
        {
            "botId": "MICROSOFT_APP_ID",
            "scopes": [
                "team"
            ]
        }
    ],
    "permissions": [
        "identity",
        "messageTeamMembers"
    ]
}

And that’s it.

 

Debugging

There are some challenges I ran into.

You can run your Functions app locally and use the Bot Framework Emulator for debugging. You can test general functionality, but you won't be able to test Teams-specific features (the channel ID and things like that).

You can also debug Azure Functions in production: find your Functions app in the Server Explorer window in Visual Studio and click Attach Debugger. It is not very fast, but it lets you make sure everything works in production.

And finally, there is the Azure portal with the log streaming service, which is especially useful.

The only thing to be careful about: sometimes the Functions app doesn't restart after you publish a new version. I had to stop and start the app in the Azure portal to be 100% sure it was restarted.

I have been blogging since 2007 on a Czech website called DotNetPortal. I have written a lot of articles about .NET and web development, as well as many opinion pieces on various tech topics.

Recently, I started speaking at conferences abroad and met many people who began following me on Twitter. That's why I started tweeting in English, and now I'd like to write something occasionally too, so I have deployed Mads Kristensen's MiniBlog (a very nice project, by the way).

 

Who am I?

I am a 29-year-old guy from the Czech Republic. Six years ago, I started RIGANTI, a small software company which builds custom line-of-business apps for customers from the Czech Republic, Great Britain and the US. I also publish articles, speak at user groups and local conferences, teach programming, and do in-house courses at various companies in the Czech Republic.

I got the MVP award 8 years ago and recently I became Microsoft Regional Director.

Three years ago, I started working on my biggest project - an open source MVVM framework called DotVVM. Soon enough, the project became too big for one person to maintain, so I involved other people from my company. Now, DotVVM is used by dozens of companies in the Czech Republic and abroad.

If I had free time, I would play the piano or golf, or finish my model train landscape. But that's not going to happen anytime soon, because I am super busy with so many interesting things at work.

 

The Goal

I will write about various .NET stuff I run into.

You can also follow me on Twitter. I don't produce many tweets of my own, but I follow a lot of accounts and retweet interesting stuff from the .NET and Microsoft world.