In the previous blog post, I wrote about the problems of legacy ASP.NET Web Forms applications. It is often not possible to rewrite them from scratch, because new features need to be introduced every month, so I’d like to dig into one of the ways of modernizing these applications.

Parts of the series

In this part, I will write about how to replace Forms Authentication with OWIN Security middlewares. I needed to do this in one of my older Web Forms apps recently, because it was using the Microsoft.WebPages.OAuth library to authenticate users with Facebook, Twitter and Google. This library was part of ASP.NET Web Pages and is very old, so it doesn’t support the current versions of the OAuth protocol.

OWIN, a new infrastructure for ASP.NET applications on the .NET Framework and a predecessor of ASP.NET Core, supports various authentication middlewares. Besides the Cookie Security Middleware, which can replace Forms Authentication, there are also middlewares for all commonly used social networks and for other authentication scenarios.


Forms Authentication

I have created a sample ASP.NET Web Forms application (it’s in the ModernizingWebFormsSample.Auth folder) with a few pages and Forms Authentication.

It uses the built-in membership provider to store information about users. The database should be created automatically and it should reside in the App_Data folder.

In Global.asax, there is code that creates a default user account (e-mail: [email protected], password: Pa$$w0rd) when the application is launched.
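The exact code is in the sample project; a minimal sketch of such startup logic, assuming the default membership provider, could look like this:

protected void Application_Start(object sender, EventArgs e)
{
    // create the default account on the first launch if it doesn't exist yet
    if (Membership.GetUser("[email protected]") == null)
    {
        Membership.CreateUser("[email protected]", "Pa$$w0rd", "[email protected]");
    }
}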

The Login.aspx page contains a login form built with the <asp:Login> control. This control uses the membership provider to validate the user’s credentials (by calling Membership.ValidateUser) and then calls FormsAuthentication.SetAuthCookie to issue a cookie with an authentication token.

If you don’t want to use the <asp:Login> control, you can always build the form yourself and issue the authentication cookie manually by calling the methods mentioned above.
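A minimal sketch of such a hand-rolled login handler (assuming text boxes named UserName and Password, a RememberMe checkbox and a FailureText literal, as in the sample shown later) could look like this:

protected void LoginButton_OnClick(object sender, EventArgs e)
{
    if (Membership.ValidateUser(UserName.Text, Password.Text))
    {
        // issue the Forms Authentication cookie for the validated user
        FormsAuthentication.SetAuthCookie(UserName.Text, RememberMe.Checked);
        Response.Redirect("~/");
    }
    else
    {
        FailureText.Text = "The credentials are not valid!";
    }
}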


The concepts

If you are not familiar with how membership providers and identity in ASP.NET work, here is a brief explanation.

Almost every web application uses some concept of user accounts, often with different permissions and access levels.

The information about the users needs to be stored somewhere. Sometimes the users are stored in a SQL database, sometimes they are retrieved from Active Directory, and there are many other options.

A membership provider is an abstraction that gives the application a unified API no matter how and where exactly the user info is stored. The membership provider exposes methods for creating, editing and deleting users, validating, changing and resetting their passwords, and so on.

ASP.NET comes with a built-in membership provider, which stores user information in a SQL database located in the App_Data folder (this can be changed in web.config).

You can implement your own membership provider to be able to store users in your own SQL database, or any other place. You just need to implement several methods like CreateUser, ValidateUser etc.

All ASP.NET controls that work with users (<asp:Login>, <asp:ChangePassword> etc.) work thanks to the concept of membership providers. If you haven’t heard about membership providers before, you are probably using the default one.

You should consider replacing membership providers with ASP.NET Identity, which provides many more options and features, but that’s another story – I will address it in one of the next parts of this series.


The authentication in ASP.NET can work in multiple modes. Most Web Forms applications are using Forms Authentication, which uses an authentication cookie. You can set the authentication mode in web.config, using the system.web/authentication element.
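For example, Forms Authentication is typically configured like this (the login URL and timeout are just illustrative values):

<system.web>
  <authentication mode="Forms">
    <forms loginUrl="~/Login.aspx" timeout="30" />
  </authentication>
</system.web>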


The authentication and membership providers are decoupled, so you should be able to use any authentication mode with any membership provider (or another solution, like ASP.NET Identity, which I have already mentioned).


OWIN

ASP.NET was always tightly integrated with IIS, and it was quite difficult to run .NET web apps outside IIS. The reason for OWIN was to provide a modular and extensible HTTP server pipeline that can run without IIS dependencies. This allows e.g. self-hosting ASP.NET Web API or SignalR applications in a Windows Service and other interesting scenarios.

OWIN works with a concept of middlewares. You can register any number of middlewares in the pipeline. Basically, middlewares are just functions with two parameters – request context and next middleware. The middleware can look at the request context and process the HTTP request, or pass the request to another middleware (and optionally do something with the response produced by the next middleware).
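A middleware registered in the OWIN Startup class can be as simple as this (an illustrative sketch; the /ping path is made up):

app.Use(async (context, next) =>
{
    // look at the request context and either handle the request...
    if (context.Request.Path.StartsWithSegments(new PathString("/ping")))
    {
        await context.Response.WriteAsync("pong");
    }
    else
    {
        // ...or pass it to the next middleware in the pipeline
        await next();
    }
});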

This mechanism is very flexible and allows combining multiple frameworks in one application. Authentication also benefits from this concept and can be totally independent of the application itself.

For example, you can register Web API in the OWIN pipeline. When the request URL matches some API controller, Web API will process the request and produce the response. If Web API doesn’t recognize the URL, it will pass the request to the next middleware in the pipeline, which can be e.g. the static files middleware. It will look in the filesystem and return the appropriate file, or again, pass control to the next middleware in the pipeline. If no middleware wants to process the request, an HTTP error will be produced by OWIN.
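Such a pipeline might be composed like this (a sketch assuming the Microsoft.AspNet.WebApi.Owin and Microsoft.Owin.StaticFiles packages are installed):

public void Configuration(IAppBuilder app)
{
    // Web API handles requests that match an API route...
    var config = new HttpConfiguration();
    config.MapHttpAttributeRoutes();
    app.UseWebApi(config);

    // ...everything else falls through to the static files middleware
    app.UseStaticFiles();
}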


OWIN Security is a set of middlewares which handle the authentication. These middlewares are typically registered at the beginning of the pipeline.

When an HTTP request is received, the authentication middleware will make sure the request is authenticated (by looking at cookies, the Authorization header or something like that), set the user identity in the request context and pass the request to the next middleware.

The application will be able to read the information about the user and doesn’t need to know whether the user was logged in through an authentication cookie or through Windows Authentication.

If the application notices that the user doesn’t have permissions to perform some operation, it may throw an exception, or set the HTTP status code to 401. The authentication middleware will notice that and can take an appropriate action, for example redirect the user to the login page.

OWIN can be integrated with ASP.NET quite easily thanks to the Microsoft.Owin.Host.SystemWeb package. If you install it in the application, the OWIN pipeline will be added before the ASP.NET stack, so you can use authentication middlewares in combination with Web Forms pages (or MVC controllers). And that’s exactly what I needed in my app recently to be able to use OWIN Security libraries, like Cookies, Facebook, Twitter and so on.


In the demo app, I am using the Microsoft.Owin.Security.Cookies package as a replacement for Forms Authentication, because it works in a very similar way – it generates and stores the authentication ticket in a cookie.


1. Install Microsoft.Owin.Host.SystemWeb package in the application.

2. Then, install Microsoft.Owin.Security.Cookies package.

3. Add a new OWIN Startup class to the project. This class is used to configure the OWIN pipeline.

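Visual Studio generates a class similar to this one (the namespace is just illustrative):

using Microsoft.Owin;
using Owin;

[assembly: OwinStartup(typeof(ModernizingWebFormsSample.Startup))]

namespace ModernizingWebFormsSample
{
    public class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            // the middlewares will be registered here
        }
    }
}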


4. Add the following code snippet to the Configuration method in the Startup.cs file. It will register the cookie authentication middleware in the OWIN pipeline.

If you need to adjust the default values like the expiration timeout or cookie domain, you can do it right here in the CookieAuthenticationOptions initializer.

app.UseCookieAuthentication(new CookieAuthenticationOptions()
{
    LoginPath = new PathString("/Login.aspx")
});


5. To issue an authentication cookie, call Context.GetOwinContext().Authentication.SignIn instead of FormsAuthentication.SetAuthCookie.

You will need to build a ClaimsIdentity which represents the current user. This identity contains a collection of claims – e.g. username, user ID, list of roles, e-mail and other information about the user. You can add any custom claims to the identity – they will be encrypted and stored in the authentication cookie.

If you are already using ASP.NET Identity, there is a method called CreateIdentity, which will build the ClaimsIdentity for you.

protected void LoginButton_OnClick(object sender, EventArgs e)
{
    if (Membership.ValidateUser(UserName.Text, Password.Text))
    {
        // get user info
        var user = Membership.GetUser(UserName.Text);

        // build a list of claims
        var claims = new List<Claim>();
        claims.Add(new Claim(ClaimTypes.Name, user.UserName));
        claims.Add(new Claim(ClaimTypes.NameIdentifier, user.ProviderUserKey.ToString()));
        if (Roles.Enabled)
        {
            foreach (var role in Roles.GetRolesForUser(user.UserName))
            {
                claims.Add(new Claim(ClaimTypes.Role, role));
            }
        }

        // create the identity
        var identity = new ClaimsIdentity(claims, CookieAuthenticationDefaults.AuthenticationType);

        Context.GetOwinContext().Authentication.SignIn(new AuthenticationProperties()
        {
            IsPersistent = RememberMe.Checked
        }, 
        identity);

        Response.Redirect("~/");
    }
    else
    {
        FailureText.Text = "The credentials are not valid!";
    }
}


To sign out, you need to call the SignOut method:

protected void LogoutButton_OnClick(object sender, EventArgs e)
{
    Context.GetOwinContext().Authentication.SignOut();
    Response.Redirect("~/");
}


6. If you are using the ASP.NET login controls, you may need to replace them with your own mechanisms, as the controls are hard-coded to use Forms Authentication. Unfortunately, there are no extensibility points to replace the static calls to FormsAuthentication.SetAuthCookie.

This applies mainly to the <asp:Login> control and the other login-related controls that issue the authentication cookie themselves.


If you are using these controls without specifying your own templates for them, you can use a quick action in Visual Studio to generate their template code.


You can then get rid of the <asp:Login> control and keep only the markup from the template. Just add an OnClick handler to the button and use the code from step 5 to perform the authentication.


7. The last thing you need to do is to disable Forms Authentication in web.config. Change the authentication mode to None. Don’t worry – the OWIN authentication is still in place.

<authentication mode="None" />


Note that the URL authorization mechanism (the system.web/authorization element) will still work. OWIN still sets the ClaimsIdentity in the HttpContext.Current.User property, so the URL authorization module will be able to see whether the user is authenticated and which roles they are in.
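For example, a rule like this one (the Admin path is just illustrative) keeps working with the cookie middleware in place:

<location path="Admin">
  <system.web>
    <authorization>
      <allow roles="Administrators" />
      <deny users="*" />
    </authorization>
  </system.web>
</location>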


You can see the working demo on GitHub in the ModernizingWebFormsSample.Auth.New project.

A long time ago, I learned that underestimating the architecture of a software project is much worse than over-engineering it. As the project evolves, there are always new requirements which completely break some of the initial assumptions. Any robustness that helps with absorbing larger changes can save a lot of time and sleepless nights.

Even the tiniest one-purpose script that generates some report can grow and grow until it suddenly becomes a large system that runs a department of a global corporation. I have seen that so many times, and being lazy at the beginning of a project often means big issues in the future.

But you know, we have a deadline…


Years ago, I tried to design a shared infrastructure for the projects we were doing in my company. Our typical project was a web-based line-of-business application with lots of CRUDs, but there were always a lot of complex custom workflows and integrations.

When you look at almost any demo of an ASP.NET application with Entity Framework, you can see that the DbContext is accessed right in the MVC controllers, and the EF entities are used even in the views (where they can trigger lazy loading and all sorts of nasty things).

This approach works great for a simple demo, and it is actually great for beginners as it doesn’t get too complicated. But it is a nightmare in any large-scale application, as we saw in many apps written by somebody else that we had to maintain later. It leads to performance issues (lazy loading everywhere), repetitive code (copy-pasting, find & replace programming) and inconsistencies caused by the fact that you can access any table from any page in the application.


We have always strictly separated a “data access layer” containing the Entity Framework stuff, a “business layer” with a domain model mapped from the EF entities, and a “presentation layer” on top.

The business layer was always the most fascinating one for me – there are always a lot of things inside, and everyone is doing it quite differently.


Repositories and Query Objects

For some people, repositories are an anti-pattern. For others, Entity Framework DbSets are the repositories and there is no need to implement another useless layer on top of them.

However, I have found this layer very useful, so we are used to implementing a repository over a DbSet (or any other database table or collection), for a number of reasons. This extra layer gives us great flexibility when we need to implement features like soft delete or entity versioning, and it helps greatly in integration tests as it allows nice mocking.

Unlike a lot of people, we don’t have an all-purpose GetAll() method that returns IQueryable in the repositories. Our repositories can only Insert, Update, Delete and GetById. There are no methods that return multiple records (other than returning a few entities by ID).
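The contract can look roughly like this (an illustrative sketch, not the exact interface from our library):

public interface IRepository<TEntity, TKey>
{
    TEntity GetById(TKey id);
    void Insert(TEntity entity);
    void Update(TEntity entity);
    void Delete(TEntity entity);
}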

To query for multiple records, we use Query Objects. They are great for encapsulating complex queries and ensuring that the queries are not repeated across the entire application.

If you allow access to IQueryable, you will soon have expressions like productsRepo.GetAll().Where(p => p.CategoryId == category) scattered all over the project. If you later want to add soft delete checks, it is very difficult to find all usages of the table.

Thanks to extracting this logic into a query object, we can have a class named ProductsByCategoryQuery in the project, which can be used everywhere in the application. If we run into performance problems with Entity Framework (which happens with complex queries), we can re-implement the query object to use raw SQL while keeping the rest of the application unchanged, and so on.

Also, the query objects give us a unified way to do sorting and paging, which helps with building generic CRUD mechanisms.
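A query object for the example above can look roughly like this (an illustrative sketch with hypothetical entity and DTO names):

public class ProductsByCategoryQuery
{
    private readonly Func<ProductsDbContext> contextFactory;

    public int CategoryId { get; set; }

    public ProductsByCategoryQuery(Func<ProductsDbContext> contextFactory)
    {
        this.contextFactory = contextFactory;
    }

    public List<ProductListDTO> Execute(int skip = 0, int take = 20)
    {
        using (var context = contextFactory())
        {
            return context.Products
                .Where(p => p.CategoryId == CategoryId && !p.IsDeleted)   // the soft delete check lives in one place
                .OrderBy(p => p.Name)                                     // unified sorting...
                .Skip(skip).Take(take)                                    // ...and paging
                .Select(p => new ProductListDTO { Id = p.Id, Name = p.Name })
                .ToList();
        }
    }
}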


Facades and Services

The business logic of the application is implemented in facades and service classes. I use the term “facade” for classes that are used directly from the presentation layer – facades expose all functions the UI needs.
All other classes with some business logic are referred to as services; they can be grouped into quite complex hierarchies, depend on each other and so on.

These facades and services use the repositories and query objects to implement everything the business layer does.
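A facade built on top of the repository and query object sketched above could then look like this (again, the names are illustrative):

public class ProductFacade
{
    private readonly IRepository<Product, int> productRepository;
    private readonly ProductsByCategoryQuery productsByCategoryQuery;

    public ProductFacade(IRepository<Product, int> productRepository, ProductsByCategoryQuery productsByCategoryQuery)
    {
        this.productRepository = productRepository;
        this.productsByCategoryQuery = productsByCategoryQuery;
    }

    public List<ProductListDTO> GetProductsInCategory(int categoryId)
    {
        productsByCategoryQuery.CategoryId = categoryId;
        return productsByCategoryQuery.Execute();
    }

    public void DeleteProduct(int productId)
    {
        var product = productRepository.GetById(productId);
        productRepository.Delete(product);
    }
}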


Model

In our case, the domain model of the application is typically a bunch of DTOs (data transfer objects). Contrary to what most books say, I like the anemic model approach much more. I find it very impractical to inject dependencies into domain objects.

Because most of the domain model objects are very similar to the EF entities, we have adopted AutoMapper. It is a really great tool and I like its ProjectTo() method, which applies the mappings directly to an IQueryable so they are translated into the SQL query.

There are some caveats, however. You should avoid nesting DTOs to prevent the queries from becoming overly complicated. We have also found out that it is a very good idea to have multiple DTOs for every entity: list DTOs used in grids and lists, detail DTOs for the edit forms, basic DTOs for filling comboboxes, and more. Thanks to AutoMapper, most of our mappings are just CreateMap<TEntity, TDTO>() with no additional configuration.
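A minimal sketch of the mapping configuration and the ProjectTo() usage (assuming the static AutoMapper API and the hypothetical Product entity and DTOs from the previous sketches):

Mapper.Initialize(cfg =>
{
    cfg.CreateMap<Product, ProductListDTO>();
    cfg.CreateMap<Product, ProductDetailDTO>();
});

// ProjectTo translates the mapping into the SELECT clause of the generated SQL query
var products = context.Products
    .Where(p => !p.IsDeleted)
    .ProjectTo<ProductListDTO>()
    .ToList();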


…and it’s Open Source

I have shown how a business layer can be designed in many of my conference and user group talks. Our Riganti Utils Infrastructure library is developed on GitHub and anyone is welcome to use it or take inspiration from it.

The current version of the infrastructure is 2, and we are in the process of designing its third version.


In the next part of this article, I will try to address the issues with v2 and show how to make the infrastructure easier to use.

Recently, I have been working on OpenEvents – an open source event registration system. 

Because I am going to give several talks about microservice architecture in the upcoming months, I have chosen a different approach to building this app so it can work as a demo.

I have separated the event management part and the registration part into two services and used Azure Service Bus for messaging between them.


Which package?

There are two NuGet packages you can use: the older WindowsAzure.ServiceBus package, which targets the full .NET Framework, and the newer Microsoft.Azure.ServiceBus package, which targets .NET Standard.


Both packages contain the queue or topic client classes and all the APIs you need to send, receive and work with messages.

The old package also contained the NamespaceManager class, which could be used to manage topics, subscriptions and queues inside the Service Bus namespace.

It was very useful because the application could provision these resources at startup, so you didn’t have to prepare all the queues and topics manually. Especially if you have multiple environments (dev, test, staging, production), it is easier to let the app create what it needs than to maintain all four environments by hand.


Resource Manager

The new package doesn’t include any API to manage these resources. The only way to create topics, subscriptions and queues programmatically is to use Azure Resource Manager.

Azure Resource Manager exposes a REST API, and there is a NuGet package with an API client for almost every Azure service. The package names always start with Microsoft.Azure.Management, so we need Microsoft.Azure.Management.ServiceBus.

Because consuming the raw REST API is not so convenient, some Azure services also offer a package with the Fluent suffix. These packages contain wrappers that make the API more convenient to use.

In my app, I have used the Microsoft.Azure.Management.ServiceBus.Fluent NuGet package.


First, I need to authenticate and create the ServiceBusManager object so we can work with the resources inside the Service Bus namespace.

var credentials = SdkContext.AzureCredentialsFactory.FromServicePrincipal(CLIENT_ID, CLIENT_SECRET, TENANT_ID, AzureEnvironment.AzureGlobalCloud);
var serviceBusManager = ServiceBusManager.Authenticate(credentials, SUBSCRIPTION_ID);
serviceBusNamespace = serviceBusManager.Namespaces.GetByResourceGroup(RESOURCE_GROUP, RESOURCE_NAME);


Creating the topic is then quite easy:

public async Task<ITopic> EnsureTopicExists(string topicName)
{
    var topics = await serviceBusNamespace.Topics.ListAsync();
    var topic = topics.FirstOrDefault(t => t.Name == topicName);

    if (topic == null)
    {
        topic = await serviceBusNamespace.Topics.Define(topicName)
            .WithSizeInMB(1)
            ...      // other configuration
            .CreateAsync();
    }

    return topic;
}


You can use the same approach for queues, subscriptions and other resources.
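For example, a queue could be provisioned in the same way (a sketch that assumes the Queues API of the Fluent package mirrors the Topics API used above):

public async Task<IQueue> EnsureQueueExists(string queueName)
{
    var queues = await serviceBusNamespace.Queues.ListAsync();
    var queue = queues.FirstOrDefault(q => q.Name == queueName);

    if (queue == null)
    {
        queue = await serviceBusNamespace.Queues.Define(queueName)
            .WithSizeInMB(1)
            .CreateAsync();
    }

    return queue;
}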


Authentication

The last thing you need to handle is the authentication for the Resource Manager API. To do that, I have created a service principal and given it permissions to manage the Service Bus namespace.

1. Install the Azure PowerShell.


2. Login to the Azure account:

Login-AzureRmAccount


3. If you have multiple subscriptions registered in your account, select the correct one. I am using the name of the subscription to identify it:

Select-AzureRmSubscription -SubscriptionName "SUBSCRIPTION_NAME"
Get-AzureRmSubscription -SubscriptionName "SUBSCRIPTION_NAME"


4. Note the TenantId and Id of the subscription from the output of the previous step – you will need them later.


5. Now, create the service principal. Make sure the password is strong enough and the name describes the purpose of the principal. The password of the principal will be the ClientSecret – do not lose it, as you will need it later.

$p = ConvertTo-SecureString -asplaintext -string "PRINCIPAL_PASSWORD" -force
$sp = New-AzureRmADServicePrincipal -DisplayName "PRINCIPAL_NAME" -Password $p


6. Retrieve the ApplicationId of the principal. It is often called ClientId – you will need it later.

$sp.ApplicationId


7. Assign the Contributor role to the service principal for the Service Bus resource. Use the name of the resource group and exact name of the Service Bus.

New-AzureRmRoleAssignment -ServicePrincipalName $sp.ApplicationId -ResourceGroupName RESOURCE_GROUP -ResourceName RESOURCE_NAME -RoleDefinitionName Contributor -ResourceType Microsoft.ServiceBus/namespaces


Now we should have all the information we need to make our code work.


Configuration

Keeping this information in the code is not a good idea. I am using Microsoft.Extensions.Configuration to store the Service Bus configuration and secrets. I have created a class that represents my configuration:

public class ServiceBusConfiguration
{

    public string ConnectionString { get; set; }


    public string ResourceGroup { get; set; }

    public string NamespaceName { get; set; }


    public string ClientId { get; set; }

    public string ClientSecret { get; set; }

    public string SubscriptionId { get; set; }

    public string TenantId { get; set; }
}


I have added the following section to the appsettings.json file in my application:

{
  ...
  "serviceBus": {
    "resourceGroup": "RESOURCE_GROUP",
    "namespaceName": "RESOURCE_NAME",
    "connectionString": "",
    "clientId": "",
    "clientSecret": "",
    "subscriptionId": "",
    "tenantId": ""
  },
  ... 
}


To get the configuration object for Service Bus, I can use the following code:

var config = Configuration.GetSection("serviceBus").Get<ServiceBusConfiguration>();
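These values can then feed the authentication code shown earlier (the same calls as above, just reading from the configuration object):

var credentials = SdkContext.AzureCredentialsFactory.FromServicePrincipal(
    config.ClientId, config.ClientSecret, config.TenantId, AzureEnvironment.AzureGlobalCloud);
var serviceBusManager = ServiceBusManager.Authenticate(credentials, config.SubscriptionId);
var serviceBusNamespace = serviceBusManager.Namespaces.GetByResourceGroup(config.ResourceGroup, config.NamespaceName);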


Working with User Secrets

Since I don’t want to keep the secrets in the source control, I am using User Secrets.

I have added the following section at the top of my .csproj file…

<PropertyGroup>
  <UserSecretsId>openevents</UserSecretsId>
</PropertyGroup>


…and the following section at the bottom:

<ItemGroup>
  <DotNetCliToolReference Include="Microsoft.Extensions.SecretManager.Tools" Version="2.0.0" />
</ItemGroup>


Now I can use the command line to store the secret configuration values outside of my project folder.

dotnet user-secrets set serviceBus:connectionString "SERVICE_BUS_CONNECTION_STRING"
dotnet user-secrets set serviceBus:subscriptionId SUBSCRIPTION_ID
dotnet user-secrets set serviceBus:tenantId TENANT_ID
dotnet user-secrets set serviceBus:clientId CLIENT_ID
dotnet user-secrets set serviceBus:clientSecret CLIENT_SECRET


Remember that CLIENT_ID is the ApplicationId value you have printed out in step 6. The CLIENT_SECRET is the password of the service principal.


The last thing you need to do is to add user secrets to the configuration builder so that they override the values from the appsettings.json file.

var builder = new ConfigurationBuilder()
    .SetBasePath(hostingEnvironment.ContentRootPath)
    .AddJsonFile("appsettings.json")
    .AddJsonFile($"appsettings.{hostingEnvironment.EnvironmentName}.json", optional: true)
    .AddEnvironmentVariables();

if (hostingEnvironment.IsDevelopment())
{
    builder.AddUserSecrets<Startup>();
}

Configuration = builder.Build();


By default, they are stored in the following location: c:\Users\USER_NAME\AppData\Roaming\Microsoft\UserSecrets

Remember that user secrets are designed for development purposes; they should not be used in a production environment.

In my last live coding session, I was working with a web application built with DotVVM that doesn’t connect to a database directly. It consumes a REST API instead, which is a quite common approach in microservice applications.

Application architecture

 

 

The validation logic should definitely be implemented in the backend service. The REST API can be invoked from other microservices and we certainly don’t want to duplicate the logic in multiple places. In addition, the validation rules might require data that is not available in the admin portal (the uniqueness of the user’s e-mail address, for example).

Basically, the backend service needs to report the validation errors to the admin portal, and the admin portal needs to report them in the browser so the corresponding text fields can be highlighted.

 

The solution is quite easy, as both DotVVM and ASP.NET Core MVC use the same model validation mechanisms.

 

Implementing the Validation Logic

In my case, the API controller accepts and returns the following object (some properties were stripped out). I have used the standard validation attributes (Required) for the simple validation rules and implemented the IValidatableObject interface to provide enhanced validation logic for dependencies between individual fields.

public class EventDTO : IValidatableObject
{
    public string Id { get; set; }

    [Required]
    public string Title { get; set; }
    public string Description { get; set; }
    public List<EventDateDTO> Dates { get; set; } = new List<EventDateDTO>();
    ... 
    public DateTime RegistrationBeginDate { get; set; }
    public DateTime RegistrationEndDate { get; set; }

    [Range(1, int.MaxValue)]
    public int MaxAttendeeCount { get; set; }

    public IEnumerable<ValidationResult> Validate(ValidationContext validationContext)
    {
        if (!Dates.Any())
        {
            yield return new ValidationResult("The event must specify at least one date!");
        }

        if (RegistrationBeginDate >= RegistrationEndDate)
        {
            yield return new ValidationResult("The date must be greater than registration begin date!", new [] { nameof(RegistrationEndDate) });
        }

        for (var i = 0; i < Dates.Count; i++)
        {
            if (Dates[i].BeginDate < RegistrationEndDate)
            {
                yield return new ValidationResult("The date must be greater than registration end date!", new[] { nameof(Dates) + "[" + i + "]." + nameof(EventDateDTO.BeginDate) });
            }
            if (Dates[i].BeginDate >= Dates[i].EndDate)
            {
                yield return new ValidationResult("The date must be greater than begin date!", new[] { nameof(Dates) + "[" + i + "]." + nameof(EventDateDTO.EndDate) });
            }
        }
        ...
    }
}

Notice that when I validate the child objects, the property path is composed like this: Dates[0].BeginDate.

 

Reporting the validation errors on the API side

When the client posts an invalid object, I want the API to respond with HTTP 400 Bad Request and provide the list of errors. In ASP.NET Core MVC, there is the ModelState object, which can be returned from the API. It is serialized as a map of property paths to lists of errors for each property.

I have implemented a simple action filter which verifies that the model is valid. If not, it produces the HTTP 400 response.

public class ModelValidationFilterAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext context)
    {
        if (!context.ModelState.IsValid)
        {
            context.Result = new BadRequestObjectResult(context.ModelState);
        }
        else
        {
            base.OnActionExecuting(context);
        }
    }
}

You can apply this attribute to a specific action, or register it as a global filter (as shown below). If something is wrong, I get a response like this from the API:

{
    "Title": [
        "The event title is required!"
    ],
    "RegistrationEndDate": [
        "The date must be greater than registration begin date!"
    ],
    "Dates[0].BeginDate": [
        "The date must be greater than registration end date!"
    ]
}
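Registering the filter globally might look like this (a sketch for ASP.NET Core MVC, placed in the Startup.ConfigureServices method):

services.AddMvc(options =>
{
    options.Filters.Add(new ModelValidationFilterAttribute());
});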

 

Consuming the API from the DotVVM app

I am using the Swashbuckle.AspNetCore library to expose the Swagger metadata of my REST API, and NSwag to generate the client proxy classes for my DotVVM application.

This tooling can save quite a lot of time and is very helpful in larger projects. If you make any change in your API and regenerate the Swagger client, you will get compile errors in all places that need to be adjusted, which is great.

On the other hand, I am still missing a lot of features in the Swagger pipeline, for example support for generic types. Also, even though NSwag provides a lot of extensibility points, sometimes I had to use nasty hacks or regex matching to make it generate the things I really needed.

NSwag generates the ApiClient class with async methods corresponding to the individual controller actions. I can call these methods from my DotVVM viewmodels.

public async Task Save()
{
    await client.ApiEventsPostAsync(Item);
    Context.RedirectToRoute("EventList");
}

Now, whenever I get HTTP 400 from the API, I need to read the list of validation errors, fill the DotVVM ModelState with them and report them to the client.

 

Reporting the errors from DotVVM

I have created a class called ValidationHelper with a Call method. It accepts a delegate that is invoked; if the delegate throws a SwaggerException with the status code 400, we handle the exception. Actually, there are two overloads - one for methods that return nothing, one for methods returning some result.

public static class ValidationHelper
{

    public static async Task Call(IDotvvmRequestContext context, Func<Task> apiCall)
    {
        try
        {
            await apiCall();
        }
        catch (SwaggerException ex) when (ex.StatusCode == "400")
        {
            HandleValidation(context, ex);
        }
    }

    public static async Task<T> Call<T>(IDotvvmRequestContext context, Func<Task<T>> apiCall)
    {
        try
        {
            return await apiCall();
        }
        catch (SwaggerException ex) when (ex.StatusCode == "400")
        {
            HandleValidation(context, ex);
            return default(T);
        }
    }

    ...
}

The most interesting part is, of course, the HandleValidation method. It parses the response and puts the errors into the DotVVM model state:

private static void HandleValidation(IDotvvmRequestContext context, SwaggerException ex)
{
    var invalidProperties = JsonConvert.DeserializeObject<Dictionary<string, string[]>>(ex.Response);

    foreach (var property in invalidProperties)
    {
        foreach (var error in property.Value)
        {
            context.ModelState.Errors.Add(new ViewModelValidationError() { PropertyPath = ConvertPropertyName(property.Key), ErrorMessage = error });
        }
    }

    context.FailOnInvalidModelState();
}

And finally, there is one little caveat - DotVVM has a different format of property paths than ASP.NET MVC uses, because the viewmodel on the client side is wrapped in Knockout observables. Instead of using Dates[0].BeginDate, we need to use Dates()[0]().BeginDate, because all getters in Knockout are just function calls.

This hack won’t be necessary in the future versions of DotVVM as we plan to implement a mechanism that will detect and fix this automatically.

private static string ConvertPropertyName(string property)
{
    var sb = new StringBuilder();
    for (int i = 0; i < property.Length; i++)
    {
        if (property[i] == '.')
        {
            sb.Append("().");
        }
        else if (property[i] == '[')
        {
            sb.Append("()[");
        }
        else
        {
            sb.Append(property[i]);
        }
    }
    return sb.ToString();
}

 

Using the validation helper

There are two ways to use the API we have just created. Either you can wrap your API call in the ValidationHelper.Call method, or you can implement a DotVVM exception filter that will call the HandleValidation method directly.

public async Task Save()
{
    await ValidationHelper.Call(Context, async () =>
    {
        await client.ApiEventsPostAsync(Item);
    });

    Context.RedirectToRoute("EventList");
}

 

In the user interface, the validation looks the same as always - you mark an element with Validator.Value="{value: property}" and configure the behavior of validation on the element itself, or on any of its ancestors, for example Validation.InvalidCssClass="has-error" to decorate the elements with the has-error class when the property is not valid.

The important thing is that you need to set the save button's Validation.Target to the object you are passing to the API, so that the property paths lead to the corresponding properties on the validation target.
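In the markup, that might look roughly like this (a sketch using the Item property from the viewmodel above and the properties mentioned in the two previous paragraphs):

<div Validator.Value="{value: Item.Title}" Validation.InvalidCssClass="has-error">
    <dot:TextBox Text="{value: Item.Title}" />
</div>

<dot:Button Text="Save" Click="{command: Save()}" Validation.Target="{value: Item}" />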

 

You can look at the ErrorDetail page and its viewmodel in the OpenEvents project repository to see how the things work.

 

Next Steps

Currently, this solution validates only on the backend, not in the DotVVM application and of course not in the browser. To make client-side validation work, the DotVVM application would have to see the validation attributes on the EventDetailDTO object. Right now, the object is generated from the Swagger metadata, so it doesn’t have these attributes.

The NSwag generator could be configured not to generate the DTOs, so we could keep them in a class library project referenced by both the admin portal and the backend app. This would make it possible to do the same validation in the admin portal, and because DotVVM would see the validation attributes, it could also enforce some validation rules in the browser.

 

My third live coding session is scheduled for Tuesday, January 2nd at 6:30 PM (Central European Time).

Last time, I built a simple user interface for editing the events in my registration portal. This time, I will do a little bit of CSS work, implement the validation and show how Bootstrap for DotVVM can make your life easier.

 

Watch the live stream on Twitch.