A long time ago, I learned that underestimating the architecture of a software project is much worse than over-engineering it. As the project evolves, there are always new requirements which completely break some of the initial assumptions. Any robustness that helps absorb larger changes can save a lot of time and sleepless nights.

Even the tiniest one-purpose script which generates some report can grow and grow, and suddenly it becomes a large system that runs a department of a global corporation. I have seen that so many times, and being lazy at the beginning of a project often means great trouble in the future.

But you know, we have a deadline…


Years ago, I tried to design a shared infrastructure for the projects we were doing in my company. Our typical project was a web-based line of business application with lots of CRUDs, but there were always plenty of complex custom workflows and integrations.

When you look at almost any demo of an ASP.NET application with Entity Framework, you can see the DbContext accessed right in the MVC controllers, and the EF entities used even in the views (where they can do lazy loading and all sorts of nasty things).

This approach works great for a simple demo, and it is actually good for beginners as it doesn’t get too complicated. But it is a nightmare in any large-scale application, as we have seen in many apps written by somebody else that we had to maintain later. It leads to performance issues (lazy loading everywhere), repetitive code (copy-paste, find & replace programming) and inconsistencies caused by the fact that you can access any table from any page in the application.


We have always strictly separated a “data access layer” which contained the Entity Framework stuff, a “business layer” with a domain model that was mapped from the EF entities, and a “presentation layer” on top.

The business layer has always been the most fascinating one for me – there is always a lot going on inside, and everyone does it quite differently.


Repositories and Query Objects

For some people, repositories are an anti-pattern. For others, Entity Framework DbSets are your repositories and there is no need to implement another useless layer on top of them.

However, I have found this layer very useful, so we routinely implement a repository over a DbSet (or any other database table or collection), for a dozen reasons. This extra layer gives us great flexibility when we need to implement features like soft delete or entity versioning, and it helps greatly in integration tests because it can be mocked nicely.

Unlike a lot of people, we don’t have an all-purpose GetAll() method returning IQueryable in the repositories. Our repositories can only Insert, Update, Delete and GetById. There are no methods which return multiple records (other than fetching a few entities by their IDs).

To query for multiple records, we use Query Objects. They are great for encapsulating complex queries and ensuring that the queries are not repeated across the entire application.

If you allow access to IQueryable, you will soon have expressions like productsRepo.GetAll().Where(p => p.CategoryId == category) scattered everywhere in the project. If you want to add soft delete checks later, it is very difficult to find all usages of the table.

By extracting this logic into a query object, we can have a class named ProductsByCategoryQuery in the project, which can be used in all places in the application. If we hit performance problems with Entity Framework (which happens with complex queries), we can re-implement the query object to use raw SQL while keeping the rest of the application unchanged, and so on.

Also, the query objects give us a unified way to do sorting and paging, which helps with building generic CRUD mechanisms.
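
To make this more concrete, here is a minimal sketch of what such a repository contract and query object could look like. The names (IRepository, Product, AppDbContext) are illustrative only and do not match the actual Riganti Utils Infrastructure API.

// Illustrative sketch only – the real infrastructure API differs in details.
public interface IRepository<TEntity, TKey>
{
    TEntity GetById(TKey id);
    void Insert(TEntity entity);
    void Update(TEntity entity);
    void Delete(TEntity entity);
    // deliberately no GetAll() returning IQueryable
}

// A query object encapsulating one specific query, including sorting and paging,
// so e.g. the soft delete check lives in exactly one place.
public class ProductsByCategoryQuery
{
    private readonly AppDbContext dbContext;    // hypothetical EF context

    public ProductsByCategoryQuery(AppDbContext dbContext)
    {
        this.dbContext = dbContext;
    }

    public int CategoryId { get; set; }
    public int Skip { get; set; }
    public int Take { get; set; } = 20;

    public List<Product> Execute()
    {
        return dbContext.Products
            .Where(p => p.CategoryId == CategoryId && !p.IsDeleted)
            .OrderBy(p => p.Name)
            .Skip(Skip)
            .Take(Take)
            .ToList();
    }
}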


Facades and Services

The business logic of the application is implemented in facades and service classes. I use the term “facade” for classes which are used directly from the presentation layer – facades expose all functions the UI needs.
All other classes with business logic are referred to as services; they can be grouped into quite complex hierarchies, depend on each other, and so on.

These facades and services use repos and queries for implementing everything the business layer does.
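
As a rough sketch (reusing the illustrative types from the repository example above, so again not the actual infrastructure API), a facade might look like this:

// A facade exposes exactly the operations the presentation layer needs.
public class ProductFacade
{
    private readonly IRepository<Product, int> productRepository;
    private readonly ProductsByCategoryQuery productsByCategoryQuery;

    public ProductFacade(IRepository<Product, int> productRepository, ProductsByCategoryQuery productsByCategoryQuery)
    {
        this.productRepository = productRepository;
        this.productsByCategoryQuery = productsByCategoryQuery;
    }

    // called directly from the presentation layer
    public List<Product> GetProductsInCategory(int categoryId)
    {
        productsByCategoryQuery.CategoryId = categoryId;
        return productsByCategoryQuery.Execute();
    }

    public void DeleteProduct(int productId)
    {
        var product = productRepository.GetById(productId);
        productRepository.Delete(product);
    }
}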


Model

In our case, the domain model of the application is typically a bunch of DTOs (data transfer objects). Unlike what most books say, I like the anemic model approach much more. I find it very impractical to inject dependencies into domain objects.

Because most of the domain model objects are very similar to EF entities, we have adopted AutoMapper. It is a really great tool and I like its ProjectTo() method that can translate the mappings to IQueryable.

There are some caveats, however. You should avoid nesting DTOs to prevent the queries from becoming overly complicated. We have also found that it is a very good idea to have multiple DTOs for each entity: list DTOs used in grids and lists, detail DTOs for the edit forms, basic DTOs for filling comboboxes, and more. Thanks to AutoMapper, most of our mappings are just CreateMap<TEntity, TDTO>() with no additional configuration.
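
A rough illustration of what this looks like (the entity and DTO names are made up, and this assumes the older static Mapper.Initialize API – newer AutoMapper versions pass a MapperConfiguration instance to ProjectTo instead):

// Mapping configuration – most maps need no extra setup.
Mapper.Initialize(cfg =>
{
    cfg.CreateMap<Product, ProductListDTO>();      // for grids and lists
    cfg.CreateMap<Product, ProductDetailDTO>();    // for edit forms
    cfg.CreateMap<Product, ProductBasicDTO>();     // e.g. Id + Name for comboboxes
});

// ProjectTo translates the mapping into the IQueryable, so EF only selects
// the columns the DTO actually needs.
var list = dbContext.Products
    .Where(p => !p.IsDeleted)
    .ProjectTo<ProductListDTO>()
    .ToList();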


…and it’s Open Source

I have shown how such a business layer can be designed in many of my conference and user group talks. Our Riganti Utils Infrastructure library is developed on GitHub and anyone is welcome to use it or take inspiration from it.

The current version of the infrastructure is 2, and we are currently in the process of designing the third version.


In the next part of this article, I will try to address the issues with v2 and show how to make the infrastructure easier to use.

Recently, I have been working on OpenEvents – an open source event registration system. 

Because I am going to give several talks about microservices architecture in the upcoming months, I have chosen a different approach to building this app so it can work as a demo.

I have separated the event management part and the registration part into two services and used Azure Service Bus to do the messaging between these two parts.


Which package?

There are two NuGet packages which you can use:

- WindowsAzure.ServiceBus – the original package targeting the full .NET Framework
- Microsoft.Azure.ServiceBus – the new package targeting .NET Standard

Both packages contain the queue and topic client classes and all the APIs you need to send, receive and work with messages.
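
For illustration, sending a message to a topic could look roughly like this (assuming the newer Microsoft.Azure.ServiceBus client; the topic name and payload variables are made up for this example):

// minimal sketch of publishing a JSON message to a topic
var topicClient = new TopicClient(serviceBusConnectionString, "event-created");

var json = JsonConvert.SerializeObject(eventDto);
var message = new Message(Encoding.UTF8.GetBytes(json))
{
    ContentType = "application/json"
};

await topicClient.SendAsync(message);
await topicClient.CloseAsync();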

The old package also contained the NamespaceManager class which could be used to manage topics, subscriptions and queues inside the Service Bus namespace.

It was very useful because the application could provision these resources at startup, so you didn’t have to prepare all the queues and topics manually. Especially if you have multiple environments (dev, test, staging, production), it is easier to let the app create what it needs than to maintain four environments by hand.
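
With the old package, that provisioning looked roughly like this (a sketch from memory using the old package's NamespaceManager, so treat the details as approximate):

// NamespaceManager from the old WindowsAzure.ServiceBus package
var namespaceManager = NamespaceManager.CreateFromConnectionString(serviceBusConnectionString);

// create the topic at startup if it doesn't exist yet
if (!await namespaceManager.TopicExistsAsync("event-created"))
{
    await namespaceManager.CreateTopicAsync("event-created");
}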


Resource Manager

The new package doesn’t include any API to manage these resources. The only way to create topics, subscriptions and queues programmatically is to use Azure Resource Manager.

Azure Resource Manager exposes a REST API, and there is a NuGet package with an API client for almost every Azure service. The package name always starts with Microsoft.Azure.Management, so we need Microsoft.Azure.Management.ServiceBus.

Because consuming the raw REST API is not so convenient, some Azure services also offer a package with the Fluent suffix. These packages contain wrappers which make working with the API easier.

In my app, I have used the Microsoft.Azure.Management.ServiceBus.Fluent NuGet package.


First, I need to authenticate and create the ServiceBusManager object so I can work with the resources inside the Service Bus namespace.

var credentials = SdkContext.AzureCredentialsFactory.FromServicePrincipal(CLIENT_ID, CLIENT_SECRET, TENANT_ID, AzureEnvironment.AzureGlobalCloud);
var serviceBusManager = ServiceBusManager.Authenticate(credentials, SUBSCRIPTION_ID);
serviceBusNamespace = serviceBusManager.Namespaces.GetByResourceGroup(RESOURCE_GROUP, RESOURCE_NAME);


Creating the topic is then quite easy:

public async Task<ITopic> EnsureTopicExists(string topicName)
{
    var topics = await serviceBusNamespace.Topics.ListAsync();
    var topic = topics.FirstOrDefault(t => t.Name == topicName);

    if (topic == null)
    {
        topic = await serviceBusNamespace.Topics.Define(topicName)
            .WithSizeInMB(1)
            ...      // other configuration
            .CreateAsync();
    }

    return topic;
}


You can use the same approach for queues, subscriptions and other resources.
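
For example, a queue version of the helper above could look like this (assuming the Queues collection of the Fluent package mirrors the Topics API shown earlier):

public async Task<IQueue> EnsureQueueExists(string queueName)
{
    var queues = await serviceBusNamespace.Queues.ListAsync();
    var queue = queues.FirstOrDefault(q => q.Name == queueName);

    if (queue == null)
    {
        queue = await serviceBusNamespace.Queues.Define(queueName)
            .WithSizeInMB(1)
            .CreateAsync();
    }

    return queue;
}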


Authentication

The last thing you need to handle is authentication for the Resource Manager API. To do that, I have created a service principal and granted it permissions to manage the Service Bus namespace.

1. Install the Azure PowerShell.


2. Log in to your Azure account:

Login-AzureRmAccount


3. If you have multiple subscriptions registered in your account, select the correct one. I am using the name of the subscription to identify it:

Select-AzureRmSubscription -SubscriptionName "SUBSCRIPTION_NAME"
Get-AzureRmSubscription -SubscriptionName "SUBSCRIPTION_NAME"


4. Note the TenantId and Id of the subscription from the output of the previous step – you will need them later.


5. Now, create the service principal. Make sure the password is strong enough and the name describes the purpose of the principal. The password of the principal will be the ClientSecret – do not lose it, as you will need it later.

$p = ConvertTo-SecureString -asplaintext -string "PRINCIPAL_PASSWORD" -force
$sp = New-AzureRmADServicePrincipal -DisplayName "PRINCIPAL_NAME" -Password $p


6. Retrieve the ApplicationId of the principal. It is often called ClientId – you will need it later.

$sp.ApplicationId


7. Assign the Contributor role to the service principal for the Service Bus resource. Use the name of the resource group and the exact name of the Service Bus namespace.

New-AzureRmRoleAssignment -ServicePrincipalName $sp.ApplicationId -ResourceGroupName RESOURCE_GROUP -ResourceName RESOURCE_NAME -RoleDefinitionName Contributor -ResourceType Microsoft.ServiceBus/namespaces


Now we should have all the information we need to make the code work.


Configuration

Keeping this information in the code is not a good idea. I am using Microsoft.Extensions.Configuration to store the Service Bus configuration and secrets. I have created a class that represents my configuration:

public class ServiceBusConfiguration
{

    public string ConnectionString { get; set; }


    public string ResourceGroup { get; set; }

    public string NamespaceName { get; set; }


    public string ClientId { get; set; }

    public string ClientSecret { get; set; }

    public string SubscriptionId { get; set; }

    public string TenantId { get; set; }
}


I have added the following section to the appsettings.json file in my application:

{
  ...
  "serviceBus": {
    "resourceGroup": "RESOURCE_GROUP",
    "namespaceName": "RESOURCE_NAME",
    "connectionString": "",
    "clientId": "",
    "clientSecret": "",
    "subscriptionId": "",
    "tenantId": ""
  },
  ... 
}


To get the configuration object for Service Bus, I can use the following code:

var config = Configuration.GetSection("serviceBus").Get<ServiceBusConfiguration>();
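
That configuration object can then feed the authentication code shown earlier; the wiring might look roughly like this:

// pass the configured values to the ARM fluent client from the earlier example
var credentials = SdkContext.AzureCredentialsFactory.FromServicePrincipal(
    config.ClientId, config.ClientSecret, config.TenantId, AzureEnvironment.AzureGlobalCloud);
var serviceBusManager = ServiceBusManager.Authenticate(credentials, config.SubscriptionId);
serviceBusNamespace = serviceBusManager.Namespaces.GetByResourceGroup(config.ResourceGroup, config.NamespaceName);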


Working with User Secrets

Since I don’t want to keep the secrets in source control, I am using User Secrets.

I have added the following section at the top of my .csproj file…

<PropertyGroup>
  <UserSecretsId>openevents</UserSecretsId>
</PropertyGroup>


…and the following section at the bottom:

<ItemGroup>
  <DotNetCliToolReference Include="Microsoft.Extensions.SecretManager.Tools" Version="2.0.0" />
</ItemGroup>


Now I can use the command line to store the secret configuration values outside of my project folder.

dotnet user-secrets set serviceBus:connectionString "SERVICE_BUS_CONNECTION_STRING"
dotnet user-secrets set serviceBus:subscriptionId SUBSCRIPTION_ID
dotnet user-secrets set serviceBus:tenantId TENANT_ID
dotnet user-secrets set serviceBus:clientId CLIENT_ID
dotnet user-secrets set serviceBus:clientSecret CLIENT_SECRET


Remember that CLIENT_ID is the ApplicationId value you have printed out in step 6. The CLIENT_SECRET is the password of the service principal.


The last thing you need to do is add user secrets to the configuration builder so they override the values from the appsettings.json file.

var builder = new ConfigurationBuilder()
    .SetBasePath(hostingEnvironment.ContentRootPath)
    .AddJsonFile("appsettings.json")
    .AddJsonFile($"appsettings.{hostingEnvironment.EnvironmentName}.json", optional: true)
    .AddEnvironmentVariables();

if (hostingEnvironment.IsDevelopment())
{
    builder.AddUserSecrets<Startup>();
}

Configuration = builder.Build();


By default, they are stored in the following location: c:\Users\USER_NAME\AppData\Roaming\Microsoft\UserSecrets

Remember that user secrets are designed for development purposes; they should not be used in a production environment.

In my last live coding session, I was working on a web application built with DotVVM which doesn’t connect to a database directly. It consumes a REST API instead, which is quite a common approach in microservices applications.

Application architecture

[Architecture diagram: the DotVVM admin portal talks to the backend service through its REST API.]

The validation logic should definitely be implemented in the backend service. The REST API can be invoked from other microservices, and we certainly don’t want to duplicate the logic in multiple places. In addition, the validation rules might require data which is not available in the admin portal (the uniqueness of the user’s e-mail address, for example).

Basically, the backend service needs to report the validation errors to the admin portal, and the admin portal needs to report the errors in the browser so the corresponding text fields can be highlighted.

 

The solution is quite easy, as both DotVVM and ASP.NET Core MVC use the same model validation mechanisms.

 

Implementing the Validation Logic

In my case, the API controller accepts and returns the following object (some properties were stripped out). I have used the standard validation attributes (Required) for the simple validation rules and implemented the IValidatableObject interface to provide enhanced validation logic for dependencies between individual fields.

public class EventDTO : IValidatableObject
{
    public string Id { get; set; }

    [Required]
    public string Title { get; set; }
    public string Description { get; set; }
    public List<EventDateDTO> Dates { get; set; } = new List<EventDateDTO>();
    ... 
    public DateTime RegistrationBeginDate { get; set; }
    public DateTime RegistrationEndDate { get; set; }

    [Range(1, int.MaxValue)]
    public int MaxAttendeeCount { get; set; }

    public IEnumerable<ValidationResult> Validate(ValidationContext validationContext)
    {
        if (!Dates.Any())
        {
            yield return new ValidationResult("The event must specify at least one date!");
        }

        if (RegistrationBeginDate >= RegistrationEndDate)
        {
            yield return new ValidationResult("The date must be greater than registration begin date!", new [] { nameof(RegistrationEndDate) });
        }

        for (var i = 0; i < Dates.Count; i++)
        {
            if (Dates[i].BeginDate < RegistrationEndDate)
            {
                yield return new ValidationResult("The date must be greater than registration end date!", new[] { nameof(Dates) + "[" + i + "]." + nameof(EventDateDTO.BeginDate) });
            }
            if (Dates[i].BeginDate >= Dates[i].EndDate)
            {
                yield return new ValidationResult("The date must be greater than begin date!", new[] { nameof(Dates) + "[" + i + "]." + nameof(EventDateDTO.EndDate) });
            }
        }
        ...
    }
}

Notice that when I validate the child objects, the property path is composed like this: Dates[0].BeginDate.

 

Reporting the validation errors on the API side

When the client posts an invalid object, I want the API to respond with HTTP 400 Bad Request and provide the list of errors. In ASP.NET Core MVC, there is the ModelState object which can be returned from the API. It is serialized as a map of property paths to the lists of errors for each property.

I have implemented a simple action filter which verifies that the model is valid. If not, it produces the HTTP 400 response.

public class ModelValidationFilterAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext context)
    {
        if (!context.ModelState.IsValid)
        {
            context.Result = new BadRequestObjectResult(context.ModelState);
        }
        else
        {
            base.OnActionExecuting(context);
        }
    }
}

You can apply this attribute to a specific action, or register it as a global filter (see the sketch after the example response). If something is wrong, I get a response like this from the API:

{
    "Title": [
        "The event title is required!"
    ],
    "RegistrationEndDate": [
        "The date must be greater than registration begin date!"
    ],
    "Dates[0].BeginDate": [
        "The date must be greater than registration end date!"
    ]
}
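
Registering it as a global filter is a one-liner in Startup.ConfigureServices (a sketch assuming the standard ASP.NET Core MVC setup):

services.AddMvc(options =>
{
    // run the model validation check for every action
    options.Filters.Add(new ModelValidationFilterAttribute());
});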

 

Consuming the API from the DotVVM app

I am using the Swashbuckle.AspNetCore library to expose the Swagger metadata of my REST API. In my DotVVM application, I am using NSwag to generate the client proxy classes.

This tooling can save quite a lot of time and is very helpful in larger projects. If you make any change in your API and regenerate the Swagger client, you will get compile errors in all the places that need to be adjusted, which is great.

On the other hand, I am still missing a lot of features in the Swagger pipeline, for example support for generic types. Also, even though NSwag provides a lot of extensibility points, I sometimes had to use nasty hacks or regex matching to make it generate the things I really needed.

NSwag generates the ApiClient class with async methods corresponding to the individual controller actions. I can call these methods from my DotVVM viewmodels.

public async Task Save()
{
    await client.ApiEventsPostAsync(Item);
    Context.RedirectToRoute("EventList");
}

Now, whenever I get HTTP 400 from the API, I need to read the list of validation errors, fill the DotVVM ModelState with them and report them to the client.

 

Reporting the errors from DotVVM

I have created a class called ValidationHelper with a Call method. It accepts a delegate and invokes it. If the delegate throws a SwaggerException with the status code 400, we handle the exception. There are actually two overloads - one for methods that don't return a value, one for methods returning a result.

public static class ValidationHelper
{

    public static async Task Call(IDotvvmRequestContext context, Func<Task> apiCall)
    {
        try
        {
            await apiCall();
        }
        catch (SwaggerException ex) when (ex.StatusCode == "400")
        {
            HandleValidation(context, ex);
        }
    }

    public static async Task<T> Call<T>(IDotvvmRequestContext context, Func<Task<T>> apiCall)
    {
        try
        {
            return await apiCall();
        }
        catch (SwaggerException ex) when (ex.StatusCode == "400")
        {
            HandleValidation(context, ex);
            return default(T);
        }
    }

    ...
}

The most interesting part is of course the HandleValidation method. It parses the response and puts the errors into the DotVVM model state:

private static void HandleValidation(IDotvvmRequestContext context, SwaggerException ex)
{
    var invalidProperties = JsonConvert.DeserializeObject<Dictionary<string, string[]>>(ex.Response);

    foreach (var property in invalidProperties)
    {
        foreach (var error in property.Value)
        {
            context.ModelState.Errors.Add(new ViewModelValidationError() { PropertyPath = ConvertPropertyName(property.Key), ErrorMessage = error });
        }
    }

    context.FailOnInvalidModelState();
}

And finally, there is one little caveat - DotVVM has a different format of property paths than ASP.NET MVC uses, because the viewmodel on the client side is wrapped in Knockout observables. Instead of using Dates[0].BeginDate, we need to use Dates()[0]().BeginDate, because all getters in Knockout are just function calls.

This hack won’t be necessary in the future versions of DotVVM as we plan to implement a mechanism that will detect and fix this automatically.

private static string ConvertPropertyName(string property)
{
    var sb = new StringBuilder();
    for (int i = 0; i < property.Length; i++)
    {
        if (property[i] == '.')
        {
            sb.Append("().");
        }
        else if (property[i] == '[')
        {
            sb.Append("()[");
        }
        else
        {
            sb.Append(property[i]);
        }
    }
    return sb.ToString();
}
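
For example, the property path from the API response above is converted like this:

var converted = ConvertPropertyName("Dates[0].BeginDate");
// converted == "Dates()[0]().BeginDate"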

 

Using the validation helper

There are two ways to use the API we have just created. You can wrap your API call in the ValidationHelper.Call method, or you can implement a DotVVM exception filter that calls the HandleValidation method directly.

public async Task Save()
{
    await ValidationHelper.Call(Context, async () =>
    {
        await client.ApiEventsPostAsync(Item);
    });

    Context.RedirectToRoute("EventList");
}

 

In the user interface, the validation looks the same as always - you mark an element with Validator.Value="{value: property}" and configure the behavior of validation on the element itself, or on any of its ancestors, for example Validation.InvalidCssClass="has-error" to decorate the elements with the has-error class when the property is not valid.

The important thing is that you need to set the save button's Validation.Target to the object you are passing to the API, so the property paths lead to the corresponding properties on the validation target.

 

You can look at the ErrorDetail page and its viewmodel in the OpenEvents project repository to see how things work.

 

Next Steps

Currently, this solution validates only in the backend service, not in the DotVVM application, and of course not in the browser. To make client-side validation work, the DotVVM application would have to see the validation attributes on the EventDetailDTO object. Right now, the object is generated from the Swagger metadata, so it doesn’t carry these attributes.

The NSwag generator could be configured not to generate the DTOs, so we could keep the DTOs in a class library project referenced by both the admin portal and the backend app. This would allow the same validation to run in the admin portal, and because DotVVM would see the validation attributes, it could enforce some validation rules in the browser.

 

My third live coding session is scheduled for Tuesday, January 2nd at 6:30 PM (Central European Time).

Last time, I built a simple user interface for editing events in my registration portal. This time, I will do a little bit of CSS work, implement validation, and show how Bootstrap for DotVVM can make your life easier.

 

Watch the live stream on Twitch.

The recording from the second live stream is here:


A few days ago, I did a live coding session - in the first part, I built the core of the event registration system.

I really enjoyed it, so here is the date for the next session: Tuesday, December 19 at 6:30 PM (Central European Time).

This time, I will be building the core of the admin portal in DotVVM, which will allow managing individual events and reservations.