Recently, we got a question on our Gitter chat about building markup controls that use binding expressions.

Basically, the requirement was to have a breadcrumb navigation control that would be used this way:

<cc:Breadcrumb DataSource="{value: ListNavParent}" 
               ItemActiveBinding="{value: ItemActive}" 
               PublicUrlBinding="{value: PublicUrl}" 
               TitleBinding="{value: Title}" />

The first parameter is DataSource – a hierarchy of pages in the navigation (Section 1 > Subsection A > Page C). The remaining parameters are binding expressions that will be evaluated on every item in the data source and provide the URL and title of the item, and information about whether the item is active.

DotVVM has several controls that work this way – ComboBox, for example. It has a DataSource property as well as ItemTextBinding, ItemValueBinding and so on.

However, it is not trivial to build a control like that. In this article, I’d like to suggest some ways to do it.

Rule 1: Use DataContext wherever possible

Writing such a control can be much simpler if you don’t try to deal with properties like DataSource and SomethingBinding. The control just renders the breadcrumb navigation (a bunch of links) and does nothing else, and the collection of links will probably be stored in the viewmodel anyway (you need to pass it to the DataSource property somehow). So you can get rid of the DataSource property and pass the collection of links to the control via the DataContext property. Inside the control, all bindings will be evaluated against the control’s DataContext.

And because the DataContext will be a collection of objects of a known type, making the Breadcrumb control is easy. If you want to use the control in a more generic way (for different kinds of objects), feel free to use an interface.
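For instance, such an interface could look like this (a sketch – the interface name and its members are my own invention, not part of the sample):

public interface IBreadcrumbLink
{
    // Any viewmodel item implementing these members can appear in the breadcrumb
    string Title { get; }
    string Url { get; }
    bool IsActive { get; }
}

The control’s markup would then bind to the interface members, and any viewmodel class implementing the interface could be used in the collection.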

First, let’s create a class which will represent an individual link in the navigation:

public class BreadcrumbLink
{
    public string Title { get; set; }

    public string Url { get; set; }

    public bool IsActive { get; set; }
}

It would be nice if we could also display a home icon in the control – then the control will need a URL for the home icon as well as the collection of links.

If you have multiple things you need to pass to the control, there are two options:

  • Pass one of the things (probably the main one) as the DataContext and create control properties for the other values:

    <cc:Breadcrumb DataContext="{value: Links}" HomeUrl="/" />

  • Create a tiny viewmodel object that holds all the information the control needs.

It depends on how “coupled” the values are to each other – if you need to aggregate data from different places in the viewmodel, the first option works better. If all the values are tied together and used from one place, the second approach is better.

Let’s create a tiny viewmodel object for the entire control:

public class BreadcrumbViewModel
{
    public string HomeUrl { get; set; }

    public List<BreadcrumbLink> Links { get; set; }
}

Now we can define the control like this:

@viewModel RepeaterBindingSample.Controls.BreadcrumbViewModel, RepeaterBindingSample

<ol class="breadcrumb">
    <li><a href="{value: HomeUrl}"><span class="glyphicon glyphicon-home" /></a></li>
    <dot:Repeater DataSource="{value: Links}" RenderWrapperTag="false">
        <li class-active="{value: IsActive}">
            <a href="{value: Url}">{{value: Title}}</a>
        </li>
    </dot:Repeater>
</ol>

In this case, the control doesn’t need a code-behind file, as it won’t need any control properties. All we need to do is prepare the BreadcrumbViewModel object in the page viewmodel (nesting viewmodels is a very powerful thing, btw) and pass it to the control. The @viewModel directive of the control enforces that the control can be used only when its DataContext is BreadcrumbViewModel. If you use the control in any other place, you’ll get an error – bindings in DotVVM are strongly-typed.

<cc:Breadcrumb DataContext="{value: Breadcrumb}" />

Since this control will probably be on all pages of the app, it can be used in the master page. In addition to that, the master page can require all pages to provide items for the breadcrumb navigation using an abstract method:

public abstract class MasterPageViewModel : DotvvmViewModelBase
{
    public BreadcrumbViewModel Breadcrumb { get; } = new BreadcrumbViewModel()
    {
        HomeUrl = "/"
    };

    public override Task Init()
    {
        if (!Context.IsPostBack)
        {
            Breadcrumb.Links = new List<BreadcrumbLink>(ProvideBreadcrumbLinks());
        }

        return base.Init();
    }

    protected abstract IEnumerable<BreadcrumbLink> ProvideBreadcrumbLinks();
}
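A concrete page viewmodel then just overrides the abstract method. A sketch (the page name and link values are made up for illustration):

public class ProductDetailViewModel : MasterPageViewModel
{
    protected override IEnumerable<BreadcrumbLink> ProvideBreadcrumbLinks()
    {
        // the links for this particular page, ending with the active item
        yield return new BreadcrumbLink() { Title = "Products", Url = "/products" };
        yield return new BreadcrumbLink() { Title = "Product Detail", Url = "/products/123", IsActive = true };
    }
}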

The conclusion is – pass data to controls using DataContext and avoid creating code behind for the controls if you can.

But what if I really want to work with custom binding expressions?

Building custom binding expressions is tricky, but very powerful. Before we start, let’s review how binding works in DotVVM.

The DotVVM controls define their properties using this awful boilerplate syntax:

public string HomeUrl
{
    get { return (string)GetValue(HomeUrlProperty); }
    set { SetValue(HomeUrlProperty, value); }
}
public static readonly DotvvmProperty HomeUrlProperty
    = DotvvmProperty.Register<string, Breadcrumb2>(c => c.HomeUrl, string.Empty);

Why is that? Because the property can contain either its value, or a data-binding expression (which is basically an object implementing the IBinding interface). DotVVM controls keep a dictionary of property values-or-bindings, and offer the GetValue and SetValue methods that work with the property values (including evaluation of the binding expressions). The only important thing here is the static readonly field HomeUrlProperty, which is a DotVVM property descriptor. It can specify the default value, and it supports inheritance from parent controls (if you set the DataContext property on some control, its value propagates to the child controls – unless a child control sets DataContext to something else), among other features.

There are several types of bindings – the most frequent are value and command bindings. But how do we construct the binding object?

The easiest way is to have it constructed by DotVVM – that means specifying it somewhere in a DotHTML file and having it compiled by the DotVVM view engine. That’s what happens when you declare a code-behind for your markup control:

@viewModel System.Object
@baseType RepeaterBindingSample.Controls.Breadcrumb2, RepeaterBindingSample

public class Breadcrumb2 : DotvvmMarkupControl
{
    public string HomeUrl
    {
        get { return (string)GetValue(HomeUrlProperty); }
        set { SetValue(HomeUrlProperty, value); }
    }
    public static readonly DotvvmProperty HomeUrlProperty
        = DotvvmProperty.Register<string, Breadcrumb2>(c => c.HomeUrl, string.Empty);
}


If you use the control on some page and look at what’s in the HomeUrl property (e.g. using GetValueRaw(HomeUrlProperty)), you’ll get the constructed binding object.
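The difference between the two methods can be sketched like this (assuming we are inside a method of the control):

// GetValueRaw returns whatever is stored in the property - here the IBinding object itself
var binding = GetValueRaw(HomeUrlProperty);

// GetValue evaluates the binding against the current DataContext and returns the result
var url = (string)GetValue(HomeUrlProperty);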

There are several things inside the binding object – the bare minimum are these:

  • DataContextStack – this structure represents the hierarchy of types in the DataContext path. If the binding is in the root scope of the page, the stack contains only PageViewModel; if it’s nested in a Repeater, there will be the PageViewModel and BreadcrumbLink types. Please note that these are only types, not the actual values. The binding must know which type is _this, which is _parent and which is _root.
  • A LINQ expression that represents the binding itself. It is strongly-typed, it can be compiled to IL so it’s fast to evaluate, and it can be quite easily translated to JavaScript because it’s an expression tree. The expression is basically a lambda with one parameter of type object[] – a stack of DataContexts (not types but the actual values – they are passed to the binding so it can be evaluated). The element 0 is _this, the element 1 is _parent, and the last one is _root.

There can be other things, like translated Knockout expression of the binding, original binding string and so on, but they are either optional or will be generated automatically when needed. If you want to construct your own binding, you need at least the DataContextStack and the expression.

In order to build a binding, you need three things: an instance of BindingCompilationService, the expression, and the DataContextStack.

You can obtain the BindingCompilationService from dependency injection – just request it in the control constructor.
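A sketch of such a constructor (assuming the control class is called Breadcrumb2, as in the examples below):

public class Breadcrumb2 : DotvvmMarkupControl
{
    private readonly BindingCompilationService bindingCompilationService;

    // DotVVM resolves constructor parameters from the DI container
    public Breadcrumb2(BindingCompilationService bindingCompilationService)
    {
        this.bindingCompilationService = bindingCompilationService;
    }
}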

The DataContextStack can be obtained by calling control.GetDataContextType().

And finally, you need the expression – you can define it yourself like this:

Expression<Func<object[], string>> expr = contexts => ((BreadcrumbViewModel)contexts[0]).HomeUrl;

Note that the expression is always a Func<object[], ResultType>. The contexts parameter is the array of data contexts – the first element (index 0) is _this. The code snippet above is equivalent to {value: _this.HomeUrl} or just {value: HomeUrl}. The cast to the specific type is needed just because contexts is an array of objects.

To build the binding and pass it, for instance, to the Literal control, you need something like this:

// build the binding for the literal yourself
Expression<Func<object[], string>> expr = contexts => ((BreadcrumbViewModel)contexts[0]).HomeUrl;
var dataContextStack = this.GetDataContextType();
var binding = ValueBindingExpression.CreateBinding(bindingCompilationService, expr, dataContextStack);

// create the literal control and add it as a child
var literal = new Literal();
literal.SetBinding(Literal.TextProperty, binding);
Children.Add(literal);

So now you know how to work with bindings – if you’d like to create a control with a property that accepts a binding, or if you need to compose your own binding, that’s how this is done.

Now how would I create a control as described at the beginning?

If you really want to create a control which is used in the following way, there are some extra things that need to be done.

<cc:Breadcrumb DataSource="{value: ListNavParent}" ItemActiveBinding="{value: ItemActive}" PublicUrlBinding="{value: PublicUrl}" TitleBinding="{value: Title}" />

The DataSource property contains a binding that is evaluated in the same DataContextStack in which the control lives. If you use this control in the root scope of the page, its DataContext will be just PageViewModel.

But the remaining properties will actually be evaluated on the individual items in the DataSource collection. They need to have the DataContextStack equal to the (PageViewModel, BreadcrumbLink) pair. Also, the IntelliSense in Visual Studio will need to know that so it could offer you the property names.

That’s why these properties have to be decorated with DataContextChange attributes. These attributes can stack on top of each other and basically they tell how to change the DataContextStack in order to get the result.

The TitleBinding property will be declared like this:

[ControlPropertyTypeDataContextChange(order: 0, propertyName: nameof(DataSource))]
[CollectionElementDataContextChange(order: 1)]
public IValueBinding<string> TitleBinding
{
    get { return (IValueBinding<string>)GetValue(TitleBindingProperty); }
    set { SetValue(TitleBindingProperty, value); }
}
public static readonly DotvvmProperty TitleBindingProperty
    = DotvvmProperty.Register<IValueBinding<string>, Breadcrumb2>(nameof(TitleBinding));

The first attribute says “let’s take the type of the value assigned to the DataSource property”. The second says “let’s take that type; it should be a collection or an array, so let’s take the element type of this and use it as a second level in the DataContextStack”.

So, if you assign a collection of type List<BreadcrumbLink> to the DataSource property, the DataContextStack of the TitleBinding property will be the (PageViewModel, BreadcrumbLink) pair.

Notice that the type of the property is IValueBinding<string> – this hints to everyone that they shouldn’t expect to find the “actual value” in the property.

Also, if you use the binding generated in this property, make sure you use it in the correct DataContextStack.

How to use a binding object in DotHTML?

Unfortunately, right now you can’t. I think it is possible to write an attached property that would do that (I will try it), but DotVVM doesn’t ship with any option to do that. 

However, it is possible to generate the controls from C# code. Theoretically, you can combine markup controls with the code-only approach of working with bindings, but sometimes it gets even more complicated, so I prefer using a code-only control in these cases.

Let’s start by defining the Breadcrumb2 control and its properties:

public class Breadcrumb2 : ItemsControl
{
    public string HomeUrl
    {
        get { return (string)GetValue(HomeUrlProperty); }
        set { SetValue(HomeUrlProperty, value); }
    }
    public static readonly DotvvmProperty HomeUrlProperty
        = DotvvmProperty.Register<string, Breadcrumb2>(c => c.HomeUrl, string.Empty);

    [ControlPropertyTypeDataContextChange(order: 0, propertyName: nameof(DataSource))]
    [CollectionElementDataContextChange(order: 1)]
    public IValueBinding ItemActiveBinding
    {
        get { return (IValueBinding)GetValue(ItemActiveBindingProperty); }
        set { SetValue(ItemActiveBindingProperty, value); }
    }
    public static readonly DotvvmProperty ItemActiveBindingProperty
        = DotvvmProperty.Register<IValueBinding, Breadcrumb2>(nameof(ItemActiveBinding));

    [ControlPropertyTypeDataContextChange(order: 0, propertyName: nameof(DataSource))]
    [CollectionElementDataContextChange(order: 1)]
    public IValueBinding PublicUrlBinding
    {
        get { return (IValueBinding)GetValue(PublicUrlBindingProperty); }
        set { SetValue(PublicUrlBindingProperty, value); }
    }
    public static readonly DotvvmProperty PublicUrlBindingProperty
        = DotvvmProperty.Register<IValueBinding, Breadcrumb2>(nameof(PublicUrlBinding));

    [ControlPropertyTypeDataContextChange(order: 0, propertyName: nameof(DataSource))]
    [CollectionElementDataContextChange(order: 1)]
    public IValueBinding TitleBinding
    {
        get { return (IValueBinding)GetValue(TitleBindingProperty); }
        set { SetValue(TitleBindingProperty, value); }
    }
    public static readonly DotvvmProperty TitleBindingProperty
        = DotvvmProperty.Register<IValueBinding, Breadcrumb2>(nameof(TitleBinding));
}

The class inherits from ItemsControl, a handy base class common to all controls that have a DataSource property. Building your own DataSource property is not that easy if you want it to support all the nice things like the _index variable – you can look at the ItemsControl source code to see how it’s done.

Otherwise, the code snippet above is not complicated – it is just long. It declares the HomeUrl property as well as the ItemActiveBinding, PublicUrlBinding and TitleBinding with the DataContextChange attributes.

Now, let’s build the contents of the control. Since it doesn’t depend on the actual data passed to the control, we can do this in the Init phase – in general, the earlier the better.

protected override void OnInit(IDotvvmRequestContext context)
{
    var ol = new HtmlGenericControl("ol");
    ol.Attributes["class"] = "breadcrumb";

    ol.Children.Add(BuildHomeItem());
    ol.Children.Add(BuildRepeater());
    Children.Add(ol);

    base.OnInit(context);
}

The BuildHomeItem method just builds the first piece of markup:

<li><a href="{value: HomeUrl}"><span class="glyphicon glyphicon-home" /></a></li>
private HtmlGenericControl BuildHomeItem()
{
    var homeLi = new HtmlGenericControl("li");
        var homeLink = new HtmlGenericControl("a");
            homeLink.Attributes["href"] = GetValueRaw(HomeUrlProperty);     // hard-coded value or binding

            var homeSpan = new HtmlGenericControl("span");
                homeSpan.Attributes["class"] = "glyphicon glyphicon-home";
            homeLink.Children.Add(homeSpan);
        homeLi.Children.Add(homeLink);
    return homeLi;
}

Btw, I like to nest the child control creation in inner blocks – it helps me orient myself in large hierarchies of controls.

Creating the Repeater involves implementing the ITemplate interface:

<dot:Repeater DataSource="{value: Links}" RenderWrapperTag="false">
    <li class-active="{value: IsActive}">
        <a href="{value: Url}">{{value: Title}}</a>
    </li>
</dot:Repeater>

private Repeater BuildRepeater()
{
    var repeater = new Repeater();
    repeater.RenderWrapperTag = false;
    repeater.ItemTemplate = new BreadcrumbLinkTemplate(this);
    repeater.SetBinding(Repeater.DataSourceProperty, GetValueBinding(DataSourceProperty));

    return repeater;
}

internal class BreadcrumbLinkTemplate : ITemplate
{
    private Breadcrumb2 parent;

    public BreadcrumbLinkTemplate(Breadcrumb2 parent)
    {
        this.parent = parent;
    }

    public void BuildContent(IDotvvmRequestContext context, DotvvmControl container)
    {
        var li = new HtmlGenericControl("li");
            li.CssClasses.AddBinding("active", parent.GetValueBinding(Breadcrumb2.ItemActiveBindingProperty));

            var link = new HtmlGenericControl("a");
                link.Attributes["href"] = parent.GetValueBinding(Breadcrumb2.PublicUrlBindingProperty);
                link.SetBinding(HtmlGenericControl.InnerTextProperty, parent.GetValueBinding(Breadcrumb2.TitleBindingProperty));
            li.Children.Add(link);
        container.Children.Add(li);
    }
}

And we’re done – you can find the complete code on GitHub.

As you can see, building the control the second way is much more complicated. Partly, it’s because you are trying to build a very universal control that is agnostic to the type of items passed to the DataSource collection and accepts binding expressions as parameters. Thanks to the strongly-typed nature of DotVVM, you need to deal with bindings, data context stacks and data context changes – things you wouldn’t have in a dynamic world. And finally, we haven’t had much time to make this simpler – we have focused on areas other than simplifying control development, although it is an important one too.

We had some ideas and we even have an experimental implementation, but we didn’t have the time to finalize the pull request and make it a real framework feature.

Recently, I got invited to speak at DotFest – a .NET developer conference in Novosibirsk. Due to the COVID pandemic, they had to make the event online, so the organizers set up a test call with me to make sure everything is working.

During the call, we noticed that OBS (a great piece of video streaming software) didn’t work on my laptop. We needed to grab my screen and send it to an RTSP server, but all I saw in OBS was a black screen. The same thing happened when I tried to add a webcam image to the scene – also a black screen.

I had never seen this before – I have been using OBS for years and it has always worked pretty well – it’s a very reliable piece of software. But I had never used OBS on my new laptop before – I always stream from a separate device using my awesome streaming setup.

OBS shows only black screen when using Display Capture or Video Device Capture

After a bit of googling, I found this thread, which described the problem and offered a working solution.

Since the thread is quite long, I’ve extracted the necessary steps including screenshots.

The problem is that my laptop has two graphics adapters – an integrated Intel Iris Plus Graphics and an NVIDIA GeForce MX330. Windows has an automated algorithm to decide which adapter will be used by each app, and it chose the wrong one.

I had to go to Graphics Settings (in the Windows Settings app – just search for it in the Start menu) and create a preference for obs64.exe:

Creating a preference for a Desktop app

After clicking Options, I needed to select the integrated graphics (Power saving) card for display capture to work.

Select the power saving profile

After restarting OBS, everything works well.

According to the forum thread, for game streaming or NVENC, you should probably select the High performance profile.

Recently, we have been fighting with a weird issue happening in one of our ASP.NET Core web apps hosted in Azure App Service.

The app was using ASP.NET Core Identity with cookie authentication. Customers reported to us that the application randomly signs them out.

What confused us at first was the fact that the app was being used mostly on mobile devices as a PWA (Progressive Web App), and the iOS version was powered by Cordova. Since we had been fighting with several issues on Cordova, it was our main suspect – originally we thought that Cordova somehow deletes the cookies.

Cookies were all right

Of course, first we made sure that the application issues a correct cookie when you log in. We had been using the default settings of ASP.NET Core Identity (14-day cookie validity with sliding expiration enabled).

We got most of the problem reports from iOS users – that’s why we started suspecting Cordova. We googled many weird cookie behaviors, some of them connected with the new WKWebView component, but most of the articles and forum posts concerned session cookies, which normally get lost when you close the application. Our cookies were persistent with a specified expiration of 14 days, so that wasn’t the issue.

It took us some time to figure out that the issue was not present only in the Cordova app, but everywhere – later we opened the app in a browser on a PC and it signed us out too.

What was strange: the cookie was still there, it had not yet expired, and I checked with Fiddler that it was actually sent to the server when I refreshed the page.

But the tokens…

Then I got an idea – maybe the cookie was still valid, but the token inside had expired. Close, but not quite – it wasn’t the real cause, but it helped me find the real problem.

I tried to decode the authentication cookie to see whether there is some token expiration date or anything that could invalidate it after some time.

Thanks to this StackOverflow thread, I created a simple console app, loaded my token into it, and got the error message “The key {xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx} was not found in the key ring.”

var services = new ServiceCollection();
services.AddDataProtection();   // register Data Protection so IDataProtectionProvider can be resolved
var serviceProvider = services.BuildServiceProvider();

var cookie = WebUtility.UrlDecode("<the token from cookie>");

var dataProtectionProvider = serviceProvider.GetRequiredService<IDataProtectionProvider>();
var dataProtector = dataProtectionProvider.CreateProtector("Microsoft.AspNetCore.Authentication.Cookies.CookieAuthenticationMiddleware", "Identity.Application", "v2");

var specialUtf8Encoding = new UTF8Encoding(encoderShouldEmitUTF8Identifier: false, throwOnInvalidBytes: true);
var protectedBytes = Base64UrlTextEncoder.Decode(cookie);
var plainBytes = dataProtector.Unprotect(protectedBytes);
var plainText = specialUtf8Encoding.GetString(plainBytes);

var ticketDataFormat = new TicketDataFormat(dataProtector);
var ticket = ticketDataFormat.Unprotect(cookie);

OK, it was actually a good thing to get this exception – of course I didn’t have the key on my PC, so I wasn’t able to decode the cookie.

So I looked in the Kudu console of my Azure App Service (just go to https://<yourappname>.scm.azurewebsites.net). This console offers many tools, and there is an option to browse the filesystem of the web app.

Where are your data protection keys?

The tokens in authentication cookies are encrypted and signed using keys that are provided as part of the ASP.NET Core Data Protection API. There are a lot of options where you can store your keys.

We had the default configuration, which stores the keys in the filesystem. When the app runs in Azure App Service, the keys are stored in the following path:

%HOME%\ASP.NET\DataProtection-Keys
As the docs say, this folder is backed by network storage and is synchronized across all machines hosting the app. Even if the application runs in multiple instances, they all see this folder and can share the keys.

When I looked in that folder, no key with the GUID from the error message was present. That’s the reason why the cookie was not accepted and the app redirected to the login page.

But why was the key not in that folder? It must have been there before, otherwise the app wouldn’t have given me that cookie in the first place.

Deployment Slots

By default, the web app runs from a different directory, so there is no chance the keys directory could be overwritten during deployment.

But suddenly I saw the problem – we were using slot deployments. First, we would deploy the app to the staging slot, and after making sure it was running, we would swap the slots. And each slot has its own filesystem. When I opened Kudu for the staging app, the correct key was there.

Quick & Dirty Solution

Because we wanted to resolve the issue as fast as possible, I took the key from the staging slot and uploaded it to production, and also copied the production key back to the staging slot. Both slots then had the same keys, so all authentication cookies issued by the app could be decoded properly.

When I refreshed the page in my browser, I was immediately signed in (the cookie was still there).

However, this is not a permanent solution – the keys expire every 90 days and are recreated, so you’d need to do the same thing again and again.

Correct Solution

The preferred solution is to store the keys in Blob Storage and protect them with Azure Key Vault. These services are designed for such purposes, and if you use the same storage and credentials for both the staging and production slots, it will work reliably.
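With the ASP.NET Core Data Protection API, this can be set up in ConfigureServices. The following is a sketch (assuming the Azure Storage and Key Vault data protection packages are installed; the blob URI, key identifier and credentials are placeholders you would replace with your own):

public void ConfigureServices(IServiceCollection services)
{
    services.AddDataProtection()
        // both slots must point to the same blob so they share one key ring
        .PersistKeysToAzureBlobStorage(new Uri("<blob-uri-including-sas-token>"))
        // the keys at rest are additionally encrypted with a Key Vault key
        .ProtectKeysWithAzureKeyVault("<key-identifier>", "<client-id>", "<client-secret>");
}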

By the way, a similar issue will probably occur in Docker containers (unless the keys are stored on a shared persistent volume). The filesystem in a container is ephemeral, and any changes may be lost when the container is restarted or recreated.

So, if you are using deployment slots, make sure that both have access to the same data protection keys.

In the previous post, I described how to use the Windows Packaging Project in Visual Studio to build an MSIX package manually.

I also mentioned the difference between running the application normally (when your WPF/WinForms app project is marked as a startup project) and running the application from a package (when the package project is the startup project).

For normal development, you’ll probably be running the application normally, but it is useful to have the opportunity to test the package directly from Visual Studio – there may be tiny differences between these two modes.

Continuous everything

Once your application is ready, you’ll want to distribute it to the end users. Commonly there is also a small group of “beta testers” or “early adopters” who’ll want to get preview releases. You may also want to have internal builds.

Therefore, we can create several release channels:

  • Dev (not automated, built directly from Visual Studio)
  • Preview (built and published automatically for each commit in master)
  • Release (built automatically, published manually)

Since the Dev version will not go through the CI/CD process, let’s update the Package manifest to be the manifest for the Dev version.

  1. Set the Package Identity to “yourappuniquename-dev”.
  2. Add a “[DEV]” suffix to the Display Name so you can easily recognize the installed versions.

When you set the Package project as a startup project and run it, the application should install automatically in the Start menu.

Automated builds of MSIX packages

I am using Azure DevOps which is a natural choice for most .NET developers. However, if you prefer any other CI/CD solution, don’t worry – everything we’ll do can be automated using command line tools, and Azure DevOps only provides UI for these operations.

Creating the new Build pipeline

I have created a new Build pipeline using the new YAML pipelines. They don’t have such a comfortable UI as the classic pipelines, but they store the entire pipeline together with the source code and can be versioned with full utilization of branches, which is super-important in larger projects because the structure of the solution evolves over time, there are always some new components, and so on.

If you store your source code in Azure Repos, that is the easiest way. If your repo is elsewhere (external Git, SVN), choose the appropriate option.

My sample project is on GitHub, so I am choosing it.

Creating the new pipeline

In the next step, I selected the repository and authenticated with my GitHub account.

Now select .NET Desktop even if your app is in .NET Core – we’ll remove most of the code anyway:

Choosing starter template

Azure DevOps now opens the YAML editor you can use to define your build process.

YAML editor

Building the packages

Our process consists of 3 steps:

  • Updating the package manifest to match the correct channel. Since the manifest is an XML file, we can do this using PowerShell.
  • Building the solution (with some MSBuild parameters added so the package is produced)
  • Publishing the MSIX and related files as build artifacts

Since we want to have the Preview and Production versions consistent, we’ll build both within the same build from the same commit. Thanks to this, once you make sure the Preview version is working, you can publish the same source code as a stable version.

The YAML file starts with the following code:

trigger:
- master

pool:
  vmImage: 'windows-latest'

variables:
  solution: '**/*.sln'
  buildPlatform: 'x86'
  buildConfiguration: 'Release'
  packageVersion: 1.0.0
  packageName: 'WUG Days Demo App'

jobs:
- job: BuildMSIX
  strategy:
    matrix:
      Preview:
        channelName: 'preview'
        packageIdSuffix: '-preview'
        packageNameSuffix: ' [PREVIEW]'
      Production:
        channelName: 'production'
        packageIdSuffix: ''
        packageNameSuffix: ''
  • The first section says that this pipeline is triggered on any commit made to the master branch.
  • The second section says that we’ll be using the latest version of the built-in Windows agents. These built-in agents come with Azure DevOps and you’ll probably have some free build minutes. If you’d like to use your own VMs, make sure they have the latest Visual Studio and Windows 10 SDK installed.
  • The third section defines variables for the entire build pipeline.
  • Since we want to repeat the same steps for two channels (Preview and Production), I’ve added the strategy element. It defines three additional variables (channelName, packageIdSuffix and packageNameSuffix) for the two runs (Preview and Production).
  steps:
  - task: PowerShell@2
    inputs:
      targetType: 'inline'
      script: |
        [xml]$manifest = Get-Content ".\src\WpfCoreDemo.Package\Package.appxmanifest"
        $manifest.Package.Identity.Version = "$(packageVersion).$(Build.BuildId)"
        $manifest.Package.Identity.Name = "demo-864d9095-955f-4d3c-adb0-6574a5acb88b$(packageIdSuffix)"
        $manifest.Package.Properties.DisplayName = "$(packageName)$(packageNameSuffix)"
        $manifest.Package.Applications.Application.VisualElements.DisplayName = "$(packageName)$(packageNameSuffix)"
        $manifest.Save((Resolve-Path ".\src\WpfCoreDemo.Package\Package.appxmanifest"))

  - task: VSBuild@1
    inputs:
      solution: '$(solution)'
      msbuildArgs: '/restore /p:AppInstallerUri=$(msixInstallUrl)/$(channelName) /p:AppxPackageDir="$(Build.ArtifactStagingDirectory)/$(channelName)" /p:UapAppxPackageBuildMode=SideLoadOnly /p:GenerateAppInstallerFile=true'
      platform: '$(buildPlatform)'
      configuration: '$(buildConfiguration)'

  - task: PublishBuildArtifacts@1
    inputs:
      PathtoPublish: '$(Build.ArtifactStagingDirectory)'
      ArtifactName: 'drop'
      publishLocation: 'Container'

In the rest of the file, we define the build tasks:

  • The first is a PowerShell script that opens the manifest file as XML and updates the package identity (version, name) and the display name (it’s there twice). The version number is composed of the static packageVersion variable (1.0.0) and the built-in Build.BuildId variable, a numeric sequence representing the number of builds.
  • The second task is the MSBuild with a few parameters:
    • /restore says that we want to do NuGet restore during the build
    • /p:AppInstallerUri specifies the URL where the MSIX package will be published
    • /p:AppxPackageDir is the path where we want to have the package outputs (the default is projectDir/AppPackages) – I am putting it in the staging directory for the artifacts task
    • /p:UapAppxPackageBuildMode=SideLoadOnly means that the package won’t go to the Windows Store and will be side-loaded
    • /p:GenerateAppInstallerFile tells MSBuild to generate the .appinstaller file – a simple file that defines the latest version of the package and can be used to check whether there are new versions of the app
  • The third task just takes the artifacts staging directory and publishes it as a result of the build.
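The strategy element mentioned above could be sketched like this (the exact values of the matrix variables are my assumptions based on the description, not taken from the original pipeline):

```yaml
strategy:
  matrix:
    Preview:
      channelName: 'preview'
      packageIdSuffix: '-preview'
      packageNameSuffix: ' (Preview)'
    Production:
      channelName: 'production'
      packageIdSuffix: ''
      packageNameSuffix: ''
```

With this in place, the tasks in the job run once per matrix entry, so the manifest patching and the MSBuild invocation produce two differently named packages in two channel folders.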

Build progress

When the build finishes, your packages will be published in the drop artifact:

Published build artifacts

You can see that there are two folders in the artifact, and each holds the index.html page, the app installer file and a versioned folder with the MSIX package itself.


That’s it for building the packages. In the next post, I’ll show how to release them.

For the last few years, my company has been building mostly web applications. The demand for desktop applications was very low, and even though we had some use cases where a desktop app would have been less costly, our customers preferred web solutions. The uncertain future of WPF, together with low interest in UWP, indicated that the web was practically the only way to go.

The situation changed a bit when Microsoft announced that .NET Core 3.0 would support WPF and WinForms. At about the same time, we got a customer who wanted us to build a large custom point-of-sale solution, and from all the choices we had, WPF sounded like the most viable option. There were many desktop-specific requirements in the project, for example printing different kinds of documents (invoices, sales receipts) using different printers, an integration with credit card terminals, and more.

Deployment of desktop apps

Together with WPF and WinForms on .NET Core, Microsoft also started pushing a new technology for deploying desktop apps: MSIX.

It is conceptually similar to ClickOnce (simple one-click installation, automatic updates, and more), but it is much more flexible. The most severe pain we had with ClickOnce came with large and complicated apps: we hit many issues and obstacles during installation and upgrades. Users were forced to uninstall the app and re-install it, and sometimes they had to delete a folder on disk or clean something in the registry to make ClickOnce install the app.

MSIX should be more reliable, as it was designed with a wide range of Windows applications in mind. It is a universal application packaging format that can be used for classic Win32 apps as well as for .NET and the new UWP applications.

It is also secure as the app installed from a package runs in a container - it cannot change system configuration, and all writes in the system folders or registry are virtualized. When you uninstall the app, no garbage should remain in your system.

You can choose to distribute the app packages manually, or you can use the Windows Store for that. The nice thing is that you can avoid the Windows Store entirely and distribute the MSIX any way you want. The most natural way is to publish the package on a network share or on an internal web server so it can be accessed over HTTPS.

Every step in the package building process can be automated using command-line tools, which allows us to embrace DevOps practices we got used to from the web world.
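As a rough illustration, packaging and signing can be driven by the MakeAppx and SignTool utilities from the Windows SDK. The folder, file, and certificate names below are placeholders, not values from this project:

```
makeappx pack /d .\PackageFiles /p WpfCoreDemo.msix
signtool sign /fd SHA256 /f MyCertificate.pfx /p MyPassword WpfCoreDemo.msix
```

In practice, as shown later in this series, you rarely call these tools directly – the MSBuild targets of the packaging project invoke them for you.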

Our WPF project started when .NET Core 3.0 was in an early preview, but we decided to try both WPF on .NET Core and the new MSIX deployment model.

The Simplest Scenario: Building a package for a WPF app

In my sample project, I have a simple WPF app that uses .NET Core 3.0.

The first step is to add a Windows Application Packaging Project to the solution:

Adding a Windows Application Packaging Project

The Windows Package project contains a manifest file. It is an XML file, but when you open it in Visual Studio, there is an editor for it.

Application Manifest editor

The most important fields are Display Name on the first tab, and Package Name and Version on the Packaging tab.

There are also several image files with the app icon, Windows Store icon, splash screen image, and so on.

Don't forget to make sure the WPF app is referenced in the Applications folder of the Windows Package project.

Application referenced in the Packaging project

Now, when you set the WPF project as the startup project and run it, the application runs in the classic, non-packaged mode, the same way apps have always worked in Windows. The app can do anything your user has permission to do.

However, when you set the Package project as a startup project and run it, the application will run from the package.

There is a NuGet package called Microsoft.Windows.SDK.Contracts that contains the Package class - you can use Package.Current to access information about the application package - the name, version, identity, and so on. There is even an API to check for or download updates of the package.

public static PackageInfo GetPackageInfo()
{
    try
    {
        return new PackageInfo()
        {
            IsPackaged = true,
            Version = Package.Current.Id.Version.Major + "." + Package.Current.Id.Version.Minor + "." + Package.Current.Id.Version.Build + "." + Package.Current.Id.Version.Revision,
            Name = Package.Current.DisplayName,
            AppInstallerUri = Package.Current.GetAppInstallerInfo()?.Uri.ToString()
        };
    }
    catch (InvalidOperationException)
    {
        // the app is not running from a package, return an empty info
        return new PackageInfo();
    }
}

When you right-click the Package project and choose Publish > Create App Packages, there is a wizard that helps you with building the MSIX package.

Create App Packages



Update the installer location to either a UNC path or a web URL.


The process also creates a simple web page with information about the app and a button to install it. Aside from the MSIX package, there is also an App Installer file holding information about the app, its latest version, and a path to its MSIX package.
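A minimal .appinstaller file looks roughly like the sketch below. The package name matches the one used earlier in this series, but the URLs, publisher, and version are made-up placeholders:

```xml
<?xml version="1.0" encoding="utf-8"?>
<AppInstaller xmlns="http://schemas.microsoft.com/appx/appinstaller/2017/2"
              Uri="https://example.com/install/WpfCoreDemo.appinstaller"
              Version="1.0.0.123">
  <MainPackage Name="demo-864d9095-955f-4d3c-adb0-6574a5acb88b"
               Publisher="CN=ExamplePublisher"
               Version="1.0.0.123"
               Uri="https://example.com/install/WpfCoreDemo_1.0.0.123.msix" />
  <UpdateSettings>
    <OnLaunch HoursBetweenUpdateChecks="0" />
  </UpdateSettings>
</AppInstaller>
```

The Windows App Installer reads this file from its published location, compares the Version element with the installed package, and downloads the referenced MSIX when a newer version is available.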

You can copy these files on a network share, or publish them at the URL you specified in the wizard.

Package with web page and app installer file

Installation page

When you change something in the project, you can publish the package again and upload the files to the web server or the UNC share.

The application will check for updates when it starts, and the next time you launch it, the update will install automatically.


I've just got through the most straightforward scenario for MSIX. In larger projects, you will need to have multiple release channels (preview and stable), you will need to sign your packages with a trusted certificate, and you will want to build the packages automatically in the DevOps pipeline.

I'll focus on all these topics in the next parts of this series. Stay tuned.