Get started with Windows ML

Important

The Windows ML APIs are currently experimental and not supported for use in a production environment. If your app tries out these APIs, don't publish it to the Microsoft Store.

This topic shows you how to install and use Windows ML to discover, download, and register execution providers (EPs) for use with the ONNX Runtime shipped with Windows ML. Windows ML handles the complexity of package management and hardware selection, automatically downloading the latest execution providers compatible with your device's hardware.

If you're not already familiar with the ONNX Runtime, we suggest reading the ONNX Runtime docs. In short, Windows ML provides a shared Windows-wide copy of the ONNX Runtime, plus the ability to dynamically download execution providers (EPs).

Prerequisites

  • Windows 11 PC running version 24H2 (build 26100) or greater
  • Language-specific prerequisites (listed below for C#):
      • .NET 6 or greater
      • Targeting a Windows 10-specific TFM such as net6.0-windows10.0.19041.0 or greater
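For a C# project, the prerequisites above translate into project-file settings. As a minimal sketch (the exact TFM and output type depend on your project), a csproj might look like:

```xml
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <!-- A Windows 10-specific TFM, as described in the prerequisites -->
    <TargetFramework>net6.0-windows10.0.19041.0</TargetFramework>
  </PropertyGroup>
</Project>
```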

Step 1: Install or update the Windows App SDK

Windows ML is included in the framework-dependent Windows App SDK 1.8 Experimental 4 release.

See Use the Windows App SDK in an existing project to learn how to add the Windows App SDK to your project. If you're already using the Windows App SDK, update your packages to the latest version.
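One way to add the SDK to an existing project is with the .NET CLI. The `--prerelease` flag is needed here because, as noted above, Windows ML ships in an experimental release:

```shell
# Adds the latest prerelease Windows App SDK package to the current project
dotnet add package Microsoft.WindowsAppSDK --prerelease
```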

Step 2: Download and register EPs

The simplest way to get started is to let Windows ML automatically discover, download, and register the latest version of all compatible execution providers. Execution providers must be registered with the ONNX Runtime inside Windows ML before you can use them, and any that haven't been downloaded yet must be downloaded first. Calling EnsureAndRegisterAllAsync() does both in one step.

using Microsoft.ML.OnnxRuntime;
using Microsoft.Windows.AI.MachineLearning;

// First we create a new instance of EnvironmentCreationOptions
EnvironmentCreationOptions envOptions = new()
{
    logId = "WinMLDemo", // Use an ID of your own choice
    logLevel = OrtLoggingLevel.ORT_LOGGING_LEVEL_ERROR
};

// And then use that to create the ORT environment
using var ortEnv = OrtEnv.CreateInstanceWithOptions(ref envOptions);

// Get the default ExecutionProviderCatalog
var catalog = ExecutionProviderCatalog.GetDefault();

// Ensure and register all compatible execution providers with ONNX Runtime
// This downloads any necessary components and registers them
await catalog.EnsureAndRegisterAllAsync();

Tip

In production applications, wrap the EnsureAndRegisterAllAsync() call in a try-catch block to handle potential network or download failures gracefully.
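The tip above can be sketched as follows; the single generic catch block and console logging are illustrative, not an exhaustive treatment of failure modes:

```csharp
try
{
    // Downloads (if needed) and registers all compatible execution providers
    await catalog.EnsureAndRegisterAllAsync();
}
catch (Exception ex)
{
    // Download or registration can fail (for example, due to network issues).
    // Log the failure; inference can still proceed on the default CPU provider.
    Console.WriteLine($"Execution provider registration failed: {ex.Message}");
}
```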

Next steps

After registering execution providers, you're ready to use the ONNX Runtime APIs within Windows ML! You will want to...

  1. Select execution providers - Tell the runtime which execution providers you want to use
  2. Run model inference - Compile, load, and run inference with your model

See also