CREATE EXTERNAL MODEL (Transact-SQL)

Applies to: SQL Server 2025 (17.x) Preview

Creates an external model object that contains the location, authentication method, and purpose of an AI model inference endpoint.

Syntax

Transact-SQL syntax conventions

CREATE EXTERNAL MODEL external_model_object_name
[ AUTHORIZATION owner_name ]
WITH
  (   LOCATION = '<prefix>://<path>[:<port>]'
    , API_FORMAT = '<OpenAI, Azure OpenAI, etc>'
    , MODEL_TYPE = EMBEDDINGS
    , MODEL = 'text-embedding-model-name'
    [ , CREDENTIAL = <credential_name> ]
    [ , PARAMETERS = '{"valid":"JSON"}' ]
    [ , LOCAL_RUNTIME_PATH = 'path to the ONNX runtime files' ]
  );

Arguments

external_model_object_name

Specifies the user-defined name for the external model. The name must be unique within the database.

owner_name

Specifies the name of the user or role that owns the external model. If not specified, ownership is given to the current user. Depending on permissions and roles, explicit permission needs to be granted to users to use specific external models.

LOCATION

Provides the connectivity protocol and path to the AI model inference endpoint.

API_FORMAT

The API message format for the AI model inference endpoint provider.

Accepted values are:

  • Azure OpenAI
  • OpenAI
  • Ollama
  • ONNX Runtime

MODEL_TYPE

The type of model being accessed from the AI model inference endpoint location.

Accepted values are:

  • EMBEDDINGS

MODEL

The specific model hosted by the AI provider. For example, text-embedding-ada-002, text-embedding-3-large, or o3-mini.

CREDENTIAL

Indicates which DATABASE SCOPED CREDENTIAL object is used with the AI model inference endpoint. For more on accepted credential types and naming rules, see sp_invoke_external_rest_endpoint or the Remarks section of this article.

PARAMETERS

A valid JSON string that contains runtime parameters to be appended to the AI model inference endpoint request message. For example:

'{ "dimensions": 1536 }'

LOCAL_RUNTIME_PATH

Specifies the directory on the local SQL Server where the ONNX runtime files are located.

Permissions

External model creation and altering

Requires ALTER ANY EXTERNAL MODEL or CREATE EXTERNAL MODEL database permission.

For example:

GRANT CREATE EXTERNAL MODEL TO [<PRINCIPAL>];

or

GRANT ALTER ANY EXTERNAL MODEL TO [<PRINCIPAL>];

External model grants

To use an external model in an AI function, a principal must be granted the ability to EXECUTE it.

For example:

GRANT EXECUTE ON EXTERNAL MODEL::MODEL_NAME TO [<PRINCIPAL>];
GO

Retry count

If the embeddings call encounters HTTP status codes indicating temporary issues, you can configure the request to automatically retry. To specify the number of retries, add the following JSON to the PARAMETERS on the EXTERNAL MODEL. The <number_of_retries> should be a whole number between zero (0) and ten (10), inclusive, and can't be NULL or negative.

{ "sql_rest_options": { "retry_count": <number_of_retries> } }

For example, to set the retry_count to 3, you would write the following JSON string:

{ "sql_rest_options": { "retry_count": 3 } }

Retry count with other parameters

Retry count can also be combined with other parameters as long as it's a valid JSON string.

{ "dimensions": 725, "sql_rest_options": { "retry_count": 5 } }
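
As a client-side sanity check (not part of SQL Server itself), the constraints above can be sketched in Python. The `validate_parameters` helper is hypothetical; it simply parses the PARAMETERS string and enforces the documented `retry_count` bounds (a whole number from 0 to 10, inclusive):

```python
import json

def validate_parameters(params: str) -> dict:
    """Parse a PARAMETERS JSON string and check retry_count bounds (0-10, inclusive)."""
    parsed = json.loads(params)  # raises ValueError if the string isn't valid JSON
    retry = parsed.get("sql_rest_options", {}).get("retry_count")
    if retry is not None:
        # bool is a subclass of int in Python, so exclude it explicitly
        if isinstance(retry, bool) or not isinstance(retry, int) or not 0 <= retry <= 10:
            raise ValueError("retry_count must be a whole number between 0 and 10, inclusive")
    return parsed

# Valid: dimensions combined with a retry count
validate_parameters('{ "dimensions": 725, "sql_rest_options": { "retry_count": 5 } }')
```

Running such a check before issuing CREATE EXTERNAL MODEL can catch malformed JSON or an out-of-range retry count early, rather than at statement execution.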

Remarks

HTTPS and TLS

Only AI model inference endpoints that are configured to use HTTPS with TLS encryption protocol are supported for the LOCATION parameter.

Accepted API formats and model types

The following sections outline accepted API Formats for each MODEL_TYPE.

API_FORMAT for EMBEDDINGS

The following list outlines the API formats and URL endpoint structures for the EMBEDDINGS model type. For the specific payload structures, refer to the documentation for each API format.

  • Azure OpenAI: https://{endpoint}/openai/deployments/{deployment-id}/embeddings?api-version={date}
  • OpenAI: https://{server_name}/v1/embeddings
  • Ollama: https://localhost:{port}/api/embed
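
To make the URL templates above concrete, here is an illustrative Python sketch that fills in each placeholder; the function names and sample endpoint values are assumptions for illustration, not part of any provider API:

```python
# Sketch: build LOCATION strings for each API_FORMAT from its placeholders.

def azure_openai_location(endpoint: str, deployment: str, api_version: str) -> str:
    # {endpoint}, {deployment-id}, and {date} from the Azure OpenAI template
    return f"https://{endpoint}/openai/deployments/{deployment}/embeddings?api-version={api_version}"

def openai_location(server_name: str) -> str:
    # {server_name} from the OpenAI template
    return f"https://{server_name}/v1/embeddings"

def ollama_location(port: int) -> str:
    # {port} from the Ollama template; Ollama is addressed on localhost
    return f"https://localhost:{port}/api/embed"

azure_openai_location("my-ep.openai.azure.com", "text-embedding-ada-002", "2024-02-01")
```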

Create embedding endpoints

For more information on creating embedding endpoints, see the documentation for the appropriate AI model inference endpoint provider.

Credential name rules for EXTERNAL MODEL

The DATABASE SCOPED CREDENTIAL used by an EXTERNAL MODEL must adhere to the following rules:

  • Must be a valid URL

  • The URL domain must be one of those domains included in the allowlist

  • The URL must not contain a query string

  • Protocol + Fully Qualified Domain Name (FQDN) of the called URL must match Protocol + FQDN of the credential name

  • Each part of the called URL path must match completely with the respective part of URL path in the credential name

  • The credential must point to a path that is more generic than the request URL. For example, a credential created for path https://northwind.azurewebsite.net/customers can't be used for the URL https://northwind.azurewebsite.net
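
The matching rules above can be approximated with a short Python sketch. This is an illustrative model only (the `credential_matches` helper is hypothetical); it ignores the domain allowlist and the database collation refinements described in the next section:

```python
from urllib.parse import urlsplit

def credential_matches(credential_name: str, request_url: str) -> bool:
    """Approximate the EXTERNAL MODEL credential-name rules:
    no query string in the credential name, scheme and host compared
    case-insensitively, path segments compared case-sensitively, and the
    credential path must be no more specific than the request path."""
    cred, req = urlsplit(credential_name), urlsplit(request_url)
    if cred.query:                                    # credential name must not contain a query string
        return False
    if cred.scheme.lower() != req.scheme.lower():     # scheme: case-insensitive
        return False
    if cred.netloc.lower() != req.netloc.lower():     # host (FQDN): case-insensitive
        return False
    cred_parts = [p for p in cred.path.split("/") if p]
    req_parts = [p for p in req.path.split("/") if p]
    # each credential path segment must match the request path exactly (case-sensitive),
    # and the credential path must be a prefix of (more generic than) the request path
    return cred_parts == req_parts[:len(cred_parts)]

# A credential for /customers is more specific than the site root, so it doesn't match:
credential_matches("https://northwind.azurewebsite.net/customers",
                   "https://northwind.azurewebsite.net")  # False
```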

Collation and credential name rules

RFC 3986 Section 6.2.2.1 states that "When a URI uses components of the generic syntax, the component syntax equivalence rules always apply; namely, that the scheme and host are case-insensitive," and RFC 7230 Section 2.7.3 states that "all other components are compared in a case-sensitive manner."

Because a collation rule is set at the database level, the following logic is applied to stay coherent with both the database collation rule and the RFCs mentioned previously. (The described rule could be more restrictive than the RFC rules, for example, if the database uses a case-sensitive collation.)

  1. Check if the URL and credential match using the RFC, which means:

    • Check the scheme and host using a case-insensitive collation (Latin1_General_100_CI_AS_KS_WS_SC)
    • Check all other segments of the URL are compared in a case-sensitive collation (Latin1_General_100_BIN2)
  2. Check that the URL and credential match using the database collation rules (and without doing any URL encoding).

Managed Identity

To use Managed Identity for authentication on SQL Server 2025, you must enable the option by using sp_configure with a user that is granted the ALTER SETTINGS server-level permission.

EXECUTE sp_configure 'allow server scoped db credentials', 1;
RECONFIGURE WITH OVERRIDE;

SCHEMABINDING

The Database Engine prevents dropping an EXTERNAL MODEL that is referenced by a view created with SCHEMABINDING (such as a SELECT statement using AI_GENERATE_EMBEDDINGS) and raises an error. The view definition must first be modified or dropped to remove dependencies on the EXTERNAL MODEL.

Catalog View

External model metadata can be viewed by querying the sys.external_models catalog view. You must have access to a model to view its metadata.

SELECT * FROM sys.external_models;

Examples with remote endpoints

Create an EXTERNAL MODEL with Azure OpenAI using Managed Identity

This example creates an EXTERNAL MODEL of the EMBEDDINGS type using Azure OpenAI and uses Managed Identity for authentication.

Important

If using Managed Identity with Azure OpenAI and SQL Server 2025, the Cognitive Services OpenAI Contributor role must be granted to SQL Server's system-assigned managed identity enabled by Azure Arc. For more information, see Role-based access control for Azure OpenAI in Azure AI Foundry Models.

-- Create access credentials to Azure OpenAI using a managed identity:
CREATE DATABASE SCOPED CREDENTIAL [https://my-azure-openai-endpoint.openai.azure.com/]
    WITH IDENTITY = 'Managed Identity', secret = '{"resourceid":"https://cognitiveservices.azure.com"}';
GO

-- Create the EXTERNAL MODEL
CREATE EXTERNAL MODEL MyAzureOpenAiModel
AUTHORIZATION CRM_User
WITH (
      LOCATION = 'https://my-azure-openai-endpoint.openai.azure.com/openai/deployments/text-embedding-ada-002/embeddings?api-version=2024-02-01',
      API_FORMAT = 'Azure OpenAI',
      MODEL_TYPE = EMBEDDINGS,
      MODEL = 'text-embedding-ada-002',
      CREDENTIAL = [https://my-azure-openai-endpoint.openai.azure.com/]
);

Create an EXTERNAL MODEL with Azure OpenAI using API keys and parameters

This example creates an EXTERNAL MODEL of the EMBEDDINGS type using Azure OpenAI and uses API Keys for authentication. The example also uses PARAMETERS to set the dimensions parameter at the endpoint to 725.

-- Create access credentials to Azure OpenAI using a key:
CREATE DATABASE SCOPED CREDENTIAL [https://my-azure-openai-endpoint.openai.azure.com/]
    WITH IDENTITY = 'HTTPEndpointHeaders', secret = '{"api-key":"YOUR_AZURE_OPENAI_KEY"}';
GO

-- Create the EXTERNAL MODEL
CREATE EXTERNAL MODEL MyAzureOpenAiModel
AUTHORIZATION CRM_User
WITH (
      LOCATION = 'https://my-azure-openai-endpoint.openai.azure.com/openai/deployments/text-embedding-3-small/embeddings?api-version=2024-02-01',
      API_FORMAT = 'Azure OpenAI',
      MODEL_TYPE = EMBEDDINGS,
      MODEL = 'text-embedding-3-small',
      CREDENTIAL = [https://my-azure-openai-endpoint.openai.azure.com/],
      PARAMETERS = '{"dimensions":725}'
);

Create an EXTERNAL MODEL with Ollama and an explicit owner

This example creates an EXTERNAL MODEL of the EMBEDDINGS type using Ollama hosted locally for development purposes.

CREATE EXTERNAL MODEL MyOllamaModel
AUTHORIZATION AI_User
WITH (
      LOCATION = 'https://localhost:11435/api/embed',
      API_FORMAT = 'Ollama',
      MODEL_TYPE = EMBEDDINGS,
      MODEL = 'all-minilm'
);

Create an EXTERNAL MODEL with OpenAI

This example creates an EXTERNAL MODEL of the EMBEDDINGS type using the OpenAI API_FORMAT and HTTP header based credentials for authentication.

-- Create access credentials
CREATE DATABASE SCOPED CREDENTIAL [https://openai.com]
WITH IDENTITY = 'HTTPEndpointHeaders', secret = '{"Bearer":"YOUR_OPENAI_KEY"}';
GO

-- Create the external model
CREATE EXTERNAL MODEL MyAzureOpenAiModel
AUTHORIZATION CRM_User
WITH (
      LOCATION = 'https://api.openai.com/v1/embeddings',
      API_FORMAT = 'OpenAI',
      MODEL_TYPE = EMBEDDINGS,
      MODEL = 'text-embedding-ada-002',
      CREDENTIAL = [https://openai.com]
);

Example with a local ONNX runtime

This example guides you through setting up SQL Server 2025 with an ONNX runtime to enable local AI-powered text embedding generation. This example applies only to Windows.

ONNX Runtime is an open-source inference engine that allows you to run machine learning models locally, making it ideal for integrating AI capabilities into SQL Server environments.

Important

This feature requires that SQL Server Machine Learning Services is installed.

Step 1: Enable developer preview features on SQL Server 2025

Run the following SQL command to enable SQL Server 2025 preview features in the database you would like to use for this example:

ALTER DATABASE SCOPED CONFIGURATION  
SET PREVIEW_FEATURES = ON;

Step 2: Enable the local AI runtime on SQL Server 2025

Enable external AI runtimes by running the following SQL:

EXEC sp_configure 'external AI runtimes enabled', 1;  
RECONFIGURE WITH OVERRIDE;

Step 3: Set up the ONNX runtime library

Create a directory on the SQL Server to hold the ONNX runtime library files. In this example, C:\onnx_runtime is used.

You can use the following commands to create the directory:

cd C:\
mkdir onnx_runtime

Next, download the ONNX Runtime (version 1.19 or later) that is appropriate for your operating system. After unzipping the download, copy onnxruntime.dll (located in the lib directory) to the C:\onnx_runtime directory that you created.

Step 4: Set up the tokenization library

Download and build the tokenizers-cpp library from GitHub. Once the DLL is built, place it in the C:\onnx_runtime directory.

Note

Ensure the built DLL is named tokenizers_cpp.dll.

Step 5: Download the ONNX model

Start by creating the model directory in C:\onnx_runtime\.

cd C:\onnx_runtime
mkdir model

This example uses the all-MiniLM-L6-v2-onnx model, which can be downloaded from Hugging Face.

Clone the repository into the C:\onnx_runtime\model directory with the following git command:

If Git isn't installed, you can download it from the Git website or install it via winget (winget install Microsoft.Git).

cd C:\onnx_runtime\model
git clone https://huggingface.co/nsense/all-MiniLM-L6-v2-onnx

Step 6: Set directory permissions

Use the following PowerShell script to provide the MSSQLLaunchpad user access to the ONNX runtime directory:

$AIExtPath = "C:\onnx_runtime";  
$Acl = Get-Acl -Path $AIExtPath  
$AccessRule = New-Object System.Security.AccessControl.FileSystemAccessRule("MSSQLLaunchpad", "FullControl", "ContainerInherit,ObjectInherit", "None","Allow")  
$Acl.AddAccessRule($AccessRule)  
Set-Acl -Path $AIExtPath -AclObject $Acl

Step 7: Create the external model

Run the following SQL to register your ONNX model as an external model object:

The 'PARAMETERS' value used here is a placeholder needed for SQL Server 2025 RC 0.

CREATE EXTERNAL MODEL myLocalOnnxModel  
WITH (  
  LOCATION = 'C:\onnx_runtime\model\all-MiniLM-L6-v2-onnx',  
  API_FORMAT = 'ONNX Runtime',  
  MODEL_TYPE = EMBEDDINGS,  
  MODEL = 'allMiniLM',  
  PARAMETERS = '{"valid":"JSON"}',  
  LOCAL_RUNTIME_PATH = 'C:\onnx_runtime\'  
);

Important

LOCATION should point to the directory containing the model.onnx and tokenizer.json files.

LOCAL_RUNTIME_PATH should point to the directory containing the onnxruntime.dll and tokenizers_cpp.dll files.

Step 8: Generate embeddings

Use the ai_generate_embeddings function to test the model by running the following SQL:

SELECT ai_generate_embeddings (N'Test Text' USE MODEL myLocalOnnxModel);

This command launches the AIRuntimeHost, loads the required DLLs, and processes the input text.

The result from the SQL statement is an array of embeddings:

[0.320098,0.568766,0.154386,0.205526,-0.027379,-0.149689,-0.022946,-0.385856,-0.039183...]
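
Embedding vectors like the one above are typically compared by cosine similarity. As a hedged illustration of how a client might consume the returned array (this helper is not part of SQL Server; it's a plain Python sketch):

```python
import math

def cosine_similarity(a, b):
    """Compare two embedding vectors: 1.0 means identical direction, 0.0 means orthogonal."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Vectors pointing the same way score 1.0; orthogonal vectors score 0.0
cosine_similarity([1.0, 0.0], [1.0, 0.0])
```

In practice you would pass two embeddings generated by the model (for example, one for a stored document and one for a search query) and rank results by the similarity score.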

Enable XEvent telemetry

Run the following SQL to enable telemetry for troubleshooting.

CREATE EVENT SESSION newevt
ON SERVER
ADD EVENT ai_generate_embeddings_airuntime_trace
(
    ACTION (sqlserver.sql_text, sqlserver.session_id)
)
ADD TARGET package0.ring_buffer
WITH (MAX_MEMORY = 4096 KB, EVENT_RETENTION_MODE = ALLOW_SINGLE_EVENT_LOSS, MAX_DISPATCH_LATENCY = 30 SECONDS, TRACK_CAUSALITY = ON, STARTUP_STATE = OFF);
GO

ALTER EVENT SESSION newevt ON SERVER STATE = START;
GO

Next, use this SQL query to see the captured telemetry:

SELECT  
    event_data.value('(@name)[1]', 'varchar(100)') AS event_name,
    event_data.value('(@timestamp)[1]', 'datetime2') AS [timestamp],
    event_data.value('(data[@name="model_name"]/value)[1]', 'nvarchar(200)') AS model_name,
    event_data.value('(data[@name="phase_name"]/value)[1]', 'nvarchar(100)') AS phase,
    event_data.value('(data[@name="message"]/value)[1]', 'nvarchar(max)') AS message,
    event_data.value('(data[@name="request_id"]/value)[1]', 'nvarchar(max)') AS session_id,
    event_data.value('(data[@name="error_code"]/value)[1]', 'bigint') AS error_code
FROM (
    SELECT CAST(target_data AS XML) AS target_data
    FROM sys.dm_xe_sessions AS s
    JOIN sys.dm_xe_session_targets AS t  
        ON s.address = t.event_session_address
    WHERE s.name = 'newevt'
      AND t.target_name = 'ring_buffer'
) AS data
CROSS APPLY target_data.nodes('//RingBufferTarget/event') AS XEvent(event_data);

Clean up

To remove the external model object, run the following SQL:

DROP EXTERNAL MODEL myLocalOnnxModel;

To remove the directory permissions, run the following PowerShell commands:

$Acl.RemoveAccessRule($AccessRule)  
Set-Acl -Path $AIExtPath -AclObject $Acl

Finally, delete the C:\onnx_runtime directory.