APPLIES TO:
Azure CLI ml extension v2 (current)
Python SDK azure-ai-ml v2 (current)
Deploy an ONNX model to an Azure Machine Learning managed online endpoint by using NVIDIA Triton Inference Server for optimized, no-code inference. Triton handles model serving for popular frameworks like TensorFlow, ONNX Runtime, PyTorch, and NVIDIA TensorRT, and you can use it for CPU or GPU workloads.
There are two approaches for deploying Triton models to online endpoints:
- **No-code deployment**: Bring only Triton models. No scoring script or custom environment is required.
- **Full-code deployment (bring your own container)**: Get full control over the Triton Inference Server configuration.
For both options, Triton Inference Server performs inferencing based on the Triton model repository structure defined by NVIDIA. You can use ensemble models for more advanced scenarios. Azure Machine Learning supports Triton in both managed online endpoints and Kubernetes online endpoints.
This article walks through no-code deployment by using the Azure CLI, Python SDK v2, and Azure Machine Learning studio. For full-code deployment with a custom Triton container, see Use a custom container to deploy a model and the BYOC example for Triton (deployment definition and end-to-end script).
Note
Use of the NVIDIA Triton Inference Server container is governed by the NVIDIA AI Enterprise Software license agreement, and you can use the container for 90 days without an enterprise product subscription. For more information, see NVIDIA AI Enterprise on Azure Machine Learning.
Prerequisites
- The Azure CLI and the `ml` extension to the Azure CLI, installed and configured. For more information, see Install and set up the CLI (v2).
- A Bash shell or a compatible shell, for example, a shell on a Linux system or Windows Subsystem for Linux. The Azure CLI examples in this article assume that you use this type of shell.
- An Azure Machine Learning workspace. For instructions to create a workspace, see Set up.
- Your Azure account must have the Owner or Contributor role on the Azure Machine Learning workspace, or a custom role that allows `Microsoft.MachineLearningServices/workspaces/onlineEndpoints/*`. For more information, see Manage access to Azure Machine Learning workspaces.
- A working Python 3.10 or later environment.
You must also have the following Python packages installed for scoring:
- NumPy. An array and numerical computing library.
- Triton Inference Server Client. Facilitates requests to the Triton Inference Server.
- Pillow. A library for image operations.
- Gevent. A networking library used for connecting to the Triton server.
```shell
pip install numpy
pip install tritonclient[http]
pip install pillow
pip install gevent
```

- Access to NCasT4_v3-series VMs for your Azure subscription.
Important
You might need to request a quota increase for your subscription before you can use this series of VMs. For more information, see NCasT4_v3-series.
Triton model repository structure
NVIDIA Triton Inference Server requires a specific model repository structure: a directory for each model, with subdirectories for the model versions. The contents of each model version subdirectory are determined by the type of the model and the requirements of the backend that supports the model. For information about the structure for all models, see Model Files.
The example in this article uses a model stored in ONNX format. The model repository follows this structure:
```
models/
└── model_1/
    └── 1/
        └── model.onnx
```

For no-code deployment, Triton autogenerates the model configuration (`config.pbtxt`). If you need to customize the configuration, use full-code deployment with a custom container instead.
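Although no-code deployment generates the configuration for you, it can help to know roughly what a `config.pbtxt` looks like. The following is a hypothetical sketch for an ONNX image-classification model; the tensor names, dimensions, and data types are illustrative, not the values Triton generates for this specific DenseNet model:

```protobuf
name: "model_1"
platform: "onnxruntime_onnx"   # ONNX Runtime backend
max_batch_size: 0              # model handles its own batch dimension
input [
  {
    name: "data_0"             # illustrative input tensor name
    data_type: TYPE_FP32
    dims: [ 3, 224, 224 ]
  }
]
output [
  {
    name: "fc6_1"              # illustrative output tensor name
    data_type: TYPE_FP32
    dims: [ 1000 ]
  }
]
```

If the autogenerated configuration doesn't match your model's needs, that's the signal to switch to full-code deployment, where you supply this file yourself.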
The information in this article is based on code samples contained in the azureml-examples repository. To run the commands locally without having to copy/paste YAML and other files, clone the repo and then change directories to the cli directory in the repo:
```shell
git clone https://github.com/Azure/azureml-examples --depth 1
cd azureml-examples
cd cli
```
If you haven't already set the defaults for the Azure CLI, save your default settings. To avoid passing in the values for your subscription, workspace, and resource group multiple times, use the following commands. Replace the following parameters with values for your specific configuration:
- Replace `<subscription>` with your Azure subscription ID.
- Replace `<workspace>` with your Azure Machine Learning workspace name.
- Replace `<resource-group>` with the Azure resource group that contains your workspace.
- Replace `<location>` with the Azure region that contains your workspace.
Tip
You can see what your current defaults are by using the `az configure -l` command.
```shell
az account set --subscription <subscription>
az configure --defaults workspace=<workspace> group=<resource-group> location=<location>
```
Define the deployment configuration
Configure the endpoint and deployment resources that define how Triton serves your model. The endpoint specifies the name and authentication mode, while the deployment defines the model, VM type, and instance count.
Tip
This example uses key-based authentication for simplicity. For production deployments, Microsoft recommends Microsoft Entra token-based authentication (aad_token), which provides enhanced security through identity-based access control. For more information, see Authenticate clients for online endpoints.
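An endpoint that uses Microsoft Entra token-based authentication differs only in the `auth_mode` field. A minimal sketch of such an endpoint YAML (the endpoint name here is a placeholder):

```yaml
$schema: https://azuremlschemas.azureedge.net/latest/managedOnlineEndpoint.schema.json
name: my-entra-endpoint   # placeholder name
auth_mode: aad_token      # Microsoft Entra token-based authentication
```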
APPLIES TO:
Azure CLI ml extension v2 (current)
Important
For Triton no-code deployment, testing via local endpoints isn't currently supported.
1. Set a `BASE_PATH` environment variable. This variable points to the directory where the model and associated YAML configuration files are located:

   ```shell
   BASE_PATH=endpoints/online/triton/single-model
   ```

1. Set the name of the endpoint. In this example, a random name is created for the endpoint:

   ```shell
   export ENDPOINT_NAME=triton-single-endpt-`echo $RANDOM`
   ```

1. Create a YAML configuration file for your endpoint. The following example configures the name and authentication mode of the endpoint. The file is located at `/cli/endpoints/online/triton/single-model/create-managed-endpoint.yaml` in the azureml-examples repo you cloned earlier:

   **create-managed-endpoint.yaml**

   ```yaml
   $schema: https://azuremlschemas.azureedge.net/latest/managedOnlineEndpoint.schema.json
   name: my-endpoint
   auth_mode: aml_token
   ```

1. Create a YAML configuration file for the deployment. The following example configures a deployment named `blue` to the endpoint. The file is located at `/cli/endpoints/online/triton/single-model/create-managed-deployment.yaml` in the azureml-examples repo:

   Important

   For Triton no-code deployment to work, set `type` to `triton_model`. For more information, see CLI (v2) model YAML schema.

   This deployment uses a Standard_NC4as_T4_v3 VM. You might need to request a quota increase for your subscription before you can use this VM. For more information, see NCasT4_v3-series.

   **create-managed-deployment.yaml**

   ```yaml
   $schema: https://azuremlschemas.azureedge.net/latest/managedOnlineDeployment.schema.json
   name: blue
   endpoint_name: my-endpoint
   model:
     name: sample-densenet-onnx-model
     version: 1
     path: ./models
     type: triton_model
   instance_count: 1
   instance_type: STANDARD_NC4AS_T4_V3
   ```
Deploy to Azure
Create the endpoint and deployment resources in Azure by using the configuration from the previous section.
APPLIES TO:
Azure CLI ml extension v2 (current)
1. Create the endpoint by using the YAML configuration:

   ```shell
   az ml online-endpoint create -n $ENDPOINT_NAME -f $BASE_PATH/create-managed-endpoint.yaml
   ```

1. Create the deployment by using the YAML configuration:

   ```shell
   az ml online-deployment create --name blue --endpoint $ENDPOINT_NAME -f $BASE_PATH/create-managed-deployment.yaml --all-traffic
   ```
Test the endpoint
After deployment finishes, send a scoring request to verify that the endpoint returns predictions. Triton uses the Triton client protocol instead of standard REST JSON, so you score with the tritonclient library on the client side.
APPLIES TO:
Azure CLI ml extension v2 (current)
Tip
The file /cli/endpoints/online/triton/single-model/triton_densenet_scoring.py in the azureml-examples repo is used for scoring. The image you pass to the endpoint needs preprocessing to meet the size, type, and format requirements, and post-processing to show the predicted label. The triton_densenet_scoring.py file uses the tritonclient.http library to communicate with the Triton inference server. This file runs on the client side.
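The kind of preprocessing the scoring script performs can be sketched as follows. This is a simplified, hypothetical version that assumes the model expects a 224 x 224 RGB image in CHW layout with ImageNet-style normalization; check triton_densenet_scoring.py in the repo for the exact transform it applies:

```python
import numpy as np
from PIL import Image


def preprocess(image_path: str) -> np.ndarray:
    """Resize, normalize, and reorder an image for a DenseNet-style model.

    Assumes a 224x224 RGB input in CHW layout with ImageNet mean/std
    normalization; the real scoring script may differ in its details.
    """
    img = Image.open(image_path).convert("RGB").resize((224, 224))
    arr = np.asarray(img, dtype=np.float32) / 255.0   # HWC, values in [0, 1]
    mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)
    std = np.array([0.229, 0.224, 0.225], dtype=np.float32)
    arr = (arr - mean) / std                          # channel-wise normalization
    return arr.transpose(2, 0, 1)                     # HWC -> CHW


if __name__ == "__main__":
    # Create a dummy image in memory instead of reading peacock.jpg.
    Image.new("RGB", (640, 480), color=(0, 128, 255)).save("/tmp/sample.jpg")
    tensor = preprocess("/tmp/sample.jpg")
    print(tensor.shape)  # (3, 224, 224)
```

The resulting array would then be wrapped in a tritonclient `InferInput` and sent to the endpoint; the tritonclient packages you installed in the prerequisites handle that protocol.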
1. Get the endpoint scoring URI:

   ```shell
   scoring_uri=$(az ml online-endpoint show -n $ENDPOINT_NAME --query scoring_uri -o tsv)
   scoring_uri=${scoring_uri%/*}
   ```

1. Get an authentication token:

   ```shell
   auth_token=$(az ml online-endpoint get-credentials -n $ENDPOINT_NAME --query accessToken -o tsv)
   ```

1. Score data with the endpoint. This command submits the image of a peacock to the endpoint:

   ```shell
   python $BASE_PATH/triton_densenet_scoring.py --base_url=$scoring_uri --token=$auth_token --image_path $BASE_PATH/data/peacock.jpg
   ```

   The response from the script is similar to the following output:

   ```
   Is server ready - True
   Is model ready - True
   /azureml-examples/cli/endpoints/online/triton/single-model/densenet_labels.txt
   84 : PEACOCK
   ```
Delete the endpoint and model
APPLIES TO:
Azure CLI ml extension v2 (current)
1. When you're done with the endpoint, delete it:

   ```shell
   az ml online-endpoint delete -n $ENDPOINT_NAME --yes
   ```

1. Archive your model:

   ```shell
   az ml model archive --name sample-densenet-onnx-model --version 1
   ```
Related content
- Use a custom container to deploy a model to an online endpoint
- Deploy models with REST
- Deploy and score a machine learning model by using an online endpoint
- Safe rollout for online endpoints
- Autoscale managed online endpoints
- View costs for an Azure Machine Learning managed online endpoint
- Access Azure resources from an online endpoint with a managed identity
- Troubleshoot online endpoint deployment and scoring
- NVIDIA Triton Inference Server model repository documentation