Production machine learning requires more than training a good model — you need reliable workflows to move models from development through validation into production. Cross-workspace logging in Microsoft Fabric enables two key scenarios:
Build end-to-end MLOps workflows. Train and experiment in a development workspace, validate in a test workspace, and deploy to a production serving workspace — all using standard MLflow APIs. This separation of environments helps teams enforce quality gates and maintain clear audit trails from experimentation to production.
Bring existing ML assets into Fabric. If you already have trained models in Azure Databricks, Azure Machine Learning, a local environment, or any other platform that supports MLflow, you can log those experiments and models directly into a Fabric workspace. This makes it easy to consolidate your ML artifacts in one place without rebuilding your training pipelines.
Cross-workspace logging works through the synapseml-mlflow package, which provides a Fabric-compatible MLflow tracking plugin. You authenticate with your target workspace, set the tracking URI, and use standard MLflow commands — the same code you already know.
Note
Cross-workspace logging focuses on the code-first experience. UI integration for cross-workspace scenarios will be addressed in a future release.
Prerequisites
- A Microsoft Fabric subscription, or sign up for a free Microsoft Fabric trial.
- WRITE permission on the target Fabric workspace.
- Upgrade your machine learning tracking system for both source and target workspaces.
- For Fabric notebook scenarios: Create a new notebook and attach a Lakehouse before running code.
Tip
Cross-workspace logging is supported in workspaces with outbound access protection (OAP) enabled. From an OAP-enabled workspace, logging to a different workspace requires a cross-workspace managed private endpoint. Logging within the same workspace and logging from outside Fabric work without additional configuration.
Install the MLflow plugin
The synapseml-mlflow package enables cross-workspace logging by providing the Fabric MLflow tracking plugin. Choose the install command based on your environment:
Important
MLflow 3 is not yet supported. You must pin mlflow-skinny to version 2.22.2 or earlier.
Use this command in a Fabric notebook to install the package with online notebook dependencies.
%pip install -U "synapseml-mlflow[online-notebook]" "mlflow-skinny<=2.22.2"
After installation, restart the kernel before running the remaining code.
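After restarting, you can sanity-check that the installed mlflow-skinny version satisfies the pin. This is an optional sketch; the `version_ok` helper is our own and assumes plain numeric versions (no pre-release suffixes).

```python
import importlib.metadata

def version_ok(version: str, maximum: str = "2.22.2") -> bool:
    """Return True if `version` is at or below `maximum` (numeric compare)."""
    def parse(v: str):
        return tuple(int(p) for p in v.split(".")[:3])
    return parse(version) <= parse(maximum)

try:
    installed = importlib.metadata.version("mlflow-skinny")
    status = "OK" if version_ok(installed) else "too new - pin to <=2.22.2"
    print(f"mlflow-skinny {installed}: {status}")
except importlib.metadata.PackageNotFoundError:
    print("mlflow-skinny is not installed")
```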
Log MLflow objects to another Fabric workspace
In this scenario, you run a notebook in one Fabric workspace (source) and log experiments and models to a different Fabric workspace (target).
Set the target workspace
Set the MLFLOW_TRACKING_URI environment variable to point to your target workspace:
import os
target_workspace_id = "<your-target-workspace-id>"
target_uri = f"sds://api.fabric.microsoft.com/v1/workspaces/{target_workspace_id}/mlflow"
os.environ["MLFLOW_TRACKING_URI"] = target_uri
Log experiments and models
Create an experiment and log a run with parameters, metrics, and a model:
import mlflow
import mlflow.sklearn
import numpy as np
from sklearn.linear_model import LogisticRegression
from mlflow.models.signature import infer_signature
# Create or set the experiment in the target workspace
EXP_NAME = "my-cross-workspace-experiment"
MODEL_NAME = "my-cross-workspace-model"
mlflow.set_experiment(EXP_NAME)
with mlflow.start_run() as run:
    lr = LogisticRegression()
    X = np.array([-2, -1, 0, 1, 2, 1]).reshape(-1, 1)
    y = np.array([0, 0, 1, 1, 1, 0])
    lr.fit(X, y)
    score = lr.score(X, y)
    signature = infer_signature(X, y)
    mlflow.log_params({
        "objective": "classification",
        "learning_rate": 0.05,
    })
    mlflow.log_metric("score", score)
    mlflow.sklearn.log_model(lr, "model", signature=signature)
    mlflow.register_model(
        f"runs:/{run.info.run_id}/model",
        MODEL_NAME
    )
After the run completes, the experiment and registered model appear in the target workspace.
Move MLflow objects between Fabric workspaces
In this scenario, you first log objects in the source workspace, then download the artifacts and re-log them to the target workspace. This is useful when you need to promote a trained model from a development workspace to a production workspace.
Step 1: Log objects in the source workspace
import mlflow
import mlflow.sklearn
import numpy as np
from sklearn.linear_model import LogisticRegression
from mlflow.models.signature import infer_signature
# Log to the current (source) workspace
EXP_NAME = "source-experiment"
mlflow.set_experiment(EXP_NAME)
with mlflow.start_run() as run:
    lr = LogisticRegression()
    X = np.array([-2, -1, 0, 1, 2, 1]).reshape(-1, 1)
    y = np.array([0, 0, 1, 1, 1, 0])
    lr.fit(X, y)
    signature = infer_signature(X, y)
    mlflow.sklearn.log_model(lr, "model", signature=signature)
    source_run_id = run.info.run_id
Step 2: Download artifacts from the source run
import mlflow.artifacts
# Download the model artifacts locally
local_artifact_path = mlflow.artifacts.download_artifacts(
    run_id=source_run_id,
    artifact_path="model"
)
Step 3: Re-log artifacts to the target workspace
import os
target_workspace_id = "<your-target-workspace-id>"
target_uri = f"sds://api.fabric.microsoft.com/v1/workspaces/{target_workspace_id}/mlflow"
os.environ["MLFLOW_TRACKING_URI"] = target_uri
TARGET_EXP_NAME = "promoted-experiment"
TARGET_MODEL_NAME = "promoted-model"
mlflow.set_experiment(TARGET_EXP_NAME)
with mlflow.start_run() as run:
    mlflow.log_artifacts(local_artifact_path, "model")
    mlflow.register_model(
        f"runs:/{run.info.run_id}/model",
        TARGET_MODEL_NAME
    )
Log MLflow objects from outside Fabric
You can log MLflow experiments and models to a Fabric workspace from any environment where you build your models, including:
- Local machines — VS Code, Jupyter notebooks, or any local Python environment.
- Azure Databricks — Databricks notebooks and jobs.
- Azure Machine Learning — Azure ML compute instances and pipelines.
- Any other platform — Any environment that supports Python and MLflow.
Step 1: Install the package
Install the synapseml-mlflow package in your environment:
pip install -U "synapseml-mlflow" "mlflow-skinny<=2.22.2"
Step 2: Authenticate with Fabric
Choose an authentication method based on your environment:
Use this method for local development environments with browser access, such as VS Code or Jupyter.
from fabric.analytics.environment.credentials import SetFabricAnalyticsDefaultTokenCredentialsGlobally
from azure.identity import DefaultAzureCredential
SetFabricAnalyticsDefaultTokenCredentialsGlobally(
    credential=DefaultAzureCredential(exclude_interactive_browser_credential=False)
)
Step 3: Set target workspace and log MLflow objects
After authentication, set the tracking URI to point to your target Fabric workspace and log experiments and models using standard MLflow APIs:
import os
import mlflow
import mlflow.sklearn
import numpy as np
from sklearn.linear_model import LogisticRegression
from mlflow.models.signature import infer_signature
target_workspace_id = "<your-target-workspace-id>"
target_uri = f"sds://api.fabric.microsoft.com/v1/workspaces/{target_workspace_id}/mlflow"
os.environ["MLFLOW_TRACKING_URI"] = target_uri
EXP_NAME = "external-experiment"
MODEL_NAME = "external-model"
mlflow.set_experiment(EXP_NAME)
with mlflow.start_run() as run:
    lr = LogisticRegression()
    X = np.array([-2, -1, 0, 1, 2, 1]).reshape(-1, 1)
    y = np.array([0, 0, 1, 1, 1, 0])
    lr.fit(X, y)
    signature = infer_signature(X, y)
    mlflow.log_metric("score", lr.score(X, y))
    mlflow.sklearn.log_model(lr, "model", signature=signature)
    mlflow.register_model(
        f"runs:/{run.info.run_id}/model",
        MODEL_NAME
    )
Use cross-workspace logging with outbound access protection
If your workspace has outbound access protection enabled, cross-workspace logging requires a cross-workspace managed private endpoint from the source workspace to the target workspace. Logging within the same workspace and logging from outside Fabric (local machines, Azure Databricks, Azure Machine Learning) work without additional configuration.
For details on supported scenarios and required configuration, see Workspace outbound access protection for Data Science.
Install the package in an OAP-enabled workspace
The standard %pip install command requires outbound internet access, which is blocked in OAP-enabled workspaces. To install the synapseml-mlflow package, first download it from a non-OAP environment, then upload it to the lakehouse.
Step 1: Download the package from a machine or workspace that has internet access.
%pip download "synapseml-mlflow[online-notebook]"
Step 2: Upload the downloaded files to the lakehouse in your OAP-enabled workspace. Upload all .whl files to the Files section of the lakehouse (for example, /lakehouse/default/Files).
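If you downloaded the wheels to a local folder, a small stdlib sketch can copy every .whl file into the mounted lakehouse path from a notebook. The `copy_wheels` helper and the source folder name are our own illustrations; `/lakehouse/default/Files` is the default lakehouse mount mentioned above.

```python
import pathlib
import shutil

def copy_wheels(src_dir: str, dest_dir: str) -> list:
    """Copy every .whl file from src_dir into dest_dir; return copied file names."""
    dest = pathlib.Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    copied = []
    for wheel in sorted(pathlib.Path(src_dir).glob("*.whl")):
        shutil.copy2(wheel, dest / wheel.name)
        copied.append(wheel.name)
    return copied

# Example (paths are illustrative):
# copy_wheels("./downloaded-wheels", "/lakehouse/default/Files")
```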
Step 3: Install from the lakehouse path in your Fabric notebook.
%pip install --no-index --find-links=/lakehouse/default/Files "synapseml-mlflow[online-notebook]>2.0.0" "mlflow-skinny<=2.22.2" --pre
Known limitations
- WRITE permission required. You must have WRITE permission on the target workspace.
- Cross-workspace lineage not supported. You can't view relationships between notebooks, experiments, and models when these objects are logged from different workspaces.
- Source notebook not visible in target workspace. The source notebook doesn't appear in the target workspace. On artifact details and list pages, the link to the source notebook is empty.
- Item snapshots not supported. ML experiments or models logged to another workspace don't appear in the source run notebook item snapshot.
- Large language models not supported. Cross-workspace logging doesn't support large language models (LLMs).
Related content
- Machine learning experiments in Microsoft Fabric
- Track and manage machine learning models
- Upgrade your machine learning tracking system
- Autologging in Microsoft Fabric
- Machine learning experiments and models Git integration and deployment pipelines
- Workspace outbound access protection for Data Science