Logging and visualizing experiments with MLflow.

The MLflow Experiment Tracker is an Experiment Tracker flavor provided with the MLflow ZenML integration that uses the MLflow tracking service to log and visualize information from your pipeline steps (e.g. models, parameters, metrics).

When would you want to use it?

MLflow Tracking is a very popular tool that you would normally use in the iterative ML experimentation phase to track and visualize experiment results. That doesn't mean that it cannot be repurposed to track and visualize the results produced by your automated pipeline runs, as you make the transition toward a more production-oriented workflow.

You should use the MLflow Experiment Tracker:

  • if you have already been using MLflow to track experiment results for your project and would like to continue doing so as you are incorporating MLOps workflows and best practices in your project through ZenML.

  • if you are looking for a more visually interactive way of navigating the results produced from your ZenML pipeline runs (e.g. models, metrics, datasets).

  • if you or your team already have a shared MLflow Tracking service deployed somewhere on-premise or in the cloud, and you would like to connect ZenML to it to share the artifacts and metrics logged by your pipelines.

You should consider one of the other Experiment Tracker flavors if you have never worked with MLflow before and would rather use another experiment tracking tool that you are more familiar with.

How do you deploy it?

The MLflow Experiment Tracker flavor is provided by the MLflow ZenML integration. You need to install it on your local machine to be able to register an MLflow Experiment Tracker and add it to your stack:

zenml integration install mlflow -y

The MLflow Experiment Tracker can be configured to accommodate the following MLflow deployment scenarios:

  • Scenario 1: This scenario requires that you use a local Artifact Store alongside the MLflow Experiment Tracker in your ZenML stack. The local Artifact Store comes with limitations regarding what other types of components you can use in the same stack. This scenario should only be used to run ZenML locally and is not suitable for collaborative and production settings. No parameters need to be supplied when configuring the MLflow Experiment Tracker, e.g.:

# Register the MLflow experiment tracker
zenml experiment-tracker register mlflow_experiment_tracker --flavor=mlflow

# Register and set a stack with the new experiment tracker
zenml stack register custom_stack -e mlflow_experiment_tracker ... --set

  • Scenario 5: This scenario assumes that you have already deployed an MLflow Tracking Server enabled with proxied artifact storage access. There is no restriction regarding what other types of components it can be combined with. This option requires authentication-related parameters to be configured for the MLflow Experiment Tracker, as shown in the sketch below.
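
For example, a remote tracker for this scenario might be registered as follows (a minimal sketch; the URI and token placeholders are stand-ins for the values of your own MLflow deployment):

# Register the MLflow experiment tracker against a remote tracking server
zenml experiment-tracker register remote_mlflow_tracker --flavor=mlflow \
    --tracking_uri=<REMOTE_TRACKING_SERVER_URI> --tracking_token=<TOKEN>

The authentication-related parameters are described in detail in the Authentication Methods section below.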

Due to a critical-severity vulnerability found in older versions of MLflow, we recommend using MLflow version 2.2.1 or higher.
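
If you manage the MLflow package yourself, one simple way to enforce this is to pin the version at install time, e.g.:

pip install "mlflow>=2.2.1"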

Infrastructure Deployment

The MLflow Experiment Tracker can be deployed directly from the ZenML CLI:

# optionally assigning an existing bucket to the MLflow Experiment Tracker
zenml experiment-tracker deploy mlflow_tracker --flavor=mlflow -x mlflow_bucket=gs://my_bucket --provider=<YOUR_PROVIDER>

You can pass other configurations specific to the stack components as key-value arguments. If you don't provide a name, a random one is generated for you. For more information about how to use the CLI for this, please refer to the dedicated documentation section.

Authentication Methods

You need to configure the following credentials for authentication to a remote MLflow tracking server:

  • tracking_uri: The URL pointing to the MLflow tracking server. If using an MLflow Tracking Server managed by Databricks, then the value of this attribute should be "databricks".

  • tracking_username: Username for authenticating with the MLflow tracking server.

  • tracking_password: Password for authenticating with the MLflow tracking server.

  • tracking_token (in place of tracking_username and tracking_password): Token for authenticating with the MLflow tracking server.

  • tracking_insecure_tls (optional): Set to True to skip verifying the SSL certificate of the MLflow tracking server.

  • databricks_host: The host of the Databricks workspace with the MLflow-managed server to connect to. This is only required if the tracking_uri value is set to "databricks". More information: Access the MLflow tracking server from outside Databricks

Either tracking_token or tracking_username and tracking_password must be specified.

This option configures the credentials for the MLflow tracking service directly as stack component attributes.

This is not recommended for production settings as the credentials won't be stored securely and will be clearly visible in the stack configuration.

# Register the MLflow experiment tracker
zenml experiment-tracker register mlflow_experiment_tracker --flavor=mlflow \
    --tracking_uri=<URI> --tracking_token=<TOKEN>

# You can also register it with a username and password instead of a token:
# zenml experiment-tracker register mlflow_experiment_tracker --flavor=mlflow \
#    --tracking_uri=<URI> --tracking_username=<USERNAME> --tracking_password=<PASSWORD>

# Register and set a stack with the new experiment tracker
zenml stack register custom_stack -e mlflow_experiment_tracker ... --set
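
As a more secure alternative, the credentials can be stored in a ZenML secret and referenced from the stack component attributes. This is a sketch assuming a secret named mlflow_secret and ZenML's {{secret_name.key}} reference syntax:

# Create a ZenML secret holding the MLflow credentials
zenml secret create mlflow_secret \
    --tracking_username=<USERNAME> --tracking_password=<PASSWORD>

# Reference the secret keys in the experiment tracker attributes
zenml experiment-tracker register mlflow_experiment_tracker --flavor=mlflow \
    --tracking_uri=<URI> \
    --tracking_username={{mlflow_secret.tracking_username}} \
    --tracking_password={{mlflow_secret.tracking_password}}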

For more up-to-date information on the MLflow Experiment Tracker implementation and its configuration, you can have a look at the SDK docs.

How do you use it?

To be able to log information from a ZenML pipeline step using the MLflow Experiment Tracker component in the active stack, you need to enable the experiment tracker for that step by passing its name to the @step decorator. Then use MLflow's logging or auto-logging capabilities as you would normally do, e.g.:

import mlflow
import numpy as np
import tensorflow as tf

from zenml import step


@step(experiment_tracker="<MLFLOW_TRACKER_STACK_COMPONENT_NAME>")
def tf_trainer(
    x_train: np.ndarray,
    y_train: np.ndarray,
) -> tf.keras.Model:
    """Train a neural net from scratch to recognize MNIST digits and return
    the trained model."""
    # compile model (example architecture; adjust to your use case)
    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=["accuracy"],
    )

    # train model, letting MLflow auto-log parameters, metrics and the model
    mlflow.tensorflow.autolog()
    model.fit(x_train, y_train, epochs=5)

    # log additional information to MLflow explicitly if needed
    mlflow.log_param("custom_param", "custom_value")

    return model
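
A step like this is then invoked from a regular ZenML pipeline. The following is a minimal sketch; the load_data step and training_pipeline are illustrative placeholders, not part of the integration:

from typing import Tuple

import numpy as np
from zenml import pipeline, step


@step
def load_data() -> Tuple[np.ndarray, np.ndarray]:
    # placeholder data; in practice, load your real training set
    return np.random.rand(100, 28, 28), np.random.randint(0, 10, size=100)


@pipeline
def training_pipeline():
    x_train, y_train = load_data()
    tf_trainer(x_train=x_train, y_train=y_train)


if __name__ == "__main__":
    training_pipeline()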

Instead of hardcoding an experiment tracker name, you can also use the Client to dynamically use the experiment tracker of your active stack:

from zenml import step
from zenml.client import Client

experiment_tracker = Client().active_stack.experiment_tracker


@step(experiment_tracker=experiment_tracker.name)
def tf_trainer(...):
    ...

MLflow UI

MLflow comes with its own UI that you can use to find further details about your tracked experiments.

You can find the URL of the MLflow experiment linked to a specific ZenML run via the metadata of the step in which the experiment tracker was used:

from zenml.client import Client

client = Client()
last_run = client.get_pipeline("<PIPELINE_NAME>").last_run
trainer_step = last_run.get_step("<STEP_NAME>")
tracking_url = trainer_step.run_metadata["experiment_tracker_url"].value

This will be the URL of the corresponding experiment in your deployed MLflow instance, or a link to the corresponding mlflow experiment file if you are using local MLflow.

If you are using local MLflow, you can use the mlflow ui command to start MLflow at localhost:5000 where you can then explore the UI in your browser.

mlflow ui --backend-store-uri <TRACKING_URL>

Additional configuration

For additional configuration of the MLflow experiment tracker, you can pass MLFlowExperimentTrackerSettings to create nested runs or add additional tags to your MLflow runs:

import mlflow
import numpy as np
from zenml import step
from zenml.integrations.mlflow.flavors.mlflow_experiment_tracker_flavor import MLFlowExperimentTrackerSettings

mlflow_settings = MLFlowExperimentTrackerSettings(
    nested=True,  # create nested MLflow runs
    tags={"key": "value"}
)


@step(
    experiment_tracker="<MLFLOW_TRACKER_STACK_COMPONENT_NAME>",
    settings={
        "experiment_tracker.mlflow": mlflow_settings
    }
)
def step_one(
    data: np.ndarray,
) -> np.ndarray:
    ...
Check out the SDK docs for a full list of available attributes and this docs page for more information on how to specify settings.
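
The same settings can also be applied to a whole pipeline instead of a single step; a minimal sketch, reusing the mlflow_settings object from above:

from zenml import pipeline


@pipeline(settings={"experiment_tracker.mlflow": mlflow_settings})
def training_pipeline():
    ...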
