Orchestrators

How to orchestrate ML pipelines

The orchestrator is an essential component in any MLOps stack as it is responsible for running your machine learning pipelines. To do so, the orchestrator provides an environment which is set up to execute the steps of your pipeline. It also makes sure that the steps of your pipeline only get executed once all their inputs (which are outputs of previous steps of your pipeline) are available.

Many of ZenML's remote orchestrators build Docker images in order to transport and execute your pipeline code. If you want to learn more about how Docker images are built by ZenML, check out this guide.
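
For example, the contents of these images can typically be customized by attaching Docker settings to a pipeline. The following is a minimal sketch assuming a ZenML release that exposes DockerSettings under zenml.config; the extra requirements listed are purely illustrative:

```python
# Minimal sketch: customizing the Docker image a remote orchestrator builds.
# Assumes a ZenML release that provides zenml.config.DockerSettings; the
# requirements listed here are illustrative placeholders.
from zenml import pipeline
from zenml.config import DockerSettings

docker_settings = DockerSettings(
    requirements=["scikit-learn", "pandas"],  # extra pip packages baked into the image
)

@pipeline(settings={"docker": docker_settings})
def my_pipeline():
    # Pipeline body omitted; the steps would be called here.
    ...
```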

When to use it

The orchestrator is a mandatory component in the ZenML stack. It is used to run your machine learning pipelines, so you are required to configure it in all of your stacks.

Orchestrator Flavors

Out of the box, ZenML comes with a local orchestrator that is already part of the default stack and runs pipelines locally. Additional orchestrators are provided by integrations:

| Flavor | Integration | Notes |
| --- | --- | --- |
| local | built-in | Runs your pipelines locally. |
| local_docker | built-in | Runs your pipelines locally using Docker. |
| kubernetes | kubernetes | Runs your pipelines in Kubernetes clusters. |
| kubeflow | kubeflow | Runs your pipelines using Kubeflow. |
| vertex | gcp | Runs your pipelines in Vertex AI. |
| airflow | airflow | Runs your pipelines locally using Airflow. |
| github | github | Runs your pipelines using GitHub Actions. |

If you would like to see the available flavors of orchestrators, you can use the command:

zenml orchestrator flavor list

How to use it

You don't need to directly interact with any ZenML orchestrator in your code. As long as the orchestrator that you want to use is part of your active ZenML stack, using the orchestrator is as simple as executing a Python file that runs a ZenML pipeline:

python file_that_runs_a_zenml_pipeline.py
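
For illustration, such a file could look like the minimal sketch below. It assumes a recent ZenML release in which steps are called directly inside a function decorated with @pipeline (older releases used a slightly different definition style), and the step logic itself is purely illustrative. The orchestrator of the active stack makes sure train_model only runs once the output of load_data is available:

```python
# file_that_runs_a_zenml_pipeline.py -- minimal sketch; step contents are illustrative.
from zenml import pipeline, step

@step
def load_data() -> dict:
    """Produce some toy training data."""
    return {"features": [[1.0], [2.0], [3.0]], "labels": [0, 1, 0]}

@step
def train_model(data: dict) -> None:
    """Consume the output of load_data; runs only once that output is available."""
    print(f"Training on {len(data['labels'])} samples")

@pipeline
def training_pipeline():
    # The orchestrator of the active stack executes these steps in dependency order.
    data = load_data()
    train_model(data)

if __name__ == "__main__":
    training_pipeline()
```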

Specifying per-step resources

If some of your steps require the orchestrator to execute them on specific hardware, you can specify resource requirements on those steps as described here.
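
As a rough sketch, assuming a ZenML release that exposes ResourceSettings under zenml.config, per-step hardware requirements could be declared like this (the resource values are placeholders, and whether they are honored depends on your orchestrator):

```python
# Minimal sketch: requesting hardware for a single step via ResourceSettings.
# Assumes a ZenML release that provides zenml.config.ResourceSettings; whether the
# request is honored depends on the orchestrator in your active stack.
from zenml import step
from zenml.config import ResourceSettings

@step(settings={"resources": ResourceSettings(cpu_count=8, gpu_count=1, memory="16GB")})
def train_model() -> None:
    """Illustrative training step that asks for 8 CPUs, 1 GPU and 16GB of memory."""
    ...
```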

If your orchestrator of choice or the underlying hardware doesn't support this, you can also take a look at step operators.
