Orchestrators
Orchestrating the execution of ML pipelines.
The orchestrator is an essential component in any MLOps stack as it is responsible for running your machine learning pipelines. To do so, the orchestrator provides an environment that is set up to execute the steps of your pipeline. It also makes sure that the steps of your pipeline only get executed once all their inputs (which are outputs of previous steps of your pipeline) are available.
Many of ZenML's remote orchestrators build Docker images in order to transport and execute your pipeline code. If you want to learn more about how Docker images are built by ZenML, check out this guide.
When to use it
The orchestrator is a mandatory component in the ZenML stack. It is used to run your machine learning pipelines, and you are required to configure it in all of your stacks.
Orchestrator Flavors
Out of the box, ZenML comes with a local orchestrator already part of the default stack that runs pipelines locally. Additional orchestrators are provided by integrations:
| Orchestrator Flavor | Integration | Notes |
| --- | --- | --- |
| `local` | *built-in* | Runs your pipelines locally. |
| `local_docker` | *built-in* | Runs your pipelines locally using Docker. |
| `kubernetes` | `kubernetes` | Runs your pipelines in Kubernetes clusters. |
| `kubeflow` | `kubeflow` | Runs your pipelines using Kubeflow. |
| `vertex` | `gcp` | Runs your pipelines in Vertex AI. |
| `sagemaker` | `aws` | Runs your pipelines in SageMaker. |
| `azureml` | `azure` | Runs your pipelines in AzureML. |
| `tekton` | `tekton` | Runs your pipelines using Tekton. |
| `airflow` | `airflow` | Runs your pipelines using Airflow. |
| `vm_aws` | `skypilot[aws]` | Runs your pipelines in AWS VMs using SkyPilot. |
| `vm_gcp` | `skypilot[gcp]` | Runs your pipelines in GCP VMs using SkyPilot. |
| `vm_azure` | `skypilot[azure]` | Runs your pipelines in Azure VMs using SkyPilot. |
| `hyperai` | `hyperai` | Runs your pipelines in HyperAI.ai instances. |
| `custom` |  | Extend the orchestrator abstraction and provide your own implementation. |
If you would like to see the available flavors of orchestrators, you can use the command:
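```shell
zenml orchestrator flavor list
```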
How to use it
You don't need to directly interact with any ZenML orchestrator in your code. As long as the orchestrator that you want to use is part of your active ZenML stack, using the orchestrator is as simple as executing a Python file that runs a ZenML pipeline:
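A minimal sketch of such a file (the step and pipeline definitions here are purely illustrative):

```python
from zenml import pipeline, step


@step
def load_number() -> int:
    """An illustrative step that produces an output."""
    return 42


@step
def double(number: int) -> int:
    """An illustrative step that consumes the previous step's output."""
    return number * 2


@pipeline
def my_pipeline():
    # The orchestrator runs `double` only once `load_number` has produced its output.
    double(load_number())


if __name__ == "__main__":
    # Executes the pipeline on whichever orchestrator is in the active stack.
    my_pipeline()
```

Assuming you saved this as `run.py` (the filename is arbitrary), running it hands the pipeline off to the orchestrator in your active stack:

```shell
python run.py
```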
Inspecting Runs in the Orchestrator UI
If your orchestrator comes with a separate user interface (for example Kubeflow, Airflow, Vertex), you can get the URL to the orchestrator UI of a specific pipeline run using the following code snippet:
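A sketch of what this can look like, assuming a run named `<PIPELINE_RUN_NAME>` and that the orchestrator exposes its UI URL under the `orchestrator_url` metadata key:

```python
from zenml.client import Client

# Fetch the pipeline run by name (replace with one of your actual run names).
pipeline_run = Client().get_pipeline_run("<PIPELINE_RUN_NAME>")

# The orchestrator UI URL is stored in the run's metadata.
orchestrator_url = pipeline_run.run_metadata["orchestrator_url"].value
```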
Specifying per-step resources
If your steps require the orchestrator to execute them on specific hardware, you can specify the required resources on your steps as described here.
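As a sketch, resource requirements can be attached to a step via ZenML's `ResourceSettings` (the step name and resource values below are illustrative):

```python
from zenml import step
from zenml.config import ResourceSettings


@step(settings={"resources": ResourceSettings(cpu_count=8, gpu_count=2, memory="16GB")})
def train_model() -> None:
    """This step asks the orchestrator for 8 CPUs, 2 GPUs, and 16GB of memory."""
    ...
```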
If your orchestrator of choice or the underlying hardware doesn't support this, you can also take a look at step operators.