Orchestrators

Orchestrating the execution of ML pipelines.


The orchestrator is an essential component in any MLOps stack as it is responsible for running your machine learning pipelines. To do so, the orchestrator provides an environment that is set up to execute the steps of your pipeline. It also makes sure that the steps of your pipeline only get executed once all their inputs (which are outputs of previous steps of your pipeline) are available.
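
The scheduling behavior described above can be sketched in plain Python. This is a minimal illustration of the idea (not ZenML internals): each step runs only once all of its upstream outputs are available. The step names and the three-step pipeline below are hypothetical.

```python
def run_pipeline(steps, dependencies):
    """Execute `steps` (name -> callable) respecting `dependencies`
    (name -> list of upstream step names). Returns the execution order."""
    done, order = set(), []
    pending = dict(dependencies)
    while pending:
        # A step is ready once all of its inputs (upstream outputs) exist.
        ready = [s for s, ups in pending.items() if all(u in done for u in ups)]
        if not ready:
            raise RuntimeError("cycle detected in pipeline graph")
        for s in ready:
            steps[s]()  # the orchestrator's environment executes the step
            done.add(s)
            order.append(s)
            del pending[s]
    return order

# Hypothetical three-step pipeline: load -> train -> evaluate
order = run_pipeline(
    steps={"load": lambda: None, "train": lambda: None, "evaluate": lambda: None},
    dependencies={"evaluate": ["train"], "train": ["load"], "load": []},
)
print(order)  # ['load', 'train', 'evaluate']
```

Real orchestrators do the same bookkeeping, but distribute the step executions across containers, VMs, or cluster nodes instead of running them in-process.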

Many of ZenML's remote orchestrators build Docker images in order to transport and execute your pipeline code. If you want to learn more about how Docker images are built by ZenML, check out this guide.

When to use it

The orchestrator is a mandatory component in the ZenML stack. It is used to run all of your machine learning pipelines, and you are required to configure it in all of your stacks.

Orchestrator Flavors

Out of the box, ZenML comes with a local orchestrator already part of the default stack that runs pipelines locally. Additional orchestrators are provided by integrations:

| Orchestrator | Flavor | Integration | Notes |
|---|---|---|---|
| LocalOrchestrator | local | built-in | Runs your pipelines locally. |
| LocalDockerOrchestrator | local_docker | built-in | Runs your pipelines locally using Docker. |
| KubernetesOrchestrator | kubernetes | kubernetes | Runs your pipelines in Kubernetes clusters. |
| KubeflowOrchestrator | kubeflow | kubeflow | Runs your pipelines using Kubeflow. |
| VertexOrchestrator | vertex | gcp | Runs your pipelines in Vertex AI. |
| SagemakerOrchestrator | sagemaker | aws | Runs your pipelines in Sagemaker. |
| AzureMLOrchestrator | azureml | azure | Runs your pipelines in AzureML. |
| TektonOrchestrator | tekton | tekton | Runs your pipelines using Tekton. |
| AirflowOrchestrator | airflow | airflow | Runs your pipelines using Airflow. |
| SkypilotAWSOrchestrator | vm_aws | skypilot[aws] | Runs your pipelines in AWS VMs using SkyPilot. |
| SkypilotGCPOrchestrator | vm_gcp | skypilot[gcp] | Runs your pipelines in GCP VMs using SkyPilot. |
| SkypilotAzureOrchestrator | vm_azure | skypilot[azure] | Runs your pipelines in Azure VMs using SkyPilot. |
| HyperAIOrchestrator | hyperai | hyperai | Runs your pipelines in HyperAI.ai instances. |
| Custom Implementation | custom | | Extend the orchestrator abstraction and provide your own implementation. |

If you would like to see the available flavors of orchestrators, you can use the command:

zenml orchestrator flavor list
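
Once you have picked a flavor, registering an orchestrator and adding it to a stack follows the usual ZenML CLI pattern. A sketch using the kubernetes flavor as an example (the names `k8s_orchestrator`, `my_stack`, and `my_artifact_store` are placeholders, and the other stack components must already exist):

```shell
# Install the integration that provides the flavor (kubernetes as an example)
zenml integration install kubernetes

# Register an orchestrator of that flavor (name is a placeholder)
zenml orchestrator register k8s_orchestrator --flavor=kubernetes

# Register a stack that uses it and make the stack active
zenml stack register my_stack -o k8s_orchestrator -a my_artifact_store --set
```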

How to use it

You don't need to directly interact with any ZenML orchestrator in your code. As long as the orchestrator that you want to use is part of your active ZenML stack, using the orchestrator is as simple as executing a Python file that runs a ZenML pipeline:

python file_that_runs_a_zenml_pipeline.py

Inspecting Runs in the Orchestrator UI

If your orchestrator comes with a separate user interface (for example Kubeflow, Airflow, Vertex), you can get the URL to the orchestrator UI of a specific pipeline run using the following code snippet:

from zenml.client import Client

pipeline_run = Client().get_pipeline_run("<PIPELINE_RUN_NAME>")
orchestrator_url = pipeline_run.run_metadata["orchestrator_url"].value

Specifying per-step resources

If your steps require the orchestrator to execute them on specific hardware, you can specify them on your steps as described here.

If your orchestrator of choice or the underlying hardware doesn't support this, you can also take a look at step operators.
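
As a sketch of what a per-step hardware request can look like, resources are attached via the step's settings (this assumes ZenML's `ResourceSettings`; the step body and the resource amounts are illustrative, and whether the request is honored depends on the orchestrator flavor in your stack):

```python
from zenml import pipeline, step
from zenml.config import ResourceSettings

# Illustrative step: ask the orchestrator for specific CPU/GPU/memory.
@step(settings={"resources": ResourceSettings(cpu_count=4, gpu_count=1, memory="8GB")})
def train() -> None:
    ...  # training logic goes here

@pipeline
def training_pipeline():
    train()

if __name__ == "__main__":
    training_pipeline()
```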
