Local Docker Orchestrator

Orchestrating your pipelines to run in Docker.


The local Docker orchestrator is an orchestrator flavor that comes built-in with ZenML and runs your pipelines locally using Docker.

When to use it

You should use the local Docker orchestrator if:

  • you want the steps of your pipeline to run locally in isolated environments.

  • you want to debug issues that happen when running your pipeline in Docker containers without waiting and paying for remote infrastructure.

How to deploy it

To use the local Docker orchestrator, you only need to have Docker installed and running.
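
To confirm that Docker is up before running a pipeline, a quick sanity check from the command line (assuming a standard Docker installation) is:

# Prints daemon details if Docker is installed and the daemon is reachable;
# fails with an error otherwise
docker info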

How to use it

To use the local Docker orchestrator, register it and set it as part of your active stack:

zenml orchestrator register <ORCHESTRATOR_NAME> --flavor=local_docker

# Register and activate a stack with the new orchestrator
zenml stack register <STACK_NAME> -o <ORCHESTRATOR_NAME> ... --set
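
As a concrete illustration (the names docker_orchestrator and docker_stack are placeholders, and -a default assumes the default local artifact store that ships with ZenML):

zenml orchestrator register docker_orchestrator --flavor=local_docker

# Pair the orchestrator with the default artifact store and activate the stack
zenml stack register docker_stack -o docker_orchestrator -a default --set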

You can now run any ZenML pipeline using the local Docker orchestrator:

python file_that_runs_a_zenml_pipeline.py

Additional configuration

For additional configuration of the local Docker orchestrator, you can pass LocalDockerOrchestratorSettings when defining or running your pipeline. Check out the SDK docs for a full list of available attributes and this docs page for more information on how to specify settings. A full list of what can be passed in via run_args can be found in the Docker Python SDK documentation.

For example, if you wanted to specify the CPU count available to the Docker container (note: only configurable on Windows), you could write a simple pipeline like the following:

from zenml import step, pipeline
from zenml.orchestrators.local_docker.local_docker_orchestrator import (
    LocalDockerOrchestratorSettings,
)


@step
def return_one() -> int:
    """A trivial step that returns a constant."""
    return 1


# run_args are passed through to the Docker container that runs each step;
# cpu_count limits the number of usable CPUs (Windows only)
settings = {
    "orchestrator": LocalDockerOrchestratorSettings(
        run_args={"cpu_count": 3}
    )
}


@pipeline(settings=settings)
def simple_pipeline():
    return_one()

For more information and a full list of configurable attributes of the local Docker orchestrator, check out the SDK Docs.

Enabling CUDA for GPU-backed hardware

Note that if you wish to use this orchestrator to run steps on a GPU, you will need to follow the instructions on this page to ensure that it works. This requires some extra settings customization and is essential for enabling CUDA so that the GPU can deliver its full acceleration.
