Hello World

Your first ML pipeline with ZenML - from local development to cloud deployment in minutes.

This guide will help you build and deploy your first ZenML pipeline, starting locally and then transitioning to the cloud without changing your code. The same principles you'll learn here apply whether you're building classical ML models or AI agents.

1. Install ZenML

Start by installing ZenML in a fresh Python environment:

pip install zenml
zenml login

This gives you access to both the ZenML Python SDK and the CLI. It also opens the ZenML dashboard and connects it to your local client.
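
If you want a clean starting point, here is one way to create a fresh environment and confirm the installation (the environment name .venv below is just an example):

# Create and activate a fresh virtual environment (example name: .venv)
python -m venv .venv
source .venv/bin/activate

# Confirm that the ZenML CLI is available
zenml version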

2. Write your first pipeline

Create a simple run.py file with a basic workflow:

from zenml import step, pipeline


@step
def basic_step() -> str:
    """A simple step that returns a greeting message."""
    return "Hello World!"


@pipeline
def basic_pipeline():
    """A simple pipeline with just one step."""
    basic_step()


if __name__ == "__main__":
    basic_pipeline()
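
If you like, you can already run this script locally on ZenML's default stack before touching any cloud infrastructure; the run and its output are tracked automatically:

# Executes the pipeline on the local default stack
python run.py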

3. Create your ZenML account

Create a ZenML Pro account with a 14-day free trial (no payment information required). It will provide you with a dashboard to visualize pipelines, manage infrastructure, and collaborate with team members.

The ZenML Pro Dashboard

First-time users will need to set up a workspace and project. This process might take a few minutes. In the meantime, feel free to check out the Core Concepts page to get familiar with the main ideas ZenML is built on. Once ready, connect your local environment:

# Log in and select your workspace
zenml login

# Activate your project
zenml project set <PROJECT_NAME>
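
To double-check that your local client points at the right workspace and project, you can inspect its status (the exact output varies by ZenML version):

# Shows the server you are connected to and your active stack and project
zenml status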

4. Create your first remote stack

A "stack" in ZenML represents the infrastructure where your pipelines run. Moving from local to cloud resources is where ZenML truly shines.
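
Before adding cloud infrastructure, it can help to look at the stack you have been using so far, for example:

# List all registered stacks; the active one is marked
zenml stack list

# Show the components of the active stack (orchestrator, artifact store, ...)
zenml stack describe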

Stack deployment options

The fastest way to create a cloud stack is through the Infrastructure-as-Code option. This uses Terraform to deploy cloud resources and register them as a ZenML stack.

You'll need:

  • Terraform version 1.9+ installed locally

  • Authentication configured for your preferred cloud provider (AWS, GCP, or Azure)

  • Appropriate permissions to create resources in your cloud account

The deployment wizard will guide you through each step.
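
As a quick pre-flight check before launching the wizard, you can verify the prerequisites from your terminal; the AWS command below is only an example for one of the supported providers:

# Terraform 1.9+ is required
terraform -version

# Example: confirm your cloud credentials work (AWS shown; use the equivalent for GCP or Azure)
aws sts get-caller-identity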

5. Run your pipeline on the remote stack

Now run your pipeline in the cloud without changing any code.

First, activate your new stack:

zenml stack set <NAME_OF_YOUR_NEW_STACK>

Then run the exact same script:

python run.py

ZenML handles packaging code, building containers, orchestrating execution, and tracking artifacts automatically.
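
If your steps ever need extra Python packages inside those containers, you can attach Docker settings to the pipeline. A minimal sketch, assuming you want scikit-learn in the image (this is optional for the hello-world pipeline, and details may vary by ZenML version):

from zenml import pipeline, step
from zenml.config import DockerSettings

# Example requirement only; list whatever your steps actually import
docker_settings = DockerSettings(requirements=["scikit-learn"])


@step
def basic_step() -> str:
    """A simple step that returns a greeting message."""
    return "Hello World!"


@pipeline(settings={"docker": docker_settings})
def basic_pipeline():
    """The same pipeline, now with explicit Docker settings."""
    basic_step()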

Your pipeline in the ZenML dashboard

6. What's next?

Congratulations! You've just experienced the core value proposition of ZenML:

  • Write Once, Run Anywhere: The same code runs locally during development and in the cloud for production

  • Unified Framework: Use the same MLOps principles for both classical ML models and AI agents

  • Separation of Concerns: Infrastructure configuration and ML code are completely decoupled, enabling independent evolution of each

  • Full Tracking: Every run, artifact, and model is automatically versioned and tracked - whether it's a scikit-learn model or a multi-agent system (see the example after this list)
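
For example, you can fetch the most recent run of the hello-world pipeline from Python; a minimal sketch (attribute names may differ slightly between ZenML versions):

from zenml.client import Client

# Look up the pipeline by name and grab its most recent run
run = Client().get_pipeline("basic_pipeline").last_run

# Basic metadata that ZenML tracks for every run
print(run.name, run.status)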

To continue your ZenML journey, explore these key topics:

For LLMs and AI Agents:

  • LLMOps Guide: Write your first AI pipeline for agent development patterns

  • Agent Evaluation: Learn to systematically evaluate and compare different agent architectures

  • Prompt Management: Version and track prompts, tools, and agent configurations as artifacts

