Your First AI Pipeline
Choose your path and build your first pipeline with ZenML in minutes.
ZenML pipelines work the same way for classical ML, AI agents, and hybrid approaches. Choose your path below to get started:
What do you want to build?
Choose one of the paths below. The same ZenML pipeline pattern works for all of them—the difference is in your steps and how you orchestrate them.
Build AI Agents - Use LLMs and tools to create autonomous agents
Build Classical ML Pipelines - Train and serve ML models with scikit-learn, TensorFlow, or PyTorch
Build Hybrid Systems - Combine ML classifiers with agents
Path 1: Build AI Agents
Use large language models, prompts, and tools to build intelligent autonomous agents that can reason, take action, and interact with your systems.
Architecture example
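Below is a minimal sketch of an agent pipeline, assuming ZenML's @step and @pipeline decorators and the OpenAI Python client (with OPENAI_API_KEY set in your environment). The step names, prompt, and model name are illustrative; swap in whichever LLM client or agent framework you use.

from openai import OpenAI
from zenml import pipeline, step


@step
def build_prompt(question: str) -> str:
    """Assemble the prompt the agent will reason over."""
    return f"You are a support agent. Answer concisely.\n\nQuestion: {question}"


@step
def run_agent(prompt: str) -> str:
    """Call the LLM; the response is tracked as a versioned ZenML artifact."""
    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content or ""


@pipeline
def agent_pipeline(question: str = "How do I reset my password?"):
    prompt = build_prompt(question)
    run_agent(prompt)


if __name__ == "__main__":
    agent_pipeline()

Running the script executes the DAG locally on your default stack, and every prompt and response is versioned, so you can compare agent behavior run over run.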
Path 2: Build Classical ML Pipelines
Use scikit-learn, TensorFlow, PyTorch, or other ML frameworks to build data processing, feature engineering, training, and inference pipelines.
Architecture example
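A minimal sketch of a training pipeline, assuming scikit-learn and pandas; the dataset, model, and step names are placeholders for your own.

from typing import Annotated, Tuple

import pandas as pd
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from zenml import pipeline, step


@step
def load_data() -> Tuple[
    Annotated[pd.DataFrame, "train_set"], Annotated[pd.DataFrame, "test_set"]
]:
    """Load a toy dataset and split it; both splits are stored as versioned artifacts."""
    df = load_iris(as_frame=True).frame
    train, test = train_test_split(df, test_size=0.2, random_state=42)
    return train, test


@step
def train_model(train: pd.DataFrame) -> RandomForestClassifier:
    """Fit a classifier; the fitted model object is versioned automatically."""
    model = RandomForestClassifier(random_state=42)
    model.fit(train.drop(columns="target"), train["target"])
    return model


@step
def evaluate(model: RandomForestClassifier, test: pd.DataFrame) -> float:
    """Score the model on the held-out split."""
    preds = model.predict(test.drop(columns="target"))
    return float(accuracy_score(test["target"], preds))


@pipeline
def training_pipeline():
    train, test = load_data()
    model = train_model(train)
    evaluate(model, test)


if __name__ == "__main__":
    training_pipeline()

Swapping scikit-learn for TensorFlow or PyTorch only changes the step bodies; the pipeline structure and the tracking you get from it stay the same.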
Path 3: Build Hybrid Systems
Combine classical ML models and AI agents in a single pipeline. For example, use a classifier to route requests to specialized agents, or use agents to augment ML predictions.
Architecture example
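A minimal sketch of the routing pattern, with a stand-in classifier and placeholder agents; in practice the classifier step would load a trained model artifact and predict, and the routing step would call your LLM-backed agents.

from zenml import pipeline, step


@step
def classify_intent(ticket: str) -> str:
    """Stand-in for a trained intent classifier; in practice, load a model artifact and predict."""
    return "billing" if "invoice" in ticket.lower() else "technical"


@step
def route_to_agent(ticket: str, intent: str) -> str:
    """Dispatch the ticket to a specialized agent based on the predicted intent."""
    # Placeholder agents; swap in real LLM-backed agents here.
    if intent == "billing":
        return f"[billing agent] resolving: {ticket}"
    return f"[technical agent] resolving: {ticket}"


@pipeline
def hybrid_pipeline(ticket: str = "My invoice from March is missing."):
    intent = classify_intent(ticket)
    route_to_agent(ticket, intent)


if __name__ == "__main__":
    hybrid_pipeline()

Note that the routing decision happens inside a step, where the predicted label is a concrete value; the pipeline function itself only composes the DAG.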
Common Next Steps
Once you've chosen your path and gotten your first pipeline running:
Deploy remotely
All three paths use the same deployment pattern. Configure a remote stack and deploy:
# Create a remote stack (e.g., AWS)
zenml stack register my-remote-stack \
  --orchestrator aws-sagemaker \
  --artifact-store s3-bucket \
  --deployer aws
# Set it and deploy—your code doesn't change
zenml stack set my-remote-stack

Run in batch mode with:

python run.py

Deploy as a real-time endpoint with:

zenml pipeline deploy pipelines.my_pipeline.my_pipeline --config deploy_config.yaml

See Deploying ZenML for cloud setup details.
View the dashboard
Start the dashboard to explore your pipeline runs:
zenml login

In the dashboard, you'll see:
Pipeline DAGs: Visual representation of your steps and data flow
Artifacts: Versioned outputs from each step (models, reports, traces)
Metadata: Latency, tokens, metrics, or custom metadata you track (see the sketch after this list)
Timeline view: Compare step durations and identify bottlenecks
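For example, custom metadata such as latency or token counts can be attached from inside a step. This is a sketch, assuming a recent ZenML release that exposes zenml.log_metadata (older releases use log_artifact_metadata instead); the LLM call is stubbed out.

import time

from zenml import log_metadata, pipeline, step


@step
def call_llm(prompt: str) -> str:
    """Time a (stubbed) LLM call and attach latency and a rough token count as step metadata."""
    start = time.perf_counter()
    answer = f"echo: {prompt}"  # stand-in for a real LLM call
    log_metadata(
        metadata={
            "latency_s": round(time.perf_counter() - start, 4),
            "prompt_tokens": len(prompt.split()),  # rough proxy, not a real tokenizer
        }
    )
    return answer


@pipeline
def traced_pipeline():
    call_llm("Summarize our refund policy in one sentence.")


if __name__ == "__main__":
    traced_pipeline()

The logged values then appear alongside the step run in the dashboard.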
Core Concepts Recap
Regardless of which path you choose:
Pipelines - Orchestrate your workflow steps with automatic tracking
Steps - Modular, reusable units (data loading, model training, LLM inference, etc.)
Artifacts - Versioned outputs (models, predictions, traces, reports) with automatic logging
Stacks - Switch execution environments (local, remote, cloud) without code changes
Deployments - Turn pipelines into HTTP services with built-in UIs and monitoring
For deeper dives, explore the Concepts section in the docs.