Your First AI Pipeline
Choose your path and build your first pipeline with ZenML in minutes.
ZenML pipelines work the same for classical ML, AI agents, and hybrid approaches. Choose your path below to get started:
Why ZenML pipelines?
Reproducible & portable: Run the same code locally or on the cloud by switching stacks.
One approach for models and agents: Steps, pipelines, and artifacts work the same for classical ML (scikit-learn, PyTorch) and LLM-based agents.
Observe by default: Lineage and step metadata (e.g., latency, tokens, metrics) are tracked and visible in the dashboard.
What do you want to build?
Choose one of the paths below. The same ZenML pipeline pattern works for all of them—the difference is in your steps and how you orchestrate them.
Build AI Agents - Use LLMs and tools to create autonomous agents
Build Classical ML Pipelines - Train and serve ML models with scikit-learn, TensorFlow, or PyTorch
Build Hybrid Systems - Combine ML classifiers with agents
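The shared pattern is small enough to sketch with plain functions. ZenML's @step and @pipeline decorators wrap functions like these to add tracking, caching, and artifact versioning; the step bodies below are illustrative placeholders, not code from the examples:

```python
# Decorator-free sketch of the step/pipeline pattern. In ZenML, each
# function would carry a @step decorator and the composing function a
# @pipeline decorator; the wiring of outputs to inputs stays the same.

def load_data() -> list[float]:
    # A real step might read from a warehouse, an API, or a file store.
    return [1.0, 2.0, 3.0, 4.0]

def compute_mean(data: list[float]) -> float:
    # A real step might train a model or call an LLM instead.
    return sum(data) / len(data)

def run_pipeline() -> float:
    # The pipeline wires step outputs to step inputs as a DAG.
    data = load_data()
    return compute_mean(data)

print(run_pipeline())  # → 2.5
```

Because steps are just typed functions, the same composition works whether a step trains a scikit-learn model or calls an LLM.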
Path 1: Build AI Agents
Use large language models, prompts, and tools to build intelligent autonomous agents that can reason, take action, and interact with your systems.
Architecture example
Quick start
Follow the guide in examples/deploying_agent:
Define your steps: Use LLM APIs (OpenAI, Claude, etc.) to build reasoning steps
Deploy as HTTP service: Turn your agent into a managed endpoint
Invoke and monitor: Use the CLI, curl, or the embedded web UI to interact with your agent
Inspect traces: View agent reasoning, tool calls, and metadata in the ZenML dashboard
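The core of a reasoning step is a loop that asks the LLM what to do, dispatches any tool call, and returns an answer. A minimal sketch, with a deterministic stub standing in for a real OpenAI/Claude API call and a hypothetical get_weather tool, so it runs offline:

```python
# Minimal agent-style reasoning step with a stubbed LLM and one tool.
# call_llm is a stand-in for a provider API call (OpenAI, Claude, ...);
# get_weather is a hypothetical tool, not part of the ZenML examples.

def call_llm(prompt: str) -> str:
    # Stub: returns a tool directive or a final answer.
    if "weather" in prompt.lower():
        return "TOOL:get_weather:Berlin"
    return "FINAL:I can only answer weather questions."

def get_weather(city: str) -> str:
    # A real tool would call a weather API here.
    return f"Sunny in {city}"

def agent_step(user_input: str) -> str:
    """One reasoning turn: ask the LLM, dispatch a tool call if requested."""
    decision = call_llm(user_input)
    if decision.startswith("TOOL:"):
        _, tool_name, arg = decision.split(":", 2)
        observation = {"get_weather": get_weather}[tool_name](arg)
        return f"Agent: {observation}"
    return decision.removeprefix("FINAL:")

print(agent_step("What's the weather?"))  # → Agent: Sunny in Berlin
```

Deployed as a step, each turn's decision and tool call become traceable metadata in the dashboard.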
Example use cases
Automated document analysis (see deploying_agent)
Multi-turn chatbots with context
Autonomous workflows with tool integrations
Agentic RAG systems with retrieval steps
Related examples
agent_outer_loop: Combine ML classifiers with agents for hybrid intelligent systems
agent_comparison: Compare different agent architectures and LLM providers
agent_framework_integrations: Integrate with popular agent frameworks
llm_finetuning: Fine-tune LLMs for specialized tasks
Path 2: Build Classical ML Pipelines
Use scikit-learn, TensorFlow, PyTorch, or other ML frameworks to build data processing, feature engineering, training, and inference pipelines.
Architecture example
Quick start
Follow the guide in examples/deploying_ml_model:
Build your pipeline: Data loading → preprocessing → training → evaluation
Deploy the model: Serve your trained model as a real-time HTTP endpoint
Monitor performance: Track predictions, latency, and data drift in the dashboard
Iterate: Retrain and redeploy without code changes—just switch your orchestrator
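The load → preprocess → train → evaluate flow can be sketched end to end with plain Python and a one-parameter least-squares model, so no ML framework is needed to see the shape of the pipeline (in the real example, each function would be a ZenML step and the model a scikit-learn estimator):

```python
# Sketch of a training pipeline: load → preprocess → train → evaluate.
# The tiny dataset and closed-form model are illustrative stand-ins.

def load_data() -> list[tuple[float, float]]:
    return [(1, 2.1), (2, 3.9), (3, 6.0), (4, 8.2)]

def preprocess(rows: list[tuple[float, float]]) -> list[tuple[float, float]]:
    # Trivial cleaning rule: drop rows with non-positive features.
    return [(x, y) for x, y in rows if x > 0]

def train(rows: list[tuple[float, float]]) -> float:
    # Closed-form least-squares slope for y ≈ w * x (fit through origin).
    return sum(x * y for x, y in rows) / sum(x * x for x, _ in rows)

def evaluate(w: float, rows: list[tuple[float, float]]) -> float:
    # Mean absolute error of the fitted model.
    return sum(abs(y - w * x) for x, y in rows) / len(rows)

rows = preprocess(load_data())
w = train(rows)
print(round(w, 2), round(evaluate(w, rows), 3))  # slope ≈ 2.02, MAE ≈ 0.1
```

Swapping the stand-in model for a scikit-learn estimator changes only the train and evaluate steps; the pipeline wiring stays identical.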
Example use cases
Predictive models (regression, classification)
Time series forecasting
NLP pipelines (sentiment analysis, text classification)
Computer vision workflows
Model scoring and ranking systems
Related examples
e2e: End-to-end ML pipeline with data validation and model deployment
e2e_nlp: Domain-specific NLP pipeline example
mlops_starter: Production-ready MLOps setup with monitoring and governance
Path 3: Build Hybrid Systems
Combine classical ML models and AI agents in a single pipeline. For example, use a classifier to route requests to specialized agents, or use agents to augment ML predictions.
Architecture example
Quick start
Follow the guide in examples/agent_outer_loop:
Define both components: Classical ML classifier + AI agent steps
Wire them together: Use the classifier output to influence agent behavior
Deploy as one service: The entire hybrid system becomes a single endpoint
Monitor both: Track ML metrics and agent traces in the same dashboard
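The routing idea reduces to a few lines: a classifier assigns an intent, known intents go to specialized handlers, and everything else falls through to a generic agent. In this sketch a keyword rule stands in for the trained classifier, and the handler names are illustrative, not taken from the ZenML examples:

```python
# Sketch of the hybrid pattern: classifier output routes requests to
# specialized handlers, with a generic agent as the fallback path.

def classify_intent(text: str) -> str:
    # Stand-in for a trained ML classifier.
    t = text.lower()
    if "refund" in t:
        return "refund"
    if "invoice" in t:
        return "billing"
    return "unknown"

def refund_agent(text: str) -> str:
    return "Routing to the refund workflow."

def billing_agent(text: str) -> str:
    return "Fetching your invoices."

def generic_agent(text: str) -> str:
    # Fallback; a real system would call an LLM here.
    return "Let me look into that."

def hybrid_step(text: str) -> str:
    handlers = {"refund": refund_agent, "billing": billing_agent}
    return handlers.get(classify_intent(text), generic_agent)(text)

print(hybrid_step("I want a refund"))  # → Routing to the refund workflow.
```

This is also the upgrade path mentioned below: start with only generic_agent, collect traffic, train the classifier, and add routes without changing the pipeline shape.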
Example use cases
Intent classification with specialized agent handling
Upgrade paths: generic agent → train classifier → automatic routing
Ensemble systems combining multiple models and agents
Fact-checking pipelines with verification steps
Related examples
agent_outer_loop: Full hybrid example with automatic intent detection
deploying_agent: Start here for the agent piece
deploying_ml_model: Start here for the ML piece
Common Next Steps
Once you've chosen your path and gotten your first pipeline running:
Deploy remotely
All three paths use the same deployment pattern: configure a remote stack, then either run the pipeline in batch mode or deploy it as a real-time HTTP endpoint. Each example's README lists the exact commands.
See Deploying ZenML for cloud setup details.
View the dashboard
Start the dashboard to explore your pipeline runs.
In the dashboard, you'll see:
Pipeline DAGs: Visual representation of your steps and data flow
Artifacts: Versioned outputs from each step (models, reports, traces)
Metadata: Latency, tokens, metrics, or custom metadata you track
Timeline view: Compare step durations and identify bottlenecks
Core Concepts Recap
Regardless of which path you choose:
Pipelines - Orchestrate your workflow steps with automatic tracking
Steps - Modular, reusable units (data loading, model training, LLM inference, etc.)
Artifacts - Versioned outputs (models, predictions, traces, reports) with automatic logging
Stacks - Switch execution environments (local, remote, cloud) without code changes
Deployments - Turn pipelines into HTTP services with built-in UIs and monitoring
For deeper dives, explore the Concepts section in the docs.