Tekton Orchestrator

Orchestrating your pipelines to run on Tekton.

Tekton is a powerful and flexible open-source framework for creating CI/CD systems, allowing developers to build, test, and deploy across cloud providers and on-premise systems.

This component is only meant to be used within the context of a remote ZenML deployment scenario. Usage with a local ZenML deployment may lead to unexpected behavior!

When to use it

You should use the Tekton orchestrator if:

  • you're looking for a proven production-grade orchestrator.

  • you're looking for a UI in which you can track your pipeline runs.

  • you're already using Kubernetes or are not afraid of setting up and maintaining a Kubernetes cluster.

  • you're willing to deploy and maintain Tekton Pipelines on your cluster.

How to deploy it

You'll first need to set up a Kubernetes cluster and deploy Tekton Pipelines:

AWS

  • A remote ZenML server. See the deployment guide for more information.

  • Have an existing AWS EKS cluster set up.

  • Make sure you have the AWS CLI set up.

  • Download and install kubectl and configure it to talk to your EKS cluster using the following command:

    aws eks --region REGION update-kubeconfig --name CLUSTER_NAME
  • Install Tekton Pipelines onto your cluster.

GCP

  • A remote ZenML server. See the deployment guide for more information.

  • Have an existing GCP GKE cluster set up.

  • Make sure you have the Google Cloud CLI set up first.

  • Download and install kubectl and configure it to talk to your GKE cluster using the following command:

    gcloud container clusters get-credentials CLUSTER_NAME
  • Install Tekton Pipelines onto your cluster.

Azure

  • A remote ZenML server. See the deployment guide for more information.

  • Have an existing AKS cluster set up.

  • Make sure you have the az CLI set up first.

  • Download and install kubectl and configure it to talk to your AKS cluster using the following command:

    az aks get-credentials --resource-group RESOURCE_GROUP --name CLUSTER_NAME
  • Install Tekton Pipelines onto your cluster.

If one or more of the deployments are not in the Running state, try increasing the number of nodes in your cluster.

ZenML has only been tested with Tekton Pipelines >=0.38.3 and may not work with previous versions.

How to use it

To use the Tekton orchestrator, we need:

  • The ZenML tekton integration installed. If you haven't done so, run

    zenml integration install tekton -y
  • Docker installed and running.

  • Tekton Pipelines deployed on a remote cluster (see the deployment section above).

  • A remote artifact store as part of your stack.

  • A remote container registry as part of your stack.

  • kubectl installed and the name of the Kubernetes configuration context which points to the target cluster (run kubectl config get-contexts to see a list of available contexts). This is optional (see below).

It is recommended that you set up a Service Connector and use it to connect ZenML Stack Components to the remote Kubernetes cluster, especially if you are using a Kubernetes cluster managed by a cloud provider like AWS, GCP, or Azure. This guarantees that your stack is fully portable to other environments and your pipelines are fully reproducible.

We can then register the orchestrator and use it in our active stack. This can be done in two ways:

  1. If you have a Service Connector configured to access the remote Kubernetes cluster, you no longer need to set the kubernetes_context attribute to a local kubectl context. In fact, you don't need the local Kubernetes CLI at all. You can connect the stack component to the Service Connector instead:

    $ zenml orchestrator register <ORCHESTRATOR_NAME> --flavor tekton
    Running with active stack: 'default' (repository)
    Successfully registered orchestrator `<ORCHESTRATOR_NAME>`.
    
    $ zenml service-connector list-resources --resource-type kubernetes-cluster -e
    The following 'kubernetes-cluster' resources can be accessed by service connectors that you have configured:
    ┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━┓
    ┃             CONNECTOR ID             │ CONNECTOR NAME        │ CONNECTOR TYPE │ RESOURCE TYPE         │ RESOURCE NAMES      ┃
    ┠──────────────────────────────────────┼───────────────────────┼────────────────┼───────────────────────┼─────────────────────┨
    ┃ e33c9fac-5daa-48b2-87bb-0187d3782cde │ aws-iam-multi-eu      │ 🔶 aws         │ 🌀 kubernetes-cluster │ kubeflowmultitenant ┃
    ┃                                      │                       │                │                       │ zenbox              ┃
    ┠──────────────────────────────────────┼───────────────────────┼────────────────┼───────────────────────┼─────────────────────┨
    ┃ ed528d5a-d6cb-4fc4-bc52-c3d2d01643e5 │ aws-iam-multi-us      │ 🔶 aws         │ 🌀 kubernetes-cluster │ zenhacks-cluster    ┃
    ┠──────────────────────────────────────┼───────────────────────┼────────────────┼───────────────────────┼─────────────────────┨
    ┃ 1c54b32a-4889-4417-abbd-42d3ace3d03a │ gcp-sa-multi          │ 🔵 gcp         │ 🌀 kubernetes-cluster │ zenml-test-cluster  ┃
    ┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━┛
    
    $ zenml orchestrator connect <ORCHESTRATOR_NAME> --connector aws-iam-multi-us
    Running with active stack: 'default' (repository)
    Successfully connected orchestrator `<ORCHESTRATOR_NAME>` to the following resources:
    ┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━┓
    ┃             CONNECTOR ID             │ CONNECTOR NAME   │ CONNECTOR TYPE │ RESOURCE TYPE         │ RESOURCE NAMES   ┃
    ┠──────────────────────────────────────┼──────────────────┼────────────────┼───────────────────────┼──────────────────┨
    ┃ ed528d5a-d6cb-4fc4-bc52-c3d2d01643e5 │ aws-iam-multi-us │ 🔶 aws         │ 🌀 kubernetes-cluster │ zenhacks-cluster ┃
    ┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━┛
    
    # Register and activate a stack with the new orchestrator
    $ zenml stack register <STACK_NAME> -o <ORCHESTRATOR_NAME> ... --set
  2. If you don't have a Service Connector on hand and you don't want to register one, the local kubectl client needs to be configured with a context pointing to the remote cluster, and the kubernetes_context attribute of the orchestrator must be set to the name of that context:

    zenml orchestrator register <ORCHESTRATOR_NAME> --flavor=tekton --kubernetes_context=<KUBERNETES_CONTEXT>
    
    # Register and activate a stack with the new orchestrator
    zenml stack register <STACK_NAME> -o <ORCHESTRATOR_NAME> ... --set

ZenML will build a Docker image called <CONTAINER_REGISTRY_URI>/zenml:<PIPELINE_NAME> which includes your code and use it to run your pipeline steps in Tekton. Check out this page if you want to learn more about how ZenML builds these images and how you can customize them.

You can now run any ZenML pipeline using the Tekton orchestrator:

python file_that_runs_a_zenml_pipeline.py
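
For illustration, file_that_runs_a_zenml_pipeline.py could be as simple as the following sketch (the step and pipeline names here are hypothetical, not part of the Tekton integration):

from zenml import pipeline, step


@step
def load_data() -> dict:
    """Hypothetical step that produces some data."""
    return {"features": [1, 2, 3], "labels": [0, 1, 0]}


@step
def train_model(data: dict) -> None:
    """Hypothetical step that consumes the data."""
    print(f"Training on {len(data['features'])} samples...")


@pipeline
def my_tekton_pipeline():
    data = load_data()
    train_model(data)


if __name__ == "__main__":
    # With the Tekton stack active, calling the pipeline submits it to the cluster.
    my_tekton_pipeline()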

Tekton UI

Tekton comes with its own UI that you can use to find further details about your pipeline runs, such as the logs of your steps.

To find the Tekton UI endpoint, we can use the following command:

kubectl get ingress -n tekton-pipelines -o jsonpath='{.items[0].spec.rules[0].host}'

Additional configuration

For additional configuration of the Tekton orchestrator, you can pass TektonOrchestratorSettings, which allows you to configure node selectors, affinity, and tolerations to apply to the Kubernetes pods running your pipeline. These can be specified either using the Kubernetes model objects or as dictionaries.

from kubernetes.client.models import V1Toleration

from zenml.integrations.tekton.flavors.tekton_orchestrator_flavor import (
    TektonOrchestratorSettings,
)

tekton_settings = TektonOrchestratorSettings(
    pod_settings={
        # Only schedule the pipeline pods on nodes in a specific node group
        "affinity": {
            "nodeAffinity": {
                "requiredDuringSchedulingIgnoredDuringExecution": {
                    "nodeSelectorTerms": [
                        {
                            "matchExpressions": [
                                {
                                    "key": "node.kubernetes.io/name",
                                    "operator": "In",
                                    "values": ["my_powerful_node_group"],
                                }
                            ]
                        }
                    ]
                }
            }
        },
        # Tolerate the taint that keeps other workloads off those nodes
        "tolerations": [
            V1Toleration(
                key="node.kubernetes.io/name",
                operator="Equal",
                value="",
                effect="NoSchedule",
            )
        ],
    }
)
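
Since pod_settings accepts plain dictionaries as well, the same toleration can be written without importing the Kubernetes client models. A minimal sketch:

tekton_settings = TektonOrchestratorSettings(
    pod_settings={
        # Same toleration as above, expressed as a plain dictionary
        # instead of a V1Toleration model object.
        "tolerations": [
            {
                "key": "node.kubernetes.io/name",
                "operator": "Equal",
                "value": "",
                "effect": "NoSchedule",
            }
        ]
    }
)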

If your pipeline steps have certain hardware requirements, you can specify them as ResourceSettings:

from zenml.config import ResourceSettings

resource_settings = ResourceSettings(cpu_count=8, memory="16GB")

These settings can then be specified at either the pipeline level or the step level:

# Either specify on pipeline-level
@pipeline(
    settings={
        "orchestrator": tekton_settings,
        "resources": resource_settings,
    }
)
def my_pipeline():
    ...

# OR specify settings on step-level
@step(
    settings={
        "orchestrator": tekton_settings,
        "resources": resource_settings,
    }
)
def my_step():
    ...
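
The same settings can also be applied at runtime without touching the decorators, e.g. via with_options (a sketch, reusing the settings objects from the examples above):

# Build a configured copy of the pipeline and run it.
configured_pipeline = my_pipeline.with_options(
    settings={
        "orchestrator": tekton_settings,
        "resources": resource_settings,
    }
)
configured_pipeline()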

Check out the SDK docs for a full list of available attributes of TektonOrchestratorSettings, and this docs page for more information on how to specify settings.

For more information and a full list of configurable attributes of the Tekton orchestrator itself, check out the SDK docs as well.

Enabling CUDA for GPU-backed hardware

Note that if you wish to use this orchestrator to run steps on a GPU, you will need to follow the instructions on this page to ensure that it works. It requires some extra settings customization and is essential to enable CUDA for the GPU to give its full acceleration.
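
As a rough illustration of the kind of customization involved (a sketch only; the parent image below is a placeholder, and the right CUDA image depends on your framework), the GPU-specific settings might look like this:

from zenml import pipeline
from zenml.config import DockerSettings, ResourceSettings

# Assumption: a CUDA-enabled base image suitable for your ML framework.
docker_settings = DockerSettings(
    parent_image="pytorch/pytorch:2.2.0-cuda12.1-cudnn8-runtime"
)


@pipeline(
    settings={
        "docker": docker_settings,
        # Request a GPU for the pipeline's steps.
        "resources": ResourceSettings(gpu_count=1),
    }
)
def my_gpu_pipeline():
    ...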
