Tekton Orchestrator

How to orchestrate pipelines with Tekton
The Tekton orchestrator is an orchestrator flavor provided with the ZenML tekton integration that uses Tekton Pipelines to run your pipelines.
This component is only meant to be used within the context of a remote ZenML deployment scenario. Usage with a local ZenML deployment may lead to unexpected behavior!

When to use it

You should use the Tekton orchestrator if:
  • you're looking for a proven production-grade orchestrator.
  • you're looking for a UI in which you can track your pipeline runs.
  • you're already using Kubernetes or are not afraid of setting up and maintaining a Kubernetes cluster.
  • you're willing to deploy and maintain Tekton Pipelines on your cluster.

How to deploy it

You'll first need to set up a Kubernetes cluster and deploy Tekton Pipelines:
AWS
  • A remote ZenML server. See the deployment guide for more information.
  • Have an existing AWS EKS cluster set up.
  • Make sure you have the AWS CLI set up.
  • Download and install kubectl and configure it to talk to your EKS cluster using the following command:
    aws eks --region REGION update-kubeconfig --name CLUSTER_NAME
  • Install Tekton Pipelines onto your cluster.

GCP
  • A remote ZenML server. See the deployment guide for more information.
  • Have an existing GCP GKE cluster set up.
  • Make sure you have the Google Cloud CLI set up first.
  • Download and install kubectl and configure it to talk to your GKE cluster using the following command:
    gcloud container clusters get-credentials CLUSTER_NAME
  • Install Tekton Pipelines onto your cluster.

Azure
  • A remote ZenML server. See the deployment guide for more information.
  • Have an existing AKS cluster set up.
  • Make sure you have the az CLI set up first.
  • Download and install kubectl and configure it to talk to your AKS cluster using the following command:
    az aks get-credentials --resource-group RESOURCE_GROUP --name CLUSTER_NAME
  • Install Tekton Pipelines onto your cluster.
If one or more of the deployments are not in the Running state, try increasing the number of nodes in your cluster.
ZenML has only been tested with Tekton Pipelines >=0.38.3 and may not work with previous versions.
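As a sketch of the "Install Tekton Pipelines onto your cluster" step, the following uses the release manifest published by the Tekton project and then waits for the deployments mentioned above to become available. It is wrapped in a function so it can be sourced and run on demand, and assumes kubectl is already configured to talk to your remote cluster:

```shell
# Sketch: install Tekton Pipelines and wait for its deployments to be ready.
# Assumes kubectl already points at your remote cluster.
install_tekton_pipelines() {
  # Latest release manifest published by the Tekton project.
  kubectl apply --filename \
    "https://storage.googleapis.com/tekton-releases/pipeline/latest/release.yaml"

  # Wait until the controller and webhook deployments reach the Running state.
  kubectl wait --namespace tekton-pipelines \
    --for=condition=Available deployment --all --timeout=300s
}
```

If the `kubectl wait` command times out, that is the symptom described above: try increasing the number of nodes in your cluster.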

How to use it

To use the Tekton orchestrator, we need:
  • The ZenML tekton integration installed. If you haven't done so, run
    zenml integration install tekton -y
  • Docker installed and running.
  • kubectl installed.
  • Tekton pipelines deployed on a remote cluster. See the deployment section for more information.
  • The name of your Kubernetes context which points to your remote cluster. Run kubectl config get-contexts to see a list of available contexts.
  • A remote artifact store as part of your stack.
  • A remote container registry as part of your stack.
We can then register the orchestrator and use it in our active stack:
zenml orchestrator register <ORCHESTRATOR_NAME> \
    --flavor=tekton \
    --kubernetes_context=<KUBERNETES_CONTEXT>

# Register and activate a stack with the new orchestrator
zenml stack register <STACK_NAME> -o <ORCHESTRATOR_NAME> ... --set
ZenML will build a Docker image called <CONTAINER_REGISTRY_URI>/zenml:<PIPELINE_NAME> which includes your code and use it to run your pipeline steps in Tekton. Check out this page if you want to learn more about how ZenML builds these images and how you can customize them.
Once the orchestrator is part of the active stack, we need to run zenml stack up before running any pipelines. This command forwards a port, so you can view the Tekton UI in your browser.
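If you prefer to forward the port yourself instead of relying on zenml stack up, a manual port-forward works as a sketch. This assumes the Tekton Dashboard is installed in the tekton-pipelines namespace under the service name tekton-dashboard, serving on its default port 9097:

```shell
# Sketch: forward the Tekton Dashboard (assumed installed) to localhost:9097.
# Wrapped in a function so it can be sourced and called when needed.
open_tekton_ui() {
  kubectl port-forward -n tekton-pipelines service/tekton-dashboard 9097:9097
}
# While the forward is running, browse to http://localhost:9097.
```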
You can now run any ZenML pipeline using the Tekton orchestrator:
python file_that_runs_a_zenml_pipeline.py

Additional configuration

For additional configuration of the Tekton orchestrator, you can pass TektonOrchestratorSettings which allows you to configure (among others) the following attributes:
  • pod_settings: Node selectors, affinity, and tolerations to apply to the Kubernetes Pods running your pipeline. These can be specified either using the Kubernetes model objects or as dictionaries.
from zenml.integrations.tekton.flavors.tekton_orchestrator_flavor import TektonOrchestratorSettings
from kubernetes.client.models import V1Toleration

tekton_settings = TektonOrchestratorSettings(
    pod_settings={
        "affinity": {
            "nodeAffinity": {
                "requiredDuringSchedulingIgnoredDuringExecution": {
                    "nodeSelectorTerms": [
                        {
                            "matchExpressions": [
                                {
                                    "key": "node.kubernetes.io/name",
                                    "operator": "In",
                                    "values": ["my_powerful_node_group"],
                                }
                            ]
                        }
                    ]
                }
            }
        },
        "tolerations": [
            V1Toleration(
                key="node.kubernetes.io/name",
                operator="Equal",
                value="",
                effect="NoSchedule",
            )
        ],
    }
)


@pipeline(
    settings={
        "orchestrator.tekton": tekton_settings
    }
)
...
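The affinity block in the example above is just the plain Kubernetes scheduling schema, so it can also be generated programmatically. As an illustrative sketch (the helper name is hypothetical, not part of ZenML), a small function that builds the same nodeAffinity dictionary:

```python
def node_affinity(key: str, values: list) -> dict:
    """Build a requiredDuringScheduling nodeAffinity dict for pod_settings.

    Hypothetical helper for illustration; ZenML simply expects the plain
    Kubernetes affinity schema shown in the example above.
    """
    return {
        "nodeAffinity": {
            "requiredDuringSchedulingIgnoredDuringExecution": {
                "nodeSelectorTerms": [
                    {
                        "matchExpressions": [
                            {"key": key, "operator": "In", "values": list(values)}
                        ]
                    }
                ]
            }
        }
    }


# Produces the same "affinity" value used in the pod_settings example.
affinity = node_affinity("node.kubernetes.io/name", ["my_powerful_node_group"])
```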
Check out the API docs for a full list of configurable attributes of the Tekton orchestrator, and this docs page for more information on how to specify settings.
A concrete example of using the Tekton orchestrator can be found here.

Enabling CUDA for GPU-backed hardware

Note that if you wish to use this orchestrator to run steps on a GPU, you will need to follow the instructions on this page to ensure that it works. This requires some additional settings customization and is essential for enabling CUDA so that the GPU can deliver its full acceleration.