Tekton Orchestrator
How to orchestrate pipelines with Tekton
The Tekton orchestrator is an orchestrator flavor provided with the ZenML `tekton` integration that uses Tekton Pipelines to run your pipelines.

This component is only meant to be used within the context of a remote ZenML deployment scenario. Usage with a local ZenML deployment may lead to unexpected behavior!
You should use the Tekton orchestrator if:
- you're looking for a proven production-grade orchestrator.
- you're looking for a UI in which you can track your pipeline runs.
- you're already using Kubernetes or are not afraid of setting up and maintaining a Kubernetes cluster.
- you're willing to deploy and maintain Tekton Pipelines on your cluster.
You'll first need to set up a Kubernetes cluster and deploy Tekton Pipelines. Once the cluster is up, point your local `kubectl` at it; AWS, GCP, and Azure each provide a CLI command for this. On Azure, for example:

```shell
az aks get-credentials --resource-group RESOURCE_GROUP --name CLUSTER_NAME
```
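The AWS and GCP equivalents are the standard kubeconfig commands for EKS and GKE (the region, zone, and cluster names below are placeholders):

```shell
# AWS EKS: add the cluster to your local kubeconfig
aws eks update-kubeconfig --region REGION --name CLUSTER_NAME

# GCP GKE: fetch credentials for the cluster
gcloud container clusters get-credentials CLUSTER_NAME --zone ZONE
```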
If one or more of the deployments are not in the `Running` state, try increasing the number of nodes in your cluster.

ZenML has only been tested with Tekton Pipelines >=0.38.3 and may not work with previous versions.
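To check the state of the deployments, assuming Tekton Pipelines was installed into its default `tekton-pipelines` namespace:

```shell
kubectl get deployments -n tekton-pipelines
```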
To use the Tekton orchestrator, we need:

- The ZenML `tekton` integration installed. If you haven't done so, run
  ```shell
  zenml integration install tekton -y
  ```
- The name of your Kubernetes context which points to your remote cluster. Run `kubectl config get-contexts` to see a list of available contexts.
We can then register the orchestrator and use it in our active stack:
```shell
zenml orchestrator register <ORCHESTRATOR_NAME> \
    --flavor=tekton \
    --kubernetes_context=<KUBERNETES_CONTEXT>

# Register and activate a stack with the new orchestrator
zenml stack register <STACK_NAME> -o <ORCHESTRATOR_NAME> ... --set
```
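To confirm the new stack is active and wired up as expected, you can inspect it with the standard ZenML CLI:

```shell
zenml stack describe
```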
ZenML will build a Docker image called `<CONTAINER_REGISTRY_URI>/zenml:<PIPELINE_NAME>` which includes your code and use it to run your pipeline steps in Tekton. Check out this page if you want to learn more about how ZenML builds these images and how you can customize them.

You can now run any ZenML pipeline using the Tekton orchestrator:

```shell
python file_that_runs_a_zenml_pipeline.py
```
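If you installed the Tekton Dashboard alongside Tekton Pipelines, you can track the run in its UI. One common way to reach it locally is a port-forward (this assumes the dashboard service runs under its default name and namespace):

```shell
kubectl port-forward -n tekton-pipelines service/tekton-dashboard 9097:9097
```

Then open http://localhost:9097 in your browser.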
For additional configuration of the Tekton orchestrator, you can pass `TektonOrchestratorSettings` which allows you to configure (among others) the following attributes:

- `pod_settings`: Node selectors, affinity, and tolerations to apply to the Kubernetes Pods running your pipeline. These can be either specified using the Kubernetes model objects or as dictionaries.
```python
from zenml import pipeline  # on older ZenML versions: from zenml.pipelines import pipeline
from zenml.integrations.tekton.flavors.tekton_orchestrator_flavor import TektonOrchestratorSettings
from kubernetes.client.models import V1Toleration

tekton_settings = TektonOrchestratorSettings(
    pod_settings={
        # Only schedule pipeline pods on nodes that match this expression
        "affinity": {
            "nodeAffinity": {
                "requiredDuringSchedulingIgnoredDuringExecution": {
                    "nodeSelectorTerms": [
                        {
                            "matchExpressions": [
                                {
                                    "key": "node.kubernetes.io/name",
                                    "operator": "In",
                                    "values": ["my_powerful_node_group"],
                                }
                            ]
                        }
                    ]
                }
            }
        },
        # Tolerate the taint that reserves those nodes
        "tolerations": [
            V1Toleration(
                key="node.kubernetes.io/name",
                operator="Equal",
                value="",
                effect="NoSchedule",
            )
        ],
    }
)


@pipeline(
    settings={
        "orchestrator.tekton": tekton_settings
    }
)
def my_pipeline():
    # `my_pipeline` is a placeholder name; invoke your steps here.
    ...
```
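Since `pod_settings` also accepts plain dictionaries, the toleration above can equivalently be written without the Kubernetes client models (a sketch mirroring the `V1Toleration` fields):

```python
tekton_settings = TektonOrchestratorSettings(
    pod_settings={
        "tolerations": [
            {
                "key": "node.kubernetes.io/name",
                "operator": "Equal",
                "value": "",
                "effect": "NoSchedule",
            }
        ]
    }
)
```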
Check out the API docs for a full list of available attributes of the `TektonOrchestratorSettings` and this docs page for more information on how to specify settings.

For more information and a full list of configurable attributes of the Tekton orchestrator itself, check out the API docs.
Note that if you wish to use this orchestrator to run steps on a GPU, you will need to follow the instructions on this page to ensure that it works. It requires some additional settings customization, which is essential for enabling CUDA so the GPU can deliver its full acceleration.
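As a rough sketch of the kind of settings customization involved (the exact configuration is described on the page linked above; the step name and GPU count here are placeholders), GPU resources are typically requested through ZenML's `ResourceSettings`:

```python
from zenml import step  # on older ZenML versions: from zenml.steps import step
from zenml.config import ResourceSettings


# Request one GPU for this step; the orchestrator translates this
# into the corresponding Kubernetes resource request.
@step(settings={"resources": ResourceSettings(gpu_count=1)})
def training_step() -> None:
    ...  # placeholder: run your CUDA workload here
```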