Tekton Orchestrator

Orchestrating your pipelines to run on Tekton.

Tekton is a powerful and flexible open-source framework for creating CI/CD systems, allowing developers to build, test, and deploy across cloud providers and on-premises systems.

When to use it

You should use the Tekton orchestrator if:

  • you're looking for a proven production-grade orchestrator.

  • you're looking for a UI in which you can track your pipeline runs.

  • you're already using Kubernetes or are not afraid of setting up and maintaining a Kubernetes cluster.

  • you're willing to deploy and maintain Tekton Pipelines on your cluster.

How to deploy it

You'll first need to set up a Kubernetes cluster and deploy Tekton Pipelines:
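Tekton publishes a release manifest that can be applied directly with kubectl; a typical installation (you may want to pin a specific release version instead of latest) looks like this:

```shell
# Install the latest Tekton Pipelines release into the tekton-pipelines namespace
kubectl apply --filename https://storage.googleapis.com/tekton-releases/pipeline/latest/release.yaml

# Watch the deployments until all pods reach the Running state
kubectl get pods --namespace tekton-pipelines --watch
```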

If one or more of the deployments are not in the Running state, try increasing the number of nodes in your cluster.

How to use it

To use the Tekton orchestrator, we need:

  • The ZenML tekton integration installed. If you haven't done so, run zenml integration install tekton

  • Docker installed and running.

  • Tekton pipelines deployed on a remote cluster. See the deployment section for more information.

  • A remote artifact store as part of your stack.

  • kubectl installed, and the name of the Kubernetes configuration context that points to the target cluster (i.e. run kubectl config get-contexts to see a list of available contexts). This is optional (see below).

It is recommended that you set up a Service Connector and use it to connect ZenML Stack Components to the remote Kubernetes cluster, especially if you are using a Kubernetes cluster managed by a cloud provider like AWS, GCP, or Azure. This guarantees that your Stack is fully portable to other environments and your pipelines are fully reproducible.

We can then register the orchestrator and use it in our active stack. This can be done in two ways:

  1. If you have a Service Connector configured to access the remote Kubernetes cluster, you no longer need to set the kubernetes_context attribute to a local kubectl context. In fact, you don't need the local Kubernetes CLI at all. You can connect the stack component to the Service Connector instead:

  2. If you don't have a Service Connector on hand and you don't want to register one, the local Kubernetes kubectl client needs to be configured with a configuration context pointing to the remote cluster. The kubernetes_context attribute of the stack component must also be configured with the value of that context:
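As a sketch, the two registration flows look like the following; ORCHESTRATOR_NAME, CONNECTOR_NAME, and KUBERNETES_CONTEXT are placeholders for your own names:

```shell
# Register the orchestrator with the Tekton flavor
zenml orchestrator register <ORCHESTRATOR_NAME> --flavor=tekton

# Option 1: attach it to a registered Service Connector
# (no local kubectl configuration required)
zenml orchestrator connect <ORCHESTRATOR_NAME> --connector <CONNECTOR_NAME>

# Option 2: instead of connecting, register the orchestrator
# with an explicit kubectl context
zenml orchestrator register <ORCHESTRATOR_NAME> --flavor=tekton \
    --kubernetes_context=<KUBERNETES_CONTEXT>

# Either way, add the orchestrator to your active stack
zenml stack update -o <ORCHESTRATOR_NAME>
```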

ZenML will build a Docker image called <CONTAINER_REGISTRY_URI>/zenml:<PIPELINE_NAME> which includes your code and use it to run your pipeline steps in Tekton. Check out this page if you want to learn more about how ZenML builds these images and how you can customize them.

You can now run any ZenML pipeline using the Tekton orchestrator:
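For example, a minimal pipeline sketch (the step and pipeline names here are illustrative): executing the Python file submits the run to Tekton as long as the stack above is active.

```python
from zenml import pipeline, step


@step
def say_hello() -> str:
    # A trivial step; in practice this would train a model, process data, etc.
    return "Hello from Tekton!"


@pipeline
def hello_pipeline():
    say_hello()


if __name__ == "__main__":
    # Running the pipeline submits it to the orchestrator of the active stack
    hello_pipeline()
```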

Tekton UI

Tekton comes with its own UI that you can use to find further details about your pipeline runs, such as the logs of your steps.

To find the Tekton UI endpoint, we can use the following command:
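Assuming the Tekton Dashboard is exposed through an Ingress in the tekton-pipelines namespace (adjust the namespace and service name to match your installation), something along these lines:

```shell
# Read the host of the Ingress fronting the Tekton Dashboard
kubectl get ingress -n tekton-pipelines -o jsonpath='{.items[0].spec.rules[0].host}'

# Alternatively, port-forward the dashboard service
# and open http://localhost:9097 in your browser
kubectl port-forward -n tekton-pipelines service/tekton-dashboard 9097:9097
```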

Additional configuration

For additional configuration of the Tekton orchestrator, you can pass TektonOrchestratorSettings, which allows you to configure node selectors, affinity, and tolerations to apply to the Kubernetes pods running your pipeline. These can be specified either using the Kubernetes model objects or as dictionaries.
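A sketch of what that can look like; the node selector and toleration values are illustrative, and the import path should be checked against your installed ZenML version:

```python
from zenml.integrations.tekton.flavors.tekton_orchestrator_flavor import (
    TektonOrchestratorSettings,
)

# Pod-level scheduling options are grouped under pod_settings,
# passed here as plain dictionaries
tekton_settings = TektonOrchestratorSettings(
    pod_settings={
        "node_selectors": {"cloud.google.com/gke-nodepool": "ml-pool"},
        "tolerations": [
            {
                "key": "gpu",
                "operator": "Equal",
                "value": "present",
                "effect": "NoSchedule",
            }
        ],
    }
)
```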

If your pipeline steps have certain hardware requirements, you can specify them as ResourceSettings:
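For example (the numbers are illustrative):

```python
from zenml.config import ResourceSettings

# Request 8 CPUs, 16 GB of memory, and one GPU for a step
resource_settings = ResourceSettings(cpu_count=8, gpu_count=1, memory="16GB")
```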

These settings can then be specified on either pipeline-level or step-level:
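A sketch of both levels; note that depending on your ZenML version the orchestrator settings key is "orchestrator.tekton" or just "orchestrator":

```python
from zenml import pipeline, step
from zenml.config import ResourceSettings
from zenml.integrations.tekton.flavors.tekton_orchestrator_flavor import (
    TektonOrchestratorSettings,
)

tekton_settings = TektonOrchestratorSettings(
    pod_settings={"node_selectors": {"accelerator": "gpu"}}  # illustrative value
)
resource_settings = ResourceSettings(cpu_count=4, memory="8GB")


# Step-level settings override pipeline-level ones for that step only
@step(settings={"resources": ResourceSettings(gpu_count=1)})
def my_step() -> None:
    ...


# Pipeline-level settings apply to every step unless overridden
@pipeline(
    settings={
        "orchestrator.tekton": tekton_settings,
        "resources": resource_settings,
    }
)
def my_pipeline():
    my_step()
```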

Check out the SDK docs for a full list of available attributes and this docs page for more information on how to specify settings.

For more information and a full list of configurable attributes of the Tekton orchestrator, check out the SDK Docs.

Enabling CUDA for GPU-backed hardware

Note that if you wish to use this orchestrator to run steps on a GPU, you will need to follow the instructions on this page to ensure that it works. It requires some extra settings customization and is essential for enabling CUDA so that the GPU can deliver its full acceleration.
