Kubeflow Orchestrator
How to orchestrate pipelines with Kubeflow
This is an older version of the ZenML documentation; please refer to the latest version of the documentation for up-to-date information.
The Kubeflow orchestrator is an orchestrator flavor provided with the ZenML kubeflow
integration that uses Kubeflow Pipelines to run your pipelines.
When to use it
You should use the Kubeflow orchestrator if:
you're looking for a proven production-grade orchestrator.
you're looking for a UI in which you can track your pipeline runs.
you're already using Kubernetes or are not afraid of setting up and maintaining a Kubernetes cluster.
you're willing to deploy and maintain Kubeflow Pipelines on your cluster.
How to deploy it
The Kubeflow orchestrator supports two different modes: local and remote. If you want to run the orchestrator on a local Kubernetes cluster running on your machine, no additional infrastructure setup is necessary.
If you want to run your pipelines on a remote cluster instead, you'll need to set up a Kubernetes cluster and deploy Kubeflow Pipelines:
Have an existing AWS EKS cluster set up.
Make sure you have the AWS CLI set up.
Install Kubeflow Pipelines onto your cluster.
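As a sketch, a standalone Kubeflow Pipelines deployment can be installed with kubectl using the official kustomize manifests. The `PIPELINE_VERSION` value below is a placeholder; check the Kubeflow Pipelines releases for the version you actually want to deploy:

```shell
# Placeholder release tag -- pick the Kubeflow Pipelines version you need
export PIPELINE_VERSION=2.0.5

# Install the cluster-scoped resources (CRDs) first
kubectl apply -k "github.com/kubeflow/pipelines/manifests/kustomize/cluster-scoped-resources?ref=$PIPELINE_VERSION"

# Wait until the Application CRD is established before installing the rest
kubectl wait --for condition=established --timeout=60s crd/applications.app.k8s.io

# Install the platform-agnostic Kubeflow Pipelines deployment
kubectl apply -k "github.com/kubeflow/pipelines/manifests/kustomize/env/platform-agnostic?ref=$PIPELINE_VERSION"
```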
If one or more of the deployments are not in the `Running` state, try increasing the number of nodes in your cluster.
If you're installing Kubeflow Pipelines manually, make sure the Kubernetes service is called exactly `ml-pipeline`. This is a requirement for ZenML to connect to your Kubeflow Pipelines deployment.
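You can verify that a service with this exact name exists (assuming Kubeflow Pipelines was installed into the `kubeflow` namespace, which is the default for the official manifests):

```shell
# The Kubeflow Pipelines API service must be named exactly ml-pipeline
kubectl -n kubeflow get service ml-pipeline
```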
How to use it
To use the Kubeflow orchestrator, we need:
The ZenML `kubeflow` integration installed. If you haven't done so, run `zenml integration install kubeflow`.
Docker installed and running.
kubectl installed.
When using the Kubeflow orchestrator locally, you'll additionally need:
K3D installed to spin up a local Kubernetes cluster.
A local container registry as part of your stack.
The local Kubeflow Pipelines deployment requires more than 2 GB of RAM, so if you're using Docker Desktop make sure to update the resource limits in the preferences.
We can then register the orchestrator and use it in our active stack:
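A minimal sketch of the registration, using the ZenML CLI (the component names `kubeflow_orchestrator` and `kubeflow_stack` are placeholders, and the exact set of stack components you combine with the orchestrator depends on your setup):

```shell
# Register an orchestrator with the kubeflow flavor
zenml orchestrator register kubeflow_orchestrator --flavor=kubeflow

# Register a stack that uses it (together with your other components)
# and make it the active stack
zenml stack register kubeflow_stack \
    -o kubeflow_orchestrator \
    -a default \
    --set
```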
ZenML will build a Docker image called `<CONTAINER_REGISTRY_URI>/zenml:<PIPELINE_NAME>` which includes your code, and use it to run your pipeline steps in Kubeflow. Check out this page if you want to learn more about how ZenML builds these images and how you can customize them.
Once the orchestrator is part of the active stack, we need to run `zenml stack up` before running any pipelines. This command:
(in the local case) uses K3D to provision a Kubernetes cluster on your machine and deploys Kubeflow Pipelines on it.
forwards a port so you can view the Kubeflow UI in your browser.
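For example:

```shell
# Provision local infrastructure (if needed) and port-forward the Kubeflow UI
zenml stack up
```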
You can now run any ZenML pipeline using the Kubeflow orchestrator:
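For example (the file name below is a placeholder for whichever script defines and runs your ZenML pipeline):

```shell
# Runs the pipeline on the Kubeflow orchestrator in the active stack
python run.py
```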
A concrete example of using the Kubeflow orchestrator can be found here.
For more information and a full list of configurable attributes of the Kubeflow orchestrator, check out the API Docs.