This is an older version of the ZenML documentation. To view the latest version, please visit this up-to-date URL.
The Kubeflow orchestrator is an orchestrator flavor provided with the ZenML kubeflow integration that uses Kubeflow Pipelines to run your pipelines.
When to use it
You should use the Kubeflow orchestrator if:
you're looking for a proven production-grade orchestrator.
you're looking for a UI in which you can track your pipeline runs.
you're already using Kubernetes or are not afraid of setting up and maintaining a Kubernetes cluster.
you're willing to deploy and maintain Kubeflow Pipelines on your cluster.
How to deploy it
The Kubeflow orchestrator supports two different modes: local and remote. If you want to run the orchestrator on a local Kubernetes cluster on your machine, no additional infrastructure setup is necessary.
If you want to run your pipelines on a remote cluster instead, you'll need to set up a Kubernetes cluster and deploy Kubeflow Pipelines. However, the workflow controller installed with the Kubeflow installation has Docker set as the default runtime. To make your pipelines work, you have to change this value to one of the supported alternatives.
This change is made by editing the containerRuntimeExecutor property of the ConfigMap corresponding to the workflow controller. Run the following commands to first find out which ConfigMap to change and then edit it to reflect the new value:
kubectl get configmap -n kubeflow
kubectl edit configmap CONFIGMAP_NAME -n kubeflow
# This opens up an editor that can be used to make the change.
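As a reference, here is a minimal sketch of what the relevant part of the edited ConfigMap might look like. The ConfigMap name and the chosen executor are assumptions for illustration; verify the actual name with the first command above and pick an executor supported by your cluster:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: workflow-controller-configmap  # assumed name; verify with `kubectl get configmap -n kubeflow`
  namespace: kubeflow
data:
  containerRuntimeExecutor: emissary  # any supported non-Docker executor works here
```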
If one or more of the deployments are not in the Running state, try increasing the number of nodes in your cluster.
If you're installing Kubeflow Pipelines manually, make sure the Kubernetes service is called exactly ml-pipeline. This is a requirement for ZenML to connect to your Kubeflow Pipelines deployment.
How to use it
To use the Kubeflow orchestrator, we need:
The ZenML kubeflow integration installed. If you haven't done so, run
zenml integration install kubeflow
The local Kubeflow Pipelines deployment requires more than 2 GB of RAM, so if you're using Docker Desktop make sure to update the resource limits in the preferences.
We can then register the orchestrator and use it in our active stack:
zenml orchestrator register <NAME> \
    --flavor=kubeflow

# Add the orchestrator to the active stack
zenml stack update -o <NAME>
When using the Kubeflow orchestrator with a remote cluster, you'll additionally need
Kubeflow pipelines deployed on a remote cluster. See the deployment section for more information.
The name of your Kubernetes context which points to your remote cluster. Run kubectl config get-contexts to see a list of available contexts.
A remote metadata store as part of your stack. Kubeflow Pipelines already comes with its own MySQL database that is deployed in your Kubernetes cluster. If you want to use this database as your metadata store to get started quickly, check out the corresponding documentation page. For a more production-ready setup we suggest using a MySQL metadata store instead.
We can then register the orchestrator and use it in our active stack:
zenml orchestrator register <NAME> \
    --flavor=kubeflow \
    --kubernetes_context=<KUBERNETES_CONTEXT>

# Add the orchestrator to the active stack
zenml stack update -o <NAME>
ZenML will build a Docker image called <CONTAINER_REGISTRY_URI>/zenml:<PIPELINE_NAME> which includes your code and use it to run your pipeline steps in Kubeflow. Check out this page if you want to learn more about how ZenML builds these images and how you can customize them.
Once the orchestrator is part of the active stack, we need to run zenml stack up before running any pipelines. This command:
(in the local case) uses K3D to provision a Kubernetes cluster on your machine and deploys Kubeflow Pipelines on it.
forwards a port so you can view the Kubeflow UI in your browser.
You can now run any ZenML pipeline using the Kubeflow orchestrator:
python file_that_runs_a_zenml_pipeline.py
A concrete example of using the Kubeflow orchestrator can be found here.
For more information and a full list of configurable attributes of the Kubeflow orchestrator, check out the API Docs.
Important Note for Multi-Tenancy Deployments
Kubeflow has a notion of multi-tenancy built into its deployment. Kubeflow's multi-user isolation simplifies user operations because each user can only view and edit the Kubeflow components and model artifacts defined in their configuration.
Currently, the default ZenML Kubeflow orchestrator yields the following error when running a pipeline:
HTTP response body: {"error":"Invalid input error: Invalid resource references for experiment. ListExperiment requires filtering by namespace.","code":3,"message":"Invalid input error: Invalid resource references for experiment. ListExperiment requires filtering by
namespace.","details":[{"@type":"type.googleapis.com/api.Error","error_message":"Invalid resource references for experiment. ListExperiment requires filtering by namespace.","error_details":"Invalid input error: Invalid resource references for experiment. ListExperiment requires filtering by namespace."}]}
The current workaround is as follows. Please place the following code at the top of your runner script (commonly called run.py):
import json
import os

import kfp
import requests
from kubernetes import client as k8s_client

# Import paths may vary across ZenML versions; adjust if needed.
from zenml.integrations.kubeflow.orchestrators import KubeflowOrchestrator
from zenml.integrations.kubeflow.orchestrators.kubeflow_entrypoint_configuration import (
    KubeflowEntrypointConfiguration,
)

NAMESPACE = "namespace_name"  # set this
USERNAME = "foo"  # set this
PASSWORD = "bar"  # set this
HOST = "https://qux.com"  # set this
KFP_CONFIG = os.path.expanduser("~/.config/kfp/context.json")  # set this manually if you'd like


def get_kfp_token(username: str, password: str) -> str:
    """Get token for kubeflow authentication."""
    session = requests.Session()
    response = session.get(HOST)
    headers = {
        "Content-Type": "application/x-www-form-urlencoded",
    }
    data = {"login": username, "password": password}
    session.post(response.url, headers=headers, data=data)
    session_cookie = session.cookies.get_dict()["authservice_session"]
    return session_cookie


token = get_kfp_token(USERNAME, PASSWORD)
cookies = "authservice_session=" + token

# 1: Set user namespace globally
kfp.Client(host=HOST, cookies=cookies).set_user_namespace(NAMESPACE)

# 2: Set cookie globally in the kfp config file
with open(KFP_CONFIG, "r") as f:
    data = json.load(f)
    data["client_authentication_cookie"] = cookies

os.remove(KFP_CONFIG)
with open(KFP_CONFIG, "w") as f:
    json.dump(data, f)

original = KubeflowOrchestrator._configure_container_op


def patch_container_op(container_op):
    original(container_op)
    container_op.container.add_env_variable(
        k8s_client.V1EnvVar(
            name="ZENML_RUN_NAME",
            value="{{workflow.annotations.pipelines.kubeflow.org/run_name}}",
        )
    )


KubeflowOrchestrator._configure_container_op = staticmethod(patch_container_op)


def patch_get_run_name(self, pipeline_name):
    return os.getenv("ZENML_RUN_NAME")


KubeflowEntrypointConfiguration.get_run_name = patch_get_run_name

# Continue with your normal pipeline runner code...
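The patching pattern used in the workaround (capture the original static method, wrap it, reassign it with staticmethod) is easy to get wrong. Here is the same mechanic in isolation with a toy class, so you can verify it independently of Kubeflow. The class and attribute names below are made up for illustration:

```python
class Orchestrator:
    """Toy stand-in for the orchestrator class being patched."""

    @staticmethod
    def configure(op: dict) -> None:
        op["configured"] = True


# Capture the original before reassigning, exactly as in the workaround above.
_original = Orchestrator.configure


def _patched(op: dict) -> None:
    _original(op)  # keep the original behavior ...
    op["env"] = {"ZENML_RUN_NAME": "demo-run"}  # ... then extend it


# Reassign as a staticmethod so it still works when called on the class.
Orchestrator.configure = staticmethod(_patched)

op = {}
Orchestrator.configure(op)
print(op)  # {'configured': True, 'env': {'ZENML_RUN_NAME': 'demo-run'}}
```

Capturing the original in a separate name before reassigning is the key step: it avoids infinite recursion when the wrapper calls through to the original implementation.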
Please note that in the above code, HOST should also be passed when registering the orchestrator, using the kubeflow_hostname parameter:
zenml orchestrator register <NAME> \
    --flavor=kubeflow \
    --kubeflow_hostname=<HOST>
Also note that the above has not been tested on all Kubeflow versions, so there may be bugs with older Kubeflow versions. If you run into issues, please reach out to us on Slack.
In future ZenML versions, multi-tenancy will be natively supported. See this Slack thread for more details on how the above workaround came about.
Please note that all of the above serves to initialize the kfp.Client() instance used in the standard orchestrator logic. This code can be seen here.
You can simply override this logic and add your custom authentication scheme if needed. Read here for more details on how to create a custom orchestrator.