Google Cloud Vertex AI Orchestrator
How to orchestrate pipelines with Vertex AI
You should use the Vertex orchestrator if:
- you're already using GCP.
- you're looking for a proven production-grade orchestrator.
- you're looking for a UI in which you can track your pipeline runs.
- you're looking for a managed solution for running your pipelines.
- you're looking for a serverless solution for running your pipelines.
To use the Vertex AI orchestrator, you first need to deploy ZenML to the cloud. We recommend deploying ZenML in the same Google Cloud project as the Vertex infrastructure, but this is not required. Make sure you are connected to the remote ZenML server before using this stack component.
The only other requirement for using the ZenML Vertex orchestrator is enabling the relevant Vertex AI APIs on the Google Cloud project.
To quickly enable the APIs and create the other resources necessary for this integration, you can also consider using the Vertex AI stack recipe, which helps you set up the infrastructure with one click.
To use the Vertex orchestrator, we need:
- The ZenML `gcp` integration installed. If you haven't done so, run `zenml integration install gcp`.
- The GCP project ID and location in which you want to run your Vertex AI pipelines.
We can then register the orchestrator and use it in our active stack:
```shell
zenml orchestrator register <ORCHESTRATOR_NAME> \
    --flavor=vertex \
    --project=<PROJECT_ID> \
    --location=<GCP_LOCATION>

# Register and activate a stack with the new orchestrator
zenml stack register <STACK_NAME> -o <ORCHESTRATOR_NAME> ... --set
```
You can now run any ZenML pipeline using the Vertex orchestrator:
For additional configuration of the Vertex orchestrator, you can pass `VertexOrchestratorSettings`, which allows you to configure (among others) the following attributes:
- `pod_settings`: Node selectors, affinity, and tolerations to apply to the Kubernetes Pods running your pipeline. These can be specified either using the Kubernetes model objects or as dictionaries.
```python
from zenml.integrations.gcp.flavors.vertex_orchestrator_flavor import VertexOrchestratorSettings
from kubernetes.client.models import V1Toleration

vertex_settings = VertexOrchestratorSettings(
    pod_settings={
        "tolerations": [
            V1Toleration(key="node.kubernetes.io/name", operator="Equal", value="", effect="NoSchedule")
        ]
    }
)
```
Note that if you wish to use this orchestrator to run steps on a GPU, you will need to follow the instructions on this page to ensure that it works. This entails some extra settings customization and is essential for enabling CUDA so the GPU can deliver its full acceleration.