Google Cloud VertexAI Orchestrator
How to orchestrate pipelines with Vertex AI
The Vertex orchestrator is an orchestrator flavor provided with the ZenML gcp
integration that uses Vertex AI to run your pipelines.
When to use it
You should use the Vertex orchestrator if:
you're already using GCP.
you're looking for a proven production-grade orchestrator.
you're looking for a UI in which you can track your pipeline runs.
you're looking for a managed solution for running your pipelines.
you're looking for a serverless solution for running your pipelines.
How to deploy it
Check out our ZenML Cloud Guide for information on how to set up the Vertex orchestrator.
How to use it
To use the Vertex orchestrator, we need:
The ZenML gcp integration installed. If you haven't done so, run zenml integration install gcp.
Docker installed and running.
kubectl installed.
A remote artifact store as part of your stack.
A remote metadata store as part of your stack.
A remote container registry as part of your stack.
The GCP project ID and location in which you want to run your Vertex AI pipelines.
We can then register the orchestrator and use it in our active stack:
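As a minimal sketch of the registration commands: the component names (vertex_orchestrator, vertex_stack) and the other stack component names are placeholders, and the exact flags may vary between ZenML versions, so check the CLI help for your release:

```shell
# Register the orchestrator, pointing it at your GCP project and region
zenml orchestrator register vertex_orchestrator \
    --flavor=vertex \
    --project=<PROJECT_ID> \
    --location=<GCP_LOCATION>

# Register a stack that combines it with the remote artifact store,
# metadata store, and container registry required above
# (assumed to be registered already)
zenml stack register vertex_stack \
    -o vertex_orchestrator \
    -a <REMOTE_ARTIFACT_STORE> \
    -m <REMOTE_METADATA_STORE> \
    -c <REMOTE_CONTAINER_REGISTRY>

# Make it the active stack
zenml stack set vertex_stack
```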
ZenML will build a Docker image called <CONTAINER_REGISTRY_URI>/zenml:<PIPELINE_NAME> that includes your code, and use it to run your pipeline steps in Vertex AI. Check out this page if you want to learn more about how ZenML builds these images and how you can customize them.
You can now run any ZenML pipeline using the Vertex orchestrator:
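For illustration, here is a sketch of a trivial pipeline, assuming the legacy @pipeline/@step API of older ZenML releases; the names my_pipeline and trainer are hypothetical. With the Vertex stack active, calling run() submits the steps to Vertex AI instead of running them locally:

```python
from zenml.pipelines import pipeline
from zenml.steps import step


@step
def trainer() -> int:
    """A trivial step; in practice this would train a model."""
    return 42


@pipeline
def my_pipeline(trainer_step):
    trainer_step()


if __name__ == "__main__":
    # Runs on the active stack; with vertex_stack active, this is Vertex AI
    my_pipeline(trainer_step=trainer()).run()
```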
A concrete example of using the Vertex orchestrator can be found here.
For more information and a full list of configurable attributes of the Vertex orchestrator, check out the API Docs.