Google Cloud Vertex AI Orchestrator

How to orchestrate pipelines with Vertex AI

This is an older version of the ZenML documentation. To view the latest version, please visit the up-to-date URL.

The Vertex orchestrator is an orchestrator flavor provided by the ZenML gcp integration that uses Vertex AI to run your pipelines.

When to use it

You should use the Vertex orchestrator if:

  • you're already using GCP.

  • you're looking for a proven production-grade orchestrator.

  • you're looking for a UI in which you can track your pipeline runs.

  • you're looking for a managed solution for running your pipelines.

  • you're looking for a serverless solution for running your pipelines.

How to deploy it

Check out our ZenML Cloud Guide for information on how to set up the Vertex orchestrator.

How to use it

To use the Vertex orchestrator, we need:

  • the ZenML gcp integration installed.

  • Docker installed and running.

  • a remote artifact store and a remote container registry as part of our stack.

  • GCP credentials with permission to run Vertex AI jobs.

  • the GCP project ID and location in which we want to run our pipelines.

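The integration itself can be installed with the ZenML CLI; a minimal setup sketch:

```shell
# Install the ZenML gcp integration, which provides the Vertex orchestrator flavor
zenml integration install gcp
```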
We can then register the orchestrator and use it in our active stack:

zenml orchestrator register <NAME> \
    --flavor=vertex \
    --project=<PROJECT_ID> \
    --location=<GCP_LOCATION>

# Add the orchestrator to the active stack
zenml stack update -o <NAME>

ZenML will build a Docker image called <CONTAINER_REGISTRY_URI>/zenml:<PIPELINE_NAME> which includes your code and use it to run your pipeline steps in Vertex AI. Check out this page if you want to learn more about how ZenML builds these images and how you can customize them.
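As a quick illustration of the naming scheme, here is how the image name resolves for hypothetical values (the registry URI and pipeline name below are placeholders, not values from this guide):

```python
# Hypothetical values -- substitute your own container registry URI and pipeline name.
container_registry_uri = "gcr.io/my-project"
pipeline_name = "my_pipeline"

# ZenML tags the built image as <CONTAINER_REGISTRY_URI>/zenml:<PIPELINE_NAME>
image_name = f"{container_registry_uri}/zenml:{pipeline_name}"
print(image_name)
```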

You can now run any ZenML pipeline using the Vertex orchestrator:

python file_that_runs_a_zenml_pipeline.py

A concrete example of using the Vertex orchestrator can be found here.

For more information and a full list of configurable attributes of the Vertex orchestrator, check out the API Docs.
