Google Cloud Vertex AI Orchestrator

Orchestrating your pipelines to run on Vertex AI.

Vertex AI Pipelines is a serverless ML workflow tool running on the Google Cloud Platform. It is an easy way to quickly run your code in a production-ready, repeatable cloud orchestrator with minimal setup and without provisioning or paying for standby compute.

When to use it

You should use the Vertex orchestrator if:

  • you're already using GCP.

  • you're looking for a proven production-grade orchestrator.

  • you're looking for a UI in which you can track your pipeline runs.

  • you're looking for a managed solution for running your pipelines.

  • you're looking for a serverless solution for running your pipelines.

How to deploy it

Would you like to skip ahead and deploy a full ZenML cloud stack already, including a Vertex AI orchestrator? Check out the in-browser stack deployment wizard, the stack registration wizard, or the ZenML GCP Terraform module for a shortcut on how to deploy & register this stack component.

In order to use a Vertex AI orchestrator, you need to first deploy ZenML to the cloud. We recommend deploying ZenML in the same Google Cloud project as the Vertex infrastructure, but this is not required. You must ensure that you are connected to the remote ZenML server before using this stack component.

The only other thing necessary to use the ZenML Vertex orchestrator is enabling Vertex-relevant APIs on the Google Cloud project.
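For example, assuming you have the gcloud CLI installed and authenticated against the right project, enabling the core Vertex AI API looks like this (your setup may need more APIs, e.g. Cloud Storage, or scheduling-related services for scheduled pipelines):

```shell
# Enable the Vertex AI API on the project (placeholder project ID)
gcloud services enable aiplatform.googleapis.com --project=<PROJECT_ID>
```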

How to use it

To use the Vertex orchestrator, we need:

  • the ZenML gcp integration installed,

  • Docker installed and running (it is used to build the pipeline image),

  • a remote artifact store and a remote container registry as part of your stack,

  • GCP credentials with the proper permissions (covered in detail below),

  • the GCP project ID and location in which you want to run your Vertex AI pipelines.
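If the ZenML gcp integration is not installed yet, the usual way to get it (including the vertex orchestrator flavor) is:

```shell
zenml integration install gcp
```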

GCP credentials and permissions

This is without doubt the most involved part of using the Vertex orchestrator. In order to run pipelines on Vertex AI, you need to have a GCP user account and/or one or more GCP service accounts set up with proper permissions, depending on whether you wish to practice the principle of least privilege and distribute permissions across multiple service accounts.

You have three different options for providing credentials to the orchestrator:

  • use the gcloud CLI to authenticate locally with GCP

  • configure the orchestrator to use a service account key file to authenticate with GCP by setting the service_account_path parameter in the orchestrator configuration.

  • (recommended) configure a GCP Service Connector with GCP credentials and then link the Vertex AI Orchestrator stack component to the Service Connector.

This section explains the different components and GCP resources involved in running a Vertex AI pipeline and what permissions they need, then provides instructions for three different configuration use-cases:

  1. use the local gcloud CLI configured with your GCP user account, including the ability to schedule pipelines

  2. use a GCP Service Connector and a single service account with all permissions, including the ability to schedule pipelines

  3. use a GCP Service Connector and multiple service accounts for different permissions, including the ability to schedule pipelines

Vertex AI pipeline components

To understand what accounts you need to provision and why, let's look at the different components of the Vertex orchestrator:

  1. the ZenML client environment is the environment where you run the ZenML code responsible for building the pipeline Docker image and submitting the pipeline to Vertex AI, among other things. This is usually your local machine or some other environment used to automate running pipelines, like a CI/CD job. This environment needs to be able to authenticate with GCP and to have the necessary permissions to create a job in Vertex Pipelines (e.g. the Vertex AI User role). If you are planning to run pipelines on a schedule, the ZenML client environment also needs additional permissions:

    • the Storage Object Creator role, to be able to write the pipeline JSON file to the artifact store directly (NOTE: not needed if the Artifact Store is configured with credentials or is linked to a Service Connector)

  2. the Vertex AI pipeline environment is the GCP environment in which the pipeline steps themselves run. The Vertex AI pipeline runs in the context of a GCP service account which we'll call here the workload service account. The workload service account can be explicitly configured in the orchestrator configuration via the workload_service_account parameter. If it is omitted, the orchestrator will use the Compute Engine default service account for the GCP project in which the pipeline is running. This service account needs permissions to run a Vertex AI pipeline (e.g. the Vertex AI Service Agent role).

As you can see, multiple dedicated service accounts can be involved in running a Vertex AI pipeline: two, if you also use a service account to authenticate to GCP in the ZenML client environment. However, you can also keep it simple and use the same service account everywhere.

Configuration use-case: local gcloud CLI with user account

This configuration use-case assumes you have configured the gcloud CLI to authenticate locally with your GCP account (i.e. by running gcloud auth login). It also assumes that your GCP account has permissions to create a job in Vertex Pipelines (e.g. the Vertex AI User role).

This is the easiest way to configure the Vertex AI Orchestrator, but it has the following drawbacks:

  • the setup is not portable to other machines or reproducible by other users.

  • it uses the Compute Engine default service account, which is not recommended, given that it has a lot of permissions by default and is used by many other GCP services.

We can then register the orchestrator as follows:
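A sketch of the registration, with placeholder values in angle brackets (--synchronous=true makes the client wait for pipeline runs to finish):

```shell
zenml orchestrator register <ORCHESTRATOR_NAME> \
    --flavor=vertex \
    --project=<PROJECT_ID> \
    --location=<GCP_LOCATION> \
    --synchronous=true
```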

Configuration use-case: GCP Service Connector with single service account

This use-case assumes you have already configured a GCP service account with all the permissions described above: permissions to create a job in Vertex Pipelines (e.g. the Vertex AI User role), permissions to run a Vertex AI pipeline (e.g. the Vertex AI Service Agent role), and permissions to write to the artifact store (e.g. the Storage Object Creator role).

It also assumes you have already created a service account key for this service account and downloaded it to your local machine (e.g. in a connectors-vertex-ai.json file). This is not recommended if you are security-conscious: the principle of least privilege is not applied here, and the environment in which the pipeline steps run receives many permissions that it doesn't need.
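With the key file downloaded, a typical way to wire this up is to register a GCP Service Connector holding the key, register the orchestrator, and link the two (a sketch with placeholder names; the single service account doubles as the workload service account here):

```shell
zenml service-connector register <CONNECTOR_NAME> \
    --type gcp \
    --auth-method=service-account \
    --project_id=<PROJECT_ID> \
    --service_account_json=@connectors-vertex-ai.json \
    --resource-type gcp-generic

zenml orchestrator register <ORCHESTRATOR_NAME> \
    --flavor=vertex \
    --location=<GCP_LOCATION> \
    --synchronous=true \
    --workload_service_account=<SERVICE_ACCOUNT_NAME>@<PROJECT_ID>.iam.gserviceaccount.com

zenml orchestrator connect <ORCHESTRATOR_NAME> --connector <CONNECTOR_NAME>
```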

Configuration use-case: GCP Service Connector with different service accounts

This setup applies the principle of least privilege by using different service accounts with the minimum of permissions needed for the different components involved in running a Vertex AI pipeline. It also uses a GCP Service Connector to make the setup portable and reproducible. This configuration is a best-in-class setup that you would normally use in production, but it requires a lot more work to prepare.

This setup involves creating and configuring several GCP service accounts, which is a lot of work and can be error-prone. If you don't really need the added security, you can use the GCP Service Connector with a single service account instead.

The following GCP service accounts are needed:

  1. a "client" service account that has the following permissions:

  2. a "workload" service account that has permissions to run a Vertex AI pipeline, (e.g. the Vertex AI Service Agent role).

Alternative: Custom Roles for Maximum Security

For even more granular control, you can create custom roles instead of using the predefined roles:

Client Service Account Custom Permissions:

  • aiplatform.pipelineJobs.create

  • aiplatform.pipelineJobs.get

  • aiplatform.pipelineJobs.list

  • cloudfunctions.functions.create

  • storage.objects.create (for artifact store access)

Workload Service Account Custom Permissions:

  • aiplatform.customJobs.create

  • aiplatform.customJobs.get

  • aiplatform.customJobs.list

  • storage.objects.get

  • storage.objects.create

This provides the absolute minimum permissions required for Vertex AI pipeline operations.
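As an illustration, the client-side custom role could be created with gcloud and later bound to the client service account (the role ID and title below are hypothetical):

```shell
# Create a custom role carrying only the client-side permissions listed above
gcloud iam roles create zenml_vertex_client \
    --project=<PROJECT_ID> \
    --title="ZenML Vertex Client" \
    --permissions="aiplatform.pipelineJobs.create,aiplatform.pipelineJobs.get,aiplatform.pipelineJobs.list,cloudfunctions.functions.create,storage.objects.create"
```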

A key is also needed for the "client" service account. You can create a key for this service account and download it to your local machine (e.g. in a connectors-vertex-ai-client.json file).

With all the service accounts and the key ready, we can register the GCP Service Connector and Vertex AI orchestrator as follows:
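A sketch of the commands (placeholder names; note that the client key goes into the Service Connector, while the workload service account is only referenced by name):

```shell
zenml service-connector register <CONNECTOR_NAME> \
    --type gcp \
    --auth-method=service-account \
    --project_id=<PROJECT_ID> \
    --service_account_json=@connectors-vertex-ai-client.json \
    --resource-type gcp-generic

zenml orchestrator register <ORCHESTRATOR_NAME> \
    --flavor=vertex \
    --location=<GCP_LOCATION> \
    --synchronous=true \
    --workload_service_account=<WORKLOAD_SERVICE_ACCOUNT_NAME>@<PROJECT_ID>.iam.gserviceaccount.com

zenml orchestrator connect <ORCHESTRATOR_NAME> --connector <CONNECTOR_NAME>
```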

Configuring the stack

With the orchestrator registered, we can use it in our active stack:
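For example (the artifact store and container registry are assumed to be registered already):

```shell
# Register and activate a stack containing the new orchestrator
zenml stack register <STACK_NAME> \
    -o <ORCHESTRATOR_NAME> \
    -a <ARTIFACT_STORE_NAME> \
    -c <CONTAINER_REGISTRY_NAME> \
    --set
```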

ZenML will build a Docker image called <CONTAINER_REGISTRY_URI>/zenml:<PIPELINE_NAME> which includes your code, and use it to run your pipeline steps in Vertex AI. Check out this page if you want to learn more about how ZenML builds these images and how you can customize them.

You can now run any ZenML pipeline using the Vertex orchestrator:
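Any pipeline file run with the Vertex stack active is submitted to Vertex AI; a toy example (step and pipeline names are illustrative):

```python
from zenml import pipeline, step


@step
def trainer() -> str:
    return "my_model"


@pipeline
def vertex_example_pipeline():
    trainer()


if __name__ == "__main__":
    # With the Vertex stack active, this submits the run to Vertex AI Pipelines
    vertex_example_pipeline()
```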

Vertex UI

Vertex comes with its own UI that you can use to find further details about your pipeline runs, such as the logs of your steps.


For any runs executed on Vertex, you can get the URL to the Vertex UI in Python using the following code snippet:
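A sketch, assuming the URL is recorded under the orchestrator_url run-metadata key (the exact key and access pattern may vary between ZenML versions):

```python
from zenml.client import Client

pipeline_run = Client().get_pipeline_run("<PIPELINE_RUN_NAME>")
# Depending on your ZenML version, metadata values may need `.value` access
orchestrator_url = pipeline_run.run_metadata["orchestrator_url"].value
```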

Run pipelines on a schedule

The Vertex Pipelines orchestrator supports running pipelines on a schedule using its native scheduling capability.

How to schedule a pipeline
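A minimal scheduling sketch using ZenML's Schedule (the cron expression and time window are illustrative):

```python
from datetime import datetime, timedelta

from zenml import pipeline
from zenml.config.schedule import Schedule


@pipeline
def my_pipeline():
    ...


# Trigger a run at the start of every hour for one week
my_pipeline = my_pipeline.with_options(
    schedule=Schedule(
        cron_expression="0 * * * *",
        start_time=datetime.now() + timedelta(minutes=10),
        end_time=datetime.now() + timedelta(days=7),
    )
)
my_pipeline()
```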

The start_time and end_time timestamp parameters are both optional and are specified in local time. They define the time window in which pipeline runs will be triggered. If they are not specified, the schedule will trigger runs indefinitely.

The cron_expression parameter supports timezones. For example, the expression TZ=Europe/Paris 0 10 * * * will trigger runs at 10:00 in the Europe/Paris timezone.

How to update/delete a scheduled pipeline

Note that ZenML is only involved in creating the schedule; maintaining the schedule's lifecycle afterwards is the responsibility of the user.

In order to cancel a scheduled Vertex pipeline, you need to manually delete the schedule in Vertex AI (via the UI or the CLI). Here is an example (WARNING: this will delete all schedules if you run it):
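One way to do this from Python with the google-cloud-aiplatform SDK, matching the warning above (it deletes every schedule in the given project and region):

```python
from google.cloud import aiplatform

# WARNING: deletes ALL Vertex AI schedules in this project and region
aiplatform.init(project="<PROJECT_ID>", location="<GCP_LOCATION>")
for schedule in aiplatform.PipelineJobSchedule.list():
    schedule.delete()
```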

Additional configuration

For additional configuration of the Vertex orchestrator, you can pass VertexOrchestratorSettings which allows you to configure labels for your Vertex Pipeline jobs or specify which GPU to use.
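For example, attaching labels at the pipeline level might look like this (the "orchestrator.vertex" settings key follows ZenML's flavor-scoped settings convention; older versions accepted plain "orchestrator"):

```python
from zenml import pipeline
from zenml.integrations.gcp.flavors.vertex_orchestrator_flavor import (
    VertexOrchestratorSettings,
)

vertex_settings = VertexOrchestratorSettings(labels={"team": "ml", "env": "dev"})


@pipeline(settings={"orchestrator.vertex": vertex_settings})
def my_pipeline():
    ...
```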

If your pipeline steps have certain hardware requirements, you can specify them as ResourceSettings:
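For instance, requesting CPU and memory for a single step (the amounts are illustrative):

```python
from zenml import step
from zenml.config import ResourceSettings


@step(settings={"resources": ResourceSettings(cpu_count=8, memory="16GB")})
def training_step():
    ...
```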

To run your pipeline (or some steps of it) on a GPU, you will need to set both a node selector and the GPU count as follows:
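A sketch combining the node selector with a GPU count (the accelerator type is an example value; see the list linked below):

```python
from zenml import step
from zenml.config import ResourceSettings
from zenml.integrations.gcp.flavors.vertex_orchestrator_flavor import (
    VertexOrchestratorSettings,
)

vertex_settings = VertexOrchestratorSettings(
    pod_settings={
        # Node selector picking the accelerator type for this step
        "node_selectors": {
            "cloud.google.com/gke-accelerator": "NVIDIA_TESLA_T4",
        },
    }
)


@step(
    settings={
        "orchestrator.vertex": vertex_settings,
        "resources": ResourceSettings(gpu_count=1),
    }
)
def gpu_step():
    ...
```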

You can find available accelerator types here.

Using Custom Job Parameters

For more advanced hardware configuration, you can use VertexCustomJobParameters to customize each step's execution environment. This allows you to specify detailed requirements like boot disk size, accelerator type, machine type, and more without needing a separate step operator.
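A step-level sketch (the import paths follow the ZenML GCP integration layout; treat them as assumptions if your version differs):

```python
from zenml import step
from zenml.integrations.gcp.flavors.vertex_orchestrator_flavor import (
    VertexOrchestratorSettings,
)
from zenml.integrations.gcp.vertex_custom_job_parameters import (
    VertexCustomJobParameters,
)


@step(
    settings={
        "orchestrator.vertex": VertexOrchestratorSettings(
            custom_job_parameters=VertexCustomJobParameters(
                machine_type="n1-standard-8",
                accelerator_type="NVIDIA_TESLA_T4",
                accelerator_count=1,
                boot_disk_size_gb=200,
            )
        )
    }
)
def heavy_step():
    ...
```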

You can also specify these parameters at pipeline level to apply them to all steps:
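The same settings object attaches at the pipeline level instead (reusing the imports from the sketches above, plus zenml's pipeline decorator):

```python
@pipeline(
    settings={
        "orchestrator.vertex": VertexOrchestratorSettings(
            custom_job_parameters=VertexCustomJobParameters(
                machine_type="n1-standard-8",
            )
        )
    }
)
def my_pipeline():
    ...
```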

The VertexCustomJobParameters supports the following common configuration options:

  • boot_disk_size_gb: Size of the boot disk in GB (default: 100)

  • boot_disk_type: Type of disk ("pd-standard", "pd-ssd", etc.)

  • machine_type: Machine type for computation (e.g., "n1-standard-4")

  • accelerator_type: Type of accelerator (e.g., "NVIDIA_TESLA_T4", "NVIDIA_TESLA_A100")

  • accelerator_count: Number of accelerators to attach

  • service_account: Service account to use for the job

  • persistent_resource_id: ID of a persistent resource, for faster job startup

Advanced Custom Job Parameters

For advanced scenarios, you can use additional_training_job_args to pass additional parameters directly to the underlying Google Cloud Pipeline Components library:
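For instance (the pass-through keys shown are examples of GCPC v1 parameters; consult the linked docs for the full list):

```python
VertexCustomJobParameters(
    machine_type="n1-standard-8",
    additional_training_job_args={
        # Forwarded verbatim to create_custom_training_job_from_component
        "timeout": "86400s",        # example GCPC parameter
        "enable_web_access": True,  # example GCPC parameter
    },
)
```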

These advanced parameters are passed directly to the Google Cloud Pipeline Components library's create_custom_training_job_from_component function. This approach lets you access new features of the Google API without requiring ZenML updates.

When using custom_job_parameters, ZenML automatically applies certain configurations from your orchestrator:

  • Network Configuration: If you've set network in your Vertex orchestrator configuration, it will be automatically applied to all custom jobs unless you explicitly override it in additional_training_job_args.

  • Encryption Specification: If you've set encryption_spec_key_name in your orchestrator configuration, it will be applied to custom jobs for consistent encryption.

  • Service Account: For non-persistent resource jobs, if no service account is specified in the custom job parameters, the workload_service_account from the orchestrator configuration will be used.

This inheritance mechanism ensures consistent configuration across your pipeline steps, maintaining connectivity to GCP resources (like databases), security settings, and compute resources without requiring manual specification for each step.

For a complete list of parameters supported by the underlying function, refer to the Google Pipeline Components SDK V1 docs.

Note that when using custom job parameters with persistent_resource_id, you must always specify a service_account as well.

The additional_training_job_args field provides future-proofing for your ZenML pipelines. If Google adds new parameters to their API, you can immediately use them without waiting for ZenML updates. This is especially useful for accessing new hardware configurations, networking features, or security settings as they become available.

Enabling CUDA for GPU-backed hardware

Note that if you wish to use this orchestrator to run steps on a GPU, you will need to follow the instructions on this page to ensure that it works. This requires some extra settings customization and is essential for CUDA to be enabled so that the GPU can deliver its full acceleration.

Using Persistent Resources for Faster Development

When developing ML pipelines that use Vertex AI, the startup time for each step can be significant since Vertex needs to provision new compute resources for each run. To speed up development iterations, you can use Vertex AI's Persistent Resources feature, which keeps compute resources warm between runs.

To use persistent resources with the Vertex orchestrator, you first need to create a persistent resource using the GCP Cloud UI or by following the instructions in the GCP docs. Next, you'll need to configure your orchestrator to run on the persistent resource. This can be done either through the dashboard or the CLI, in which case it applies to all pipelines run using this orchestrator, or dynamically in code for a specific pipeline or even individual steps.

Configure the orchestrator using the CLI
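A sketch of the CLI route (the JSON keys mirror the VertexCustomJobParameters fields; as noted above, a service account must accompany a persistent resource ID):

```shell
zenml orchestrator update <ORCHESTRATOR_NAME> \
    --custom_job_parameters='{"persistent_resource_id": "<PERSISTENT_RESOURCE_ID>", "service_account": "<SERVICE_ACCOUNT_EMAIL>", "machine_type": "n1-standard-4"}'
```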

Configure the orchestrator using the dashboard

Navigate to the Stacks section in your ZenML dashboard and either create a new Vertex orchestrator or update an existing one. During the creation/update, set the persistent resource ID and other values in the custom_job_parameters attribute.

Configure the orchestrator dynamically in code
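For example, pinning a single step to the persistent resource (reusing the imports from the custom-job-parameters sketches above):

```python
@step(
    settings={
        "orchestrator.vertex": VertexOrchestratorSettings(
            custom_job_parameters=VertexCustomJobParameters(
                persistent_resource_id="<PERSISTENT_RESOURCE_ID>",
                # A service account is required whenever a persistent
                # resource ID is set
                service_account="<SERVICE_ACCOUNT_EMAIL>",
                machine_type="n1-standard-4",
            )
        )
    }
)
def fast_iteration_step():
    ...
```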

If you need to explicitly specify that no persistent resource should be used, set persistent_resource_id to an empty string:
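For example:

```python
# Opt a step out of any persistent resource configured on the orchestrator
VertexCustomJobParameters(persistent_resource_id="")
```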

Using a persistent resource is particularly useful when you're developing locally and want to iterate quickly on steps that need cloud resources. The startup time of the job can be extremely quick.
