Google Cloud VertexAI Orchestrator

Orchestrating your pipelines to run on Vertex AI.

Vertex AI Pipelines is a serverless ML workflow tool running on the Google Cloud Platform. It is an easy way to quickly run your code in a production-ready, repeatable cloud orchestrator that requires minimal setup without provisioning and paying for standby compute.

This component is only meant to be used within the context of a remote ZenML deployment scenario. Usage with a local ZenML deployment may lead to unexpected behavior!

When to use it

You should use the Vertex orchestrator if:

  • you're already using GCP.

  • you're looking for a proven production-grade orchestrator.

  • you're looking for a UI in which you can track your pipeline runs.

  • you're looking for a managed solution for running your pipelines.

  • you're looking for a serverless solution for running your pipelines.

How to deploy it

Would you like to skip ahead and deploy a full ZenML cloud stack already, including a Vertex AI orchestrator? Check out the in-browser stack deployment wizard, the stack registration wizard, or the ZenML GCP Terraform module for a shortcut on how to deploy & register this stack component.

In order to use a Vertex AI orchestrator, you need to first deploy ZenML to the cloud. It is recommended to deploy ZenML in the same Google Cloud project as the Vertex infrastructure, but it is not necessary to do so. You must ensure that you are connected to the remote ZenML server before using this stack component.

The only other thing necessary to use the ZenML Vertex orchestrator is enabling Vertex-relevant APIs on the Google Cloud project.
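Assuming you have the gcloud CLI available, enabling the Vertex AI API can be done with a single command. The exact set of APIs your stack needs may vary (Cloud Storage and a container registry API are typically enabled as well), so treat this as a minimal sketch:

# Enable the Vertex AI API for the currently active project
gcloud services enable aiplatform.googleapis.com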

How to use it

To use the Vertex orchestrator, we need:

  • The ZenML gcp integration installed. If you haven't done so, run

    zenml integration install gcp
  • Docker installed and running.

  • A remote artifact store as part of your stack.

  • A remote container registry as part of your stack.

  • GCP credentials with proper permissions.

  • The GCP project ID and location in which you want to run your Vertex AI pipelines.

GCP credentials and permissions

This part is without doubt the most involved part of using the Vertex orchestrator. In order to run pipelines on Vertex AI, you need to have a GCP user account and/or one or more GCP service accounts set up with proper permissions, depending on whether you wish to practice the principle of least privilege and distribute permissions across multiple service accounts.

You also have three different options to provide credentials to the orchestrator:

  • use the gcloud CLI to authenticate locally with GCP

  • configure the orchestrator to use a service account key file to authenticate with GCP by setting the service_account_path parameter in the orchestrator configuration.

  • (recommended) configure a GCP Service Connector with GCP credentials and then link the Vertex AI Orchestrator stack component to the Service Connector.

This section explains the different components and GCP resources involved in running a Vertex AI pipeline and what permissions they need, then provides instructions for three different configuration use-cases:

  1. use the local gcloud CLI configured with your GCP user account, including the ability to schedule pipelines

  2. use a GCP Service Connector and a single service account with all permissions, including the ability to schedule pipelines

  3. use a GCP Service Connector and multiple service accounts for different permissions, including the ability to schedule pipelines

Vertex AI pipeline components

To understand what accounts you need to provision and why, let's look at the different components of the Vertex orchestrator:

  • The ZenML client environment is the environment where you run the ZenML code responsible for building the pipeline Docker image and submitting the pipeline to Vertex AI, among other things. This is usually your local machine or some other environment used to automate running pipelines, like a CI/CD job. This environment needs to be able to authenticate with GCP and needs to have the necessary permissions to create a job in Vertex Pipelines (e.g. the Vertex AI User role). If you are planning to run pipelines on a schedule, the ZenML client environment also needs additional permissions:

    • the Storage Object Creator Role to be able to write the pipeline JSON file to the artifact store directly (NOTE: not needed if the Artifact Store is configured with credentials or is linked to a Service Connector)

  • The Vertex AI pipeline environment is the GCP environment in which the pipeline steps themselves are running. The Vertex AI pipeline runs in the context of a GCP service account which we'll call here the workload service account. The workload service account can be explicitly configured in the orchestrator configuration via the workload_service_account parameter. If it is omitted, the orchestrator will use the Compute Engine default service account for the GCP project in which the pipeline is running. This service account needs to have the following permissions:

    • permissions to run a Vertex AI pipeline (e.g. the Vertex AI Service Agent role).

As you can see, there can be dedicated service accounts involved in running a Vertex AI pipeline. That's two service accounts if you also use a service account to authenticate to GCP in the ZenML client environment. However, you can keep it simple and use the same service account everywhere.

Configuration use-case: local gcloud CLI with user account

This configuration use-case assumes you have configured the gcloud CLI to authenticate locally with your GCP account (i.e. by running gcloud auth login). It also assumes the following:

  • your GCP account has permissions to create a job in Vertex Pipelines (e.g. the Vertex AI User role).

  • the Compute Engine default service account for the GCP project in which the pipeline is running is updated with additional permissions required to run a Vertex AI pipeline (e.g. the Vertex AI Service Agent role).

This is the easiest way to configure the Vertex AI Orchestrator, but it has the following drawbacks:

  • the setup is not portable on other machines and reproducible by other users.

  • it uses the Compute Engine default service account, which is not recommended, given that it has a lot of permissions by default and is used by many other GCP services.
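Before registering the orchestrator, make sure your local gcloud CLI is authenticated and pointing at the right project. A minimal sketch, assuming the gcloud CLI is installed:

# Authenticate your user account and select the project to run Vertex pipelines in
gcloud auth login
gcloud config set project <PROJECT_ID>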

We can then register the orchestrator as follows:

zenml orchestrator register <ORCHESTRATOR_NAME> \
    --flavor=vertex \
    --project=<PROJECT_ID> \
    --location=<GCP_LOCATION> \
    --synchronous=true

Configuration use-case: GCP Service Connector with single service account

This use-case assumes you have already configured a GCP service account with the following permissions:

  • permissions to create a job in Vertex Pipelines (e.g. the Vertex AI User role).

  • permissions to run a Vertex AI pipeline (e.g. the Vertex AI Service Agent role).

  • the Storage Object Creator Role to be able to write the pipeline JSON file to the artifact store directly.

It also assumes you have already created a service account key for this service account and downloaded it to your local machine (e.g. in a connectors-vertex-ai-workload.json file). This is not recommended if you are conscious about security. The principle of least privilege is not applied here and the environment in which the pipeline steps are running has many permissions that it doesn't need.

zenml service-connector register <CONNECTOR_NAME> --type gcp --auth-method=service-account --project_id=<PROJECT_ID> --service_account_json=@connectors-vertex-ai-workload.json --resource-type gcp-generic

zenml orchestrator register <ORCHESTRATOR_NAME> \
    --flavor=vertex \
    --location=<GCP_LOCATION> \
    --synchronous=true \
    --workload_service_account=<SERVICE_ACCOUNT_NAME>@<PROJECT_NAME>.iam.gserviceaccount.com 
    
zenml orchestrator connect <ORCHESTRATOR_NAME> --connector <CONNECTOR_NAME>
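After connecting the orchestrator, you can optionally sanity-check that the connector actually grants access to GCP resources using the service connector verify command:

zenml service-connector verify <CONNECTOR_NAME>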

Configuration use-case: GCP Service Connector with different service accounts

This setup applies the principle of least privilege by using different service accounts with the minimum of permissions needed for the different components involved in running a Vertex AI pipeline. It also uses a GCP Service Connector to make the setup portable and reproducible. This configuration is a best-in-class setup that you would normally use in production, but it requires a lot more work to prepare.

This setup involves creating and configuring several GCP service accounts, which is a lot of work and can be error prone. If you don't really need the added security, you can use the GCP Service Connector with a single service account instead.

The following GCP service accounts are needed:

  1. a "client" service account that has the following permissions:

A key is also needed for the "client" service account. You can create a key for this service account and download it to your local machine (e.g. in a connectors-vertex-ai-workload.json file).

With all the service accounts and the key ready, we can register the GCP Service Connector and Vertex AI orchestrator as follows:

zenml service-connector register <CONNECTOR_NAME> --type gcp --auth-method=service-account --project_id=<PROJECT_ID> --service_account_json=@connectors-vertex-ai-workload.json --resource-type gcp-generic

zenml orchestrator register <ORCHESTRATOR_NAME> \
    --flavor=vertex \
    --location=<GCP_LOCATION> \
    --synchronous=true \
    --workload_service_account=<WORKLOAD_SERVICE_ACCOUNT_NAME>@<PROJECT_NAME>.iam.gserviceaccount.com 

zenml orchestrator connect <ORCHESTRATOR_NAME> --connector <CONNECTOR_NAME>

Configuring the stack

With the orchestrator registered, we can use it in our active stack:

# Register and activate a stack with the new orchestrator
zenml stack register <STACK_NAME> -o <ORCHESTRATOR_NAME> ... --set

You can now run any ZenML pipeline using the Vertex orchestrator:

python file_that_runs_a_zenml_pipeline.py

ZenML will build a Docker image called <CONTAINER_REGISTRY_URI>/zenml:<PIPELINE_NAME> which includes your code and use it to run your pipeline steps in Vertex AI. Check out this page if you want to learn more about how ZenML builds these images and how you can customize them.
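For reference, a file_that_runs_a_zenml_pipeline.py can be as small as the following sketch (the step and pipeline names here are placeholders, not part of the Vertex integration):

from zenml import pipeline, step

@step
def load_data() -> dict:
    # Any Python logic; step outputs are stored in the artifact store
    return {"features": [1, 2, 3], "labels": [0, 1, 0]}

@step
def train_model(data: dict) -> None:
    # Placeholder training logic
    print(f"Training on {len(data['features'])} examples")

@pipeline
def my_vertex_pipeline():
    data = load_data()
    train_model(data)

if __name__ == "__main__":
    # Runs on Vertex AI because the active stack uses the Vertex orchestrator
    my_vertex_pipeline()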

Vertex UI

Vertex comes with its own UI that you can use to find further details about your pipeline runs, such as the logs of your steps.

For any runs executed on Vertex, you can get the URL to the Vertex UI in Python using the following code snippet:

from zenml.client import Client

pipeline_run = Client().get_pipeline_run("<PIPELINE_RUN_NAME>")
orchestrator_url = pipeline_run.run_metadata["orchestrator_url"]

Run pipelines on a schedule

The Vertex Pipelines orchestrator supports running pipelines on a schedule using its native scheduling capability.

How to schedule a pipeline

from datetime import datetime, timedelta

from zenml import pipeline
from zenml.config.schedule import Schedule

@pipeline
def first_pipeline():
    ...

# Run a pipeline every 5th minute
first_pipeline = first_pipeline.with_options(
    schedule=Schedule(
        cron_expression="*/5 * * * *"
    )
)
first_pipeline()

@pipeline
def second_pipeline():
    ...

# Run a pipeline every hour
# starting in one day from now and ending in three days from now
second_pipeline = second_pipeline.with_options(
    schedule=Schedule(
        cron_expression="0 * * * *",
        start_time=datetime.now() + timedelta(days=1),
        end_time=datetime.now() + timedelta(days=3),
    )
)
second_pipeline()

The Vertex orchestrator only supports the cron_expression, start_time (optional) and end_time (optional) parameters in the Schedule object, and will ignore all other parameters supplied to define the schedule.

The cron_expression parameter supports timezones. For example, the expression TZ=Europe/Paris 0 10 * * * will trigger runs at 10:00 in the Europe/Paris timezone.
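A minimal sketch of a timezone-aware schedule, reusing the first_pipeline definition and the Schedule class from the example above:

from zenml.config.schedule import Schedule

# Run every day at 10:00 Paris time; the TZ prefix is interpreted by Vertex
paris_schedule = Schedule(cron_expression="TZ=Europe/Paris 0 10 * * *")
first_pipeline = first_pipeline.with_options(schedule=paris_schedule)
first_pipeline()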

The start_time and end_time timestamp parameters are both optional and are to be specified in local time. They define the time window in which the pipeline runs will be triggered. If they are not specified, the pipeline will run indefinitely.

How to update/delete a scheduled pipeline

Note that ZenML only gets involved to schedule a run, but maintaining the lifecycle of the schedule is the responsibility of the user.

In order to cancel a scheduled Vertex pipeline, you need to manually delete the schedule in VertexAI (via the UI or the CLI). Here is an example (WARNING: Will delete all schedules if you run this):

from google.cloud import aiplatform
from zenml.client import Client

def delete_all_schedules():
    # Initialize ZenML client
    zenml_client = Client()
    # Get all ZenML schedules
    zenml_schedules = zenml_client.list_schedules()
    
    if not zenml_schedules:
        print("No ZenML schedules to delete.")
        return
    
    print(f"\nFound {len(zenml_schedules)} ZenML schedules to process...\n")
    
    # Process each ZenML schedule
    for zenml_schedule in zenml_schedules:
        schedule_name = zenml_schedule.name
        print(f"Processing ZenML schedule: {schedule_name}")
        
        try:
            # First delete the corresponding Vertex AI schedule
            vertex_filter = f'display_name="{schedule_name}"'
            vertex_schedules = aiplatform.PipelineJobSchedule.list(
                filter=vertex_filter,
                order_by='create_time desc',
                location='europe-west1'  # adjust to the region where your schedules run
            )
            
            if vertex_schedules:
                print(f"  Found {len(vertex_schedules)} matching Vertex schedules")
                for vertex_schedule in vertex_schedules:
                    try:
                        vertex_schedule.delete()
                        print(f"  ✓ Deleted Vertex schedule: {vertex_schedule.display_name}")
                    except Exception as e:
                        print(f"  ✗ Failed to delete Vertex schedule {vertex_schedule.display_name}: {e}")
            else:
                print(f"  No matching Vertex schedules found for {schedule_name}")
            
            # Then delete the ZenML schedule
            zenml_client.delete_schedule(zenml_schedule.id)
            print(f"  ✓ Deleted ZenML schedule: {schedule_name}")
            
        except Exception as e:
            print(f"  ✗ Failed to process {schedule_name}: {e}")
    
    print("\nSchedule cleanup completed!")

if __name__ == "__main__":
    delete_all_schedules()

Additional configuration

For additional configuration of the Vertex orchestrator, you can pass VertexOrchestratorSettings which allows you to configure labels for your Vertex Pipeline jobs or specify which GPU to use.

from zenml.integrations.gcp.flavors.vertex_orchestrator_flavor import (
   VertexOrchestratorSettings
)

vertex_settings = VertexOrchestratorSettings(labels={"key": "value"})

If your pipeline steps have certain hardware requirements, you can specify them as ResourceSettings:

from zenml.config import ResourceSettings

resource_settings = ResourceSettings(cpu_count=8, memory="16GB")

To run your pipeline (or some steps of it) on a GPU, you will need to set both a node selector and the GPU count as follows:

from zenml import step, pipeline

from zenml.config import ResourceSettings
from zenml.integrations.gcp.flavors.vertex_orchestrator_flavor import (
    VertexOrchestratorSettings
)

vertex_settings = VertexOrchestratorSettings(
    pod_settings={
        "node_selectors": {
            "cloud.google.com/gke-accelerator": "NVIDIA_TESLA_A100"
        },
    }
)
resource_settings = ResourceSettings(gpu_count=1)

# Either specify settings on step-level
@step(
    settings={
        "orchestrator": vertex_settings,
        "resources": resource_settings,
    }
)
def my_step():
    ...

# OR specify on pipeline-level
@pipeline(
    settings={
        "orchestrator": vertex_settings,
        "resources": resource_settings,
    }
)
def my_pipeline():
    ...

You can find the available accelerator types in the Vertex AI documentation.

Using Custom Job Parameters

For more advanced hardware configuration, you can use VertexCustomJobParameters to customize each step's execution environment. This allows you to specify detailed requirements like boot disk size, accelerator type, machine type, and more without needing a separate step operator.

from zenml.integrations.gcp.vertex_custom_job_parameters import (
    VertexCustomJobParameters,
)
from zenml import step, pipeline
from zenml.integrations.gcp.flavors.vertex_orchestrator_flavor import (
    VertexOrchestratorSettings
)

# Create settings with a larger boot disk (1TB)
large_disk_settings = VertexOrchestratorSettings(
    custom_job_parameters=VertexCustomJobParameters(
        boot_disk_size_gb=1000,  # 1TB disk
        boot_disk_type="pd-standard",  # Standard persistent disk (cheaper)
        machine_type="n1-standard-8"
    )
)

# Create settings with GPU acceleration
gpu_settings = VertexOrchestratorSettings(
    custom_job_parameters=VertexCustomJobParameters(
        accelerator_type="NVIDIA_TESLA_A100",
        accelerator_count=1,
        machine_type="n1-standard-8",
        boot_disk_size_gb=200  # Larger disk for GPU workloads
    )
)

# Step that needs a large disk but no GPU
@step(settings={"orchestrator": large_disk_settings})
def data_processing_step():
    # Process large datasets that require a lot of disk space
    ...

# Step that needs GPU acceleration
@step(settings={"orchestrator": gpu_settings})
def training_step():
    # Train ML model using GPU
    ...

# Define pipeline that uses both steps
@pipeline()
def my_pipeline():
    data = data_processing_step()
    model = training_step(data)
    ...

You can also specify these parameters at pipeline level to apply them to all steps:

@pipeline(
    settings={
        "orchestrator": VertexOrchestratorSettings(
            custom_job_parameters=VertexCustomJobParameters(
                boot_disk_size_gb=500,  # 500GB disk for all steps
                machine_type="n1-standard-4"
            )
        )
    }
)
def my_pipeline():
    ...

The VertexCustomJobParameters supports the following common configuration options:

Parameter                  Description
boot_disk_size_gb          Size of the boot disk in GB (default: 100)
boot_disk_type             Type of disk ("pd-standard", "pd-ssd", etc.)
machine_type               Machine type for computation (e.g., "n1-standard-4")
accelerator_type           Type of accelerator (e.g., "NVIDIA_TESLA_T4", "NVIDIA_TESLA_A100")
accelerator_count          Number of accelerators to attach
service_account            Service account to use for the job
persistent_resource_id     ID of persistent resource for faster job startup
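As an illustration of combining a few of these options, here is a sketch of a step that pins a machine type and a dedicated workload service account (the service account name is a placeholder you would replace with your own):

from zenml import step
from zenml.integrations.gcp.vertex_custom_job_parameters import (
    VertexCustomJobParameters,
)
from zenml.integrations.gcp.flavors.vertex_orchestrator_flavor import (
    VertexOrchestratorSettings
)

# Run this step on a high-memory machine under a specific service account
memory_heavy_settings = VertexOrchestratorSettings(
    custom_job_parameters=VertexCustomJobParameters(
        machine_type="n1-highmem-16",
        boot_disk_size_gb=200,
        service_account="<WORKLOAD_SERVICE_ACCOUNT_NAME>@<PROJECT_NAME>.iam.gserviceaccount.com",
    )
)

@step(settings={"orchestrator": memory_heavy_settings})
def feature_engineering_step():
    ...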

Advanced Custom Job Parameters

For advanced scenarios, you can use additional_training_job_args to pass additional parameters directly to the underlying Google Cloud Pipeline Components library:

@step(
    settings={
        "orchestrator": VertexOrchestratorSettings(
            custom_job_parameters=VertexCustomJobParameters(
                machine_type="n1-standard-8",
                # Advanced parameters passed directly to create_custom_training_job_from_component
                additional_training_job_args={
                    "timeout": "86400s",  # 24 hour timeout
                    "network": "projects/12345/global/networks/my-vpc",
                    "enable_web_access": True,
                    "reserved_ip_ranges": ["192.168.0.0/16"],
                    "base_output_directory": "gs://my-bucket/outputs",
                    "labels": {"team": "ml-research", "project": "image-classification"}
                }
            )
        )
    }
)
def my_advanced_step():
    ...

These advanced parameters are passed directly to the Google Cloud Pipeline Components library's create_custom_training_job_from_component function. For a complete list of parameters supported by the underlying function, refer to the Google Pipeline Components SDK V1 docs.

If you specify parameters in additional_training_job_args that are also defined as explicit attributes (like machine_type or boot_disk_size_gb), the values in additional_training_job_args will override the explicit values. For example:

VertexCustomJobParameters(
    machine_type="n1-standard-4",  # This will be overridden
    additional_training_job_args={
        "machine_type": "n1-standard-16"  # This takes precedence
    }
)

The resulting machine type will be "n1-standard-16". When this happens, ZenML will log a warning at runtime to alert you of the parameter override, which helps avoid confusion about which configuration values are actually being used.

When using custom_job_parameters, ZenML automatically applies certain configurations from your orchestrator:

  • Network Configuration: If you've set network in your Vertex orchestrator configuration, it will be automatically applied to all custom jobs unless you explicitly override it in additional_training_job_args.

  • Encryption Specification: If you've set encryption_spec_key_name in your orchestrator configuration, it will be applied to custom jobs for consistent encryption.

  • Service Account: For non-persistent resource jobs, if no service account is specified in the custom job parameters, the workload_service_account from the orchestrator configuration will be used.

This inheritance mechanism ensures consistent configuration across your pipeline steps, maintaining connectivity to GCP resources (like databases), security settings, and compute resources without requiring manual specification for each step.

Note that when using custom job parameters with persistent_resource_id, you must always specify a service_account as well.

The additional_training_job_args field provides future-proofing for your ZenML pipelines. If Google adds new parameters to their API, you can immediately use them without waiting for ZenML updates. This is especially useful for accessing new hardware configurations, networking features, or security settings as they become available.

Enabling CUDA for GPU-backed hardware

Note that if you wish to use this orchestrator to run steps on a GPU, you will need to follow the instructions on this page to ensure that it works. It requires adding some extra settings customization and is essential to enable CUDA for the GPU to give its full acceleration.

Using Persistent Resources for Faster Development

When developing ML pipelines that use Vertex AI, the startup time for each step can be significant, since Vertex needs to provision new compute resources for each run. To speed up development iterations, you can use Vertex AI's Persistent Resources feature, which keeps compute resources warm between runs.

To use persistent resources with the Vertex orchestrator, you first need to create a persistent resource using the GCP Cloud UI or by following the instructions in the GCP docs. Next, you'll need to configure your orchestrator to run on the persistent resource. This can be done either through the dashboard or the CLI, in which case it applies to all pipelines run with this orchestrator, or dynamically in code for a specific pipeline or even just single steps.
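If you prefer the CLI over the Cloud UI, the gcloud CLI offers an ai persistent-resources create command. The flags below are an assumption based on the GCP documentation and may differ between gcloud versions, so double-check them against gcloud ai persistent-resources create --help before relying on this sketch:

# Create a small persistent resource that keeps one n1-standard-4 node warm
gcloud ai persistent-resources create \
    --persistent-resource-id=<PERSISTENT_RESOURCE_ID> \
    --display-name="zenml-dev-resource" \
    --project=<PROJECT_ID> \
    --region=<GCP_LOCATION> \
    --resource-pool-spec="replica-count=1,machine-type=n1-standard-4"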

Note that a service account with permissions to access the persistent resource is mandatory, so make sure to always include it in the configuration:

Configure the orchestrator using the CLI

# You can also use `zenml orchestrator update`
zenml orchestrator register <NAME> -f vertex --custom_job_parameters='{"persistent_resource_id": "<PERSISTENT_RESOURCE_ID>", "service_account": "<SERVICE_ACCOUNT_NAME>", "machine_type": "n1-standard-4", "boot_disk_type": "pd-standard"}'

Configure the orchestrator using the dashboard

Navigate to the Stacks section in your ZenML dashboard and either create a new Vertex orchestrator or update an existing one. During the creation/update, set the persistent resource ID and other values in the custom_job_parameters attribute.

Configure the orchestrator dynamically in code

from zenml.integrations.gcp.vertex_custom_job_parameters import (
    VertexCustomJobParameters,
)
from zenml.integrations.gcp.flavors.vertex_orchestrator_flavor import (
    VertexOrchestratorSettings
)

# Configure for the pipeline which applies to all steps
@pipeline(
    settings={
        "orchestrator": VertexOrchestratorSettings(
            custom_job_parameters=VertexCustomJobParameters(
                persistent_resource_id="<PERSISTENT_RESOURCE_ID>",
                service_account="<SERVICE_ACCOUNT_NAME>",
                machine_type="n1-standard-4",
                boot_disk_type="pd-standard"
            )
        )
    }
)
def my_pipeline():
    ...


# Configure for a single step
@step(
    settings={
        "orchestrator": VertexOrchestratorSettings(
            custom_job_parameters=VertexCustomJobParameters(
                persistent_resource_id="<PERSISTENT_RESOURCE_ID>",
                service_account="<SERVICE_ACCOUNT_NAME>",
                machine_type="n1-standard-4",
                boot_disk_type="pd-standard"
            )
        )
    }
)
def my_step():
    ...

If you need to explicitly specify that no persistent resource should be used, set persistent_resource_id to an empty string:

@step(
    settings={
        "orchestrator": VertexOrchestratorSettings(
            custom_job_parameters=VertexCustomJobParameters(
                persistent_resource_id="",  # Explicitly not using a persistent resource
                boot_disk_size_gb=1000,  # Set a large disk
                machine_type="n1-standard-8"
            )
        )
    }
)
def my_step():
    ...

Using a persistent resource is particularly useful when you're developing locally and want to iterate quickly on steps that need cloud resources. The startup time of the job can be extremely quick.

When using persistent resources (persistent_resource_id specified), you must always include a service_account. Conversely, when explicitly setting persistent_resource_id="" to avoid using persistent resources, ZenML will automatically set the service account to an empty string to avoid Vertex API errors - so don't set the service account in this case.

Remember that persistent resources continue to incur costs as long as they're running, even when idle. Make sure to monitor your usage and configure appropriate idle timeout periods.
