Run on GCP
A simple guide to quickly set up a minimal stack on GCP.
This page walks you through setting up a minimal production stack on GCP. In just a few simple steps you will create a service account with specifically-scoped permissions that ZenML can use to authenticate with the relevant GCP resources.
Would you like to skip ahead and deploy a full GCP ZenML cloud stack already?
Check out the in-browser stack deployment wizard, the stack registration wizard, or the ZenML GCP Terraform module for a shortcut on how to deploy & register this stack.
While this guide focuses on Google Cloud, we are seeking contributors to create a similar guide for other cloud providers. If you are interested, please create a pull request over on GitHub.
1) Choose a GCP project
In the Google Cloud console, on the project selector page, select or create a Google Cloud project. Make sure a billing account is attached to this project to allow the use of some APIs.
If you prefer the CLI, you can create and configure the project from there instead.
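A minimal sketch using the gcloud CLI, assuming a placeholder project ID and billing account ID (older gcloud releases expose the billing command under `gcloud beta billing`):

```shell
# Create a new project and make it the active project for gcloud
gcloud projects create zenml-gcp-demo --set-as-default

# Attach a billing account so the required APIs can be used
gcloud billing projects link zenml-gcp-demo \
    --billing-account=XXXXXX-XXXXXX-XXXXXX
```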
If you don't plan to keep the resources that you create in this procedure, create a new project. After you finish these steps, you can delete the project, thereby removing all resources associated with the project.
2) Enable GCloud APIs
The following APIs will need to be enabled within your chosen GCP project (a CLI sketch for enabling them follows this list).
Cloud Functions API # For the vertex orchestrator
Cloud Run Admin API # For the vertex orchestrator
Cloud Build API # For the container registry
Artifact Registry API # For the container registry
Cloud Logging API # Generally needed
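If you prefer the CLI over the console, the sketch below enables the services listed above with gcloud; the service names are the standard IDs these APIs map to:

```shell
gcloud services enable \
    cloudfunctions.googleapis.com \
    run.googleapis.com \
    cloudbuild.googleapis.com \
    artifactregistry.googleapis.com \
    logging.googleapis.com
```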
3) Create a dedicated service account
The service account should have the following roles.
AI Platform Service Agent
Storage Object Admin
These roles grant full CRUD permissions on storage objects and full compute permissions within Vertex AI.
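One way to create such a service account from the CLI, reusing the placeholder project ID from step 1. The role IDs below are an assumed mapping of the role names above (`roles/aiplatform.serviceAgent` for the Vertex AI service agent and `roles/storage.objectAdmin` for Storage Object Admin); double-check them in the IAM console:

```shell
# Create the service account
gcloud iam service-accounts create zenml-sa \
    --display-name="ZenML service account"

# Full compute permissions within Vertex AI (assumed role ID)
gcloud projects add-iam-policy-binding zenml-gcp-demo \
    --member="serviceAccount:zenml-sa@zenml-gcp-demo.iam.gserviceaccount.com" \
    --role="roles/aiplatform.serviceAgent"

# Full CRUD on storage objects
gcloud projects add-iam-policy-binding zenml-gcp-demo \
    --member="serviceAccount:zenml-sa@zenml-gcp-demo.iam.gserviceaccount.com" \
    --role="roles/storage.objectAdmin"
```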
4) Create a JSON Key for your service account
This JSON key file will allow ZenML to authenticate as the service account. You will need the filepath of the downloaded file in the next step.
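From the CLI, creating and downloading such a key could look like this, using the placeholder service account from the previous step:

```shell
# Creates a new key and writes it to zenml-sa-key.json in the current directory
gcloud iam service-accounts keys create zenml-sa-key.json \
    --iam-account=zenml-sa@zenml-gcp-demo.iam.gserviceaccount.com
```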
5) Create a Service Connector within ZenML
The service connector will allow ZenML and its stack components to authenticate with GCP.
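A sketch of registering the connector with the ZenML CLI, assuming the GCP integration is installed and using the placeholder key file and project ID from the previous steps:

```shell
# Install the GCP integration if you haven't already
zenml integration install gcp -y

# Register a GCP service connector that authenticates with the JSON key
zenml service-connector register gcp_connector \
    --type gcp \
    --auth-method service-account \
    --service_account_json=@zenml-sa-key.json \
    --project_id=zenml-gcp-demo
```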
6) Create Stack Components
Artifact Store
Before you run anything within the ZenML CLI, head on over to GCP and create a GCS bucket if you don't already have one that you can use. Once this is done, you can create the ZenML stack component as follows:
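(The bucket name and region below are placeholders, and the connector is the one registered in the previous step.)

```shell
# Create a GCS bucket if you don't already have one
gsutil mb -l europe-west2 gs://zenml-demo-bucket

# Register the artifact store and connect it to GCP through the service connector
zenml artifact-store register gcp_artifact_store \
    --flavor=gcp \
    --path=gs://zenml-demo-bucket
zenml artifact-store connect gcp_artifact_store --connector gcp_connector
```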
Head on over to our docs to learn more about artifact stores and how to configure them.
Orchestrator
This guide will use Vertex AI as the orchestrator to run the pipelines. As a serverless service, Vertex AI is a great choice for quickly prototyping your MLOps stack. The orchestrator can be swapped out at any point in the future for a solution that better fits your use case and budget.
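A sketch of registering a Vertex orchestrator; the project ID and location are placeholders you should adjust to your own setup:

```shell
# Register the Vertex AI orchestrator and connect it through the service connector
zenml orchestrator register vertex_orchestrator \
    --flavor=vertex \
    --project=zenml-gcp-demo \
    --location=europe-west2
zenml orchestrator connect vertex_orchestrator --connector gcp_connector
```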
Head on over to our docs to learn more about orchestrators and how to configure them.
Container Registry
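A sketch of setting up a Google Artifact Registry repository and registering it as the container registry; the repository name, region, and URI are placeholders:

```shell
# Create a Docker repository in Artifact Registry (skip if you already have one)
gcloud artifacts repositories create zenml-repo \
    --repository-format=docker \
    --location=europe-west2

# Register it as the ZenML container registry and connect it via the connector
zenml container-registry register gcp_container_registry \
    --flavor=gcp \
    --uri=europe-west2-docker.pkg.dev/zenml-gcp-demo/zenml-repo
zenml container-registry connect gcp_container_registry --connector gcp_connector
```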
Head on over to our docs to learn more about container registries and how to configure them.
7) Create Stack
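Putting the pieces together, a sketch of registering and activating the stack with the placeholder component names used above:

```shell
zenml stack register gcp_stack \
    -o vertex_orchestrator \
    -a gcp_artifact_store \
    -c gcp_container_registry \
    --set
```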
If you want to add any other stack components to this stack, feel free to do so.
And you're already done!
Just like that, you now have a fully working GCP stack ready to go. Feel free to take it for a spin by running a pipeline on it.
Cleanup
If you do not want to use any of the created resources in the future, simply delete the project you created.
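For example, with the placeholder project ID used throughout this guide:

```shell
# Deleting the project removes every resource created inside it
gcloud projects delete zenml-gcp-demo
```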