1-click Deployment

Deploy a cloud stack from scratch with a single click

In ZenML, the stack is a fundamental concept that represents the configuration of your infrastructure. In a normal workflow, creating a stack requires you to first deploy the necessary pieces of infrastructure and then define them as stack components in ZenML with proper authentication.

Especially in a remote setting, this process can be challenging, time-consuming, and error-prone. This is why we implemented a feature that allows you to deploy the necessary pieces of infrastructure on your selected cloud provider and get started with a remote stack with a single click.

If you prefer to have more control over where and how resources are provisioned in your cloud, you can use one of our Terraform modules to manage your infrastructure as code yourself.

If you have the required infrastructure pieces already deployed on your cloud, you can also use the stack wizard to seamlessly register your stack.

How to use the 1-click deployment tool?

The first thing that you need in order to use this feature is a deployed instance of ZenML (not a local server via zenml login --local). If you do not already have one set up, you can learn how to do so here.

Once you are connected to your deployed ZenML instance, you can use the 1-click deployment tool either through the dashboard or the CLI:
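For example, connecting the CLI to your deployed server before using the deployment tool might look like this (a minimal sketch; the server URL is a placeholder):

```bash
# Connect the ZenML CLI to your deployed server (placeholder URL); this
# starts a browser-based login flow for the remote instance.
zenml login https://zenml.example.com

# Confirm that the client is now talking to the remote server rather than
# a local one before launching the 1-click deployment.
zenml status
```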

Dashboard

To create a remote stack from the dashboard, go to the stacks page and click "+ New Stack".

Since we will be deploying it from scratch, select "New Infrastructure" on the next page:

AWS

If you choose aws as your provider, you will see a page where you will have to select a region and a name for your new stack:

Once the configuration is finished, you will see a deployment page:

Clicking on the "Deploy in AWS" button will redirect you to a Cloud Formation page on AWS Console.

You will have to log in to your AWS account, review and confirm the pre-filled configuration, and create the stack.

GCP

If you choose gcp as your provider, you will see a page where you will have to select a region and a name for your new stack:

Once the configuration is finished, you will see a deployment page:

Make a note of the configuration values provided to you in the ZenML dashboard. You will need these in the next step.

Clicking on the "Deploy in GCP" button will redirect you to a Cloud Shell session on GCP.

After the Cloud Shell session starts, you will be guided through the process of authenticating with GCP, configuring your deployment, and finally provisioning the resources for your new GCP stack using Deployment Manager.

First, you will be asked to create or choose an existing GCP project with billing enabled and to configure your terminal with the selected project:
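In practice, this step usually amounts to a couple of gcloud commands in the Cloud Shell terminal (the project ID below is a placeholder):

```bash
# Point the Cloud Shell gcloud CLI at the billing-enabled project that the
# stack should be deployed into (replace with your own project ID).
gcloud config set project my-zenml-project-123456

# Double-check the active project before continuing with the tutorial.
gcloud config get-value project
```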

Next, you will be asked to configure your deployment by pasting the configuration values that were provided to you earlier in the ZenML dashboard. You may need to switch back to the ZenML dashboard to copy these values if you did not do so earlier:

You can take this opportunity to review the script that will be executed at the next step. You will notice that this script starts by enabling some necessary GCP service APIs and configuring some basic permissions for the service accounts involved in the stack deployment, and then deploys the stack using a GCP Deployment Manager template. You can proceed with the deployment by running the script in your terminal:
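As a rough illustration of what the script does before launching the Deployment Manager template, enabling the service APIs looks something like the following; the exact list of APIs is defined by the script itself:

```bash
# Illustrative only: the deployment script enables GCP service APIs along
# these lines before deploying the stack resources.
gcloud services enable \
    deploymentmanager.googleapis.com \
    storage.googleapis.com \
    artifactregistry.googleapis.com \
    cloudbuild.googleapis.com \
    aiplatform.googleapis.com
```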

The script will deploy a GCP Deployment Manager template that provisions the necessary resources for your new GCP stack and automatically registers the stack with your ZenML server. You can monitor the progress of the deployment in your GCP console:

Once the deployment is complete, you may close the Cloud Shell session and return to the ZenML dashboard to view the newly created stack:

Azure

If you choose azure as your provider, you will see a page where you will have to select a location and a name for your new stack:

You will also find a list of resources that will be deployed as part of the stack:

Once the configuration is finished, you will see a deployment page. Make a note of the values in the main.tf file that is provided to you.

Clicking on the "Deploy in Azure" button will redirect you to a Cloud Shell session on Azure.

You should now paste the content of the main.tf file into a file in the Cloud Shell session and run the terraform init --upgrade and terraform apply commands.

The main.tf file uses the zenml-io/zenml-stack/azure module hosted on the Terraform Registry to deploy the necessary resources for your Azure stack and then automatically registers the stack with your ZenML server. You can read more about the module and its configuration options in the module's documentation.
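Assuming you saved the generated configuration as main.tf in your Cloud Shell home directory, the workflow is roughly the following (the file contents themselves come from the ZenML dashboard and are not shown here):

```bash
# Create main.tf in the Cloud Shell session and paste in the configuration
# provided by the ZenML dashboard, then initialize and apply it.
nano main.tf                  # paste the provided Terraform configuration and save

terraform init --upgrade      # downloads the zenml-io/zenml-stack/azure module
terraform apply               # review the plan, then type 'yes' to provision the stack
```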

Once the Terraform deployment is complete, you may close the Cloud Shell session and return to the ZenML Dashboard to view the newly created stack:

CLI

To create a remote stack from the CLI, you can use the following command:

zenml stack deploy -p {aws|gcp|azure}

AWS

If you choose aws as your provider, the command will walk you through deploying a CloudFormation stack on AWS. It will start by showing some information about the stack that will be created:

Upon confirmation, the command will redirect you to a CloudFormation page in the AWS Console where you will have to deploy the stack:

You will have to log in to your AWS account, have permission to deploy an AWS CloudFormation stack, review and confirm the pre-filled configuration, and create the stack.

The CloudFormation stack will provision the necessary resources for your new AWS stack and automatically register the stack with your ZenML server. You can monitor the progress of the stack in your AWS console:
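If you prefer the terminal, you can also poll the provisioning status with the AWS CLI (the stack name below is a placeholder; use the name shown on the CloudFormation page):

```bash
# Check the provisioning status of the CloudFormation stack from the AWS CLI.
aws cloudformation describe-stacks \
    --stack-name zenml-stack-example \
    --query 'Stacks[0].StackStatus' \
    --output text
```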

Once the provisioning is complete, you may close the AWS CloudFormation page and return to the ZenML CLI to view the newly created stack:
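Back in the terminal, you can confirm that the new stack was registered with your ZenML server, for example (the stack name is a placeholder for the one you chose during deployment):

```bash
# List all stacks known to your ZenML server and inspect the newly
# registered one.
zenml stack list
zenml stack describe zenml-stack-example
```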

GCP

If you choose gcp as your provider, the command will walk you through deploying a Deployment Manager template on GCP. It will start by showing some information about the stack that will be created:

Upon confirmation, the command will redirect you to a Cloud Shell session on GCP.

The Cloud Shell session will warn you that the ZenML GitHub repository is untrusted. We recommend that you review the contents of the repository and then check the Trust repo checkbox to proceed with the deployment; otherwise, the Cloud Shell session will not be authenticated to access your GCP projects. You will also get a chance to review the scripts that will be executed in the Cloud Shell session before proceeding.

After the Cloud Shell session starts, you will be guided through the process of authenticating with GCP, configuring your deployment, and finally provisioning the resources for your new GCP stack using Deployment Manager.

First, you will be asked to create or choose an existing GCP project with billing enabled and to configure your terminal with the selected project:

Next, you will be asked to configure your deployment by pasting the configuration values that were provided to you in the ZenML CLI. You may need to switch back to the ZenML CLI to copy these values if you did not do so earlier:

You can take this opportunity to review the script that will be executed at the next step. You will notice that this script starts by enabling some necessary GCP service APIs and configuring some basic permissions for the service accounts involved in the stack deployment, and then deploys the stack using a GCP Deployment Manager template. You can proceed with the deployment by running the script in your terminal:

The script will deploy a GCP Deployment Manager template that provisions the necessary resources for your new GCP stack and automatically registers the stack with your ZenML server. You can monitor the progress of the deployment in your GCP console:
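You can also follow the deployment from the Cloud Shell terminal using the Deployment Manager CLI (the deployment name below is a placeholder; the script prints the actual name it uses):

```bash
# List Deployment Manager deployments in the selected project and inspect
# the one created by the 1-click deployment script.
gcloud deployment-manager deployments list
gcloud deployment-manager deployments describe my-zenml-deployment
```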

Once the deployment is complete, you may close the Cloud Shell session and return to the ZenML CLI to view the newly created stack:

Azure

If you choose azure as your provider, the command will walk you through deploying the ZenML Azure Stack Terraform module. It will start by showing some information about the stack that will be created:

Upon confirmation, the command will redirect you to a Cloud Shell session on Azure.

After the Cloud Shell session starts, you will have to use Terraform to deploy the stack, as instructed by the CLI.

First, you will have to open a file named main.tf in the Cloud Shell session using the editor of your choice (e.g. vim, nano) and paste in the Terraform configuration provided by the CLI. You may need to switch back to the ZenML CLI to copy these values if you did not do so earlier:

The Terraform file is a simple configuration that uses the ZenML Azure Stack Terraform module (zenml-io/zenml-stack/azure) to deploy the necessary resources for your Azure stack and then automatically register the stack with your ZenML server. You can read more about the module and its configuration options in the module's documentation.

You can proceed with the deployment by running the terraform init and terraform apply commands in your terminal:
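Once the apply finishes, the module prints a set of outputs summarizing what was provisioned; you can display them again at any time (the exact output names depend on the module version):

```bash
# Re-print the Terraform outputs of the applied configuration.
terraform output
```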

Once the Terraform deployment is complete, you may close the Cloud Shell session and return to the ZenML CLI to view the newly created stack:

What will be deployed?

Here is an overview of the infrastructure that the 1-click deployment will prepare for you based on your cloud provider:

AWS Resources

  • An S3 bucket that will be used as a ZenML Artifact Store.

  • An ECR container registry that will be used as a ZenML Container Registry.

  • An AWS CodeBuild project that will be used as a ZenML Image Builder.

  • Permissions to use SageMaker as a ZenML Orchestrator and Step Operator.

  • An IAM user and IAM role with the minimum necessary permissions to access the resources listed above.

  • An AWS access key used to give ZenML access to the above resources through a ZenML service connector.

AWS Permissions

The configured IAM user, IAM role, and AWS access key will grant ZenML the following permissions in your AWS account:

  • S3 Bucket:

    • s3:ListBucket

    • s3:GetObject

    • s3:PutObject

    • s3:DeleteObject

    • s3:GetBucketVersioning

    • s3:ListBucketVersions

    • s3:DeleteObjectVersion

  • ECR Repository:

    • ecr:DescribeRepositories

    • ecr:ListRepositories

    • ecr:DescribeRegistry

    • ecr:BatchGetImage

    • ecr:DescribeImages

    • ecr:BatchCheckLayerAvailability

    • ecr:GetDownloadUrlForLayer

    • ecr:InitiateLayerUpload

    • ecr:UploadLayerPart

    • ecr:CompleteLayerUpload

    • ecr:PutImage

    • ecr:GetAuthorizationToken

  • CodeBuild (Client):

    • codebuild:CreateProject

    • codebuild:BatchGetBuilds

  • CodeBuild (Service):

    • s3:GetObject

    • s3:GetObjectVersion

    • logs:CreateLogGroup

    • logs:CreateLogStream

    • logs:PutLogEvents

    • ecr:BatchGetImage

    • ecr:DescribeImages

    • ecr:BatchCheckLayerAvailability

    • ecr:GetDownloadUrlForLayer

    • ecr:InitiateLayerUpload

    • ecr:UploadLayerPart

    • ecr:CompleteLayerUpload

    • ecr:PutImage

    • ecr:GetAuthorizationToken

  • SageMaker (Client):

    • sagemaker:CreatePipeline

    • sagemaker:StartPipelineExecution

    • sagemaker:DescribePipeline

    • sagemaker:DescribePipelineExecution

  • SageMaker (Jobs):

    • AmazonSageMakerFullAccess

GCP Resources

  • A GCS bucket that will be used as a ZenML Artifact Store.

  • A GCP Artifact Registry that will be used as a ZenML Container Registry.

  • Permissions to use Vertex AI as a ZenML Orchestrator and Step Operator.

  • Permissions to use GCP Cloud Builder as a ZenML Image Builder.

  • A GCP Service Account with the minimum necessary permissions to access the resources listed above.

  • A GCP Service Account access key used to give ZenML access to the above resources through a ZenML service connector.

GCP Permissions

The configured GCP service account and its access key will grant ZenML the following GCP permissions in your GCP project:

  • GCS Bucket:

    • roles/storage.objectUser

  • GCP Artifact Registry:

    • roles/artifactregistry.createOnPushWriter

  • Vertex AI (Client):

    • roles/aiplatform.user

  • Vertex AI (Jobs):

    • roles/aiplatform.serviceAgent

  • Cloud Build (Client):

    • roles/cloudbuild.builds.editor

Azure Resources

  • An Azure Resource Group to contain all the resources required for the ZenML stack.

  • An Azure Storage Account and Blob Storage Container that will be used as a ZenML Artifact Store.

  • An Azure Container Registry that will be used as a ZenML Container Registry.

  • An AzureML Workspace that will be used as a ZenML Orchestrator and ZenML Step Operator. A Key Vault and Application Insights instance will also be created in the same Resource Group and used to construct the AzureML Workspace.

  • An Azure Service Principal with the minimum necessary permissions to access the above resources.

  • An Azure Service Principal client secret used to give ZenML access to the above resources through a ZenML service connector.

Azure Permissions

The configured Azure service principal and its client secret will grant ZenML the following permissions in your Azure subscription:

  • Permissions granted for the created Storage Account:

    • Storage Blob Data Contributor

  • Permissions granted for the created Container Registry:

    • AcrPull

    • AcrPush

    • Contributor

  • Permissions granted for the created AzureML Workspace:

    • AzureML Compute Operator

    • AzureML Data Scientist

There you have it! With a single click, you just deployed a cloud stack, and you can start running your pipelines in a remote setting.
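As a quick follow-up, assuming the stack was registered under the name you picked during deployment and that run.py is your own pipeline entrypoint, switching to the new stack and running a pipeline on it could look like this:

```bash
# Activate the newly registered cloud stack and run a ZenML pipeline on it.
zenml stack set zenml-stack-example
python run.py
```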
