Deploy a stack component

Individually deploying different stack components.

If you have used ZenML before, you are probably familiar with the flow of registering new stack components. It goes something like this:

zenml artifact-store register my_store --flavor=s3 --path=s3://my_bucket

Commands like these assume that you already have the stack component deployed. In this case, it would mean that you must already have a bucket called my_bucket on AWS S3 to be able to use this component.

We took inspiration from this design to build something that feels natural to use and is also powerful enough to handle the deployment of the respective stack components for you. This is where the <STACK_COMPONENT> deploy CLI comes in!

The deploy command allows you to deploy individual components of your MLOps stack with a single command 🚀. You can also customize your components easily by passing in flags (more on that later).

For example, to deploy an MLflow tracking server on a GCP account, you can run:

zenml experiment-tracker deploy my_tracker --flavor=mlflow --cloud=gcp --project_id="zenml"

The command above takes in the following parameters:

  • Name: The name of the stack component. In this case, it is my_tracker. If you don't provide a name, a random one is generated for you.

  • Flavor: What flavor of the stack component to deploy. Here, we are deploying an MLflow experiment tracker.

  • Cloud: What cloud to deploy this stack component on. Currently, only GCP, AWS, and k3d are supported as providers.

  • Additional Configuration: Some components can be customized by the user, and these settings are passed as flags to the command. In the example above, we pass the GCP project ID to select which project to deploy the component to; other clouds take different flags, as shown below.
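
For instance, a similar deployment targeted at AWS might look like the following (a hypothetical variant; the exact flags available depend on the flavor and cloud):

zenml experiment-tracker deploy my_tracker --flavor=mlflow --cloud=aws --region=us-east-1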

A successful execution of this command does the following:

  • Asks for your confirmation on the resources that will be deployed.

  • Once you agree, it starts the deployment process and gives you a list of outputs at the end pertaining to your deployed stack component.

  • It also automatically registers the deployed stack component with your ZenML server, so you don't have to worry about manually configuring components after the deployment! 🤩

The command currently uses your local GCP and AWS credentials to provision resources. Integration with ZenML connectors might be possible soon, too!
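
In practice, this means your cloud CLI should already be authenticated before you run deploy. For example (standard gcloud and aws commands, shown here as an assumption about your local setup):

# GCP: set up application-default credentials
gcloud auth application-default login

# AWS: configure credentials and a default region
aws configure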

Want to know what happens in the background?

Under the hood, the stack component deploy CLI is powered by ZenML's Stack Recipes, more specifically the new modular recipes. These allow you to configure and deploy select stack components, as opposed to deploying the full stack as with the legacy stack recipes.

Using the value you pass for the cloud, the CLI picks the right modular recipe to use (one of AWS, GCP, or k3d) and then deploys that recipe with the specific stack component enabled.

The recipe files live in the deployed_stack_components directory inside the Global Config directory.
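
For example, on Linux the Global Config directory is typically ~/.config/zenml, so you could inspect the generated recipe files with something like this (the exact path is an assumption and varies by OS):

# list the recipe files created by the deploy CLI (Linux path shown)
ls ~/.config/zenml/deployed_stack_components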

🍨 Available flavors for stack components

Here's a table of all the flavors that can be deployed through the CLI for every stack component. This list will keep growing, and you can also contribute any flavor or stack component that you feel is missing. Refer to the Contribution page for steps on how to do that 😄

How does flavor selection work in the background?

Whenever you pass a flavor to any stack component's deploy command, the combination of the component type and the flavor is used to construct a variable name in the following format:

enable_<STACK_COMPONENT>_<FLAVOR>

This variable is then passed as input to the underlying modular recipe. If you check the variables.tf file for a given recipe, you can find all the supported flavor-stack component combinations there.
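
For example, deploying an MLflow experiment tracker enables a variable named enable_experiment_tracker_mlflow in the chosen recipe. Conceptually, this amounts to something like the following (a simplified sketch, not the literal command the CLI runs):

# conceptual: the CLI applies the modular recipe with the matching enable flag set
terraform apply -var="enable_experiment_tracker_mlflow=true"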

| Component Type | Flavor |
| --- | --- |
| Experiment Tracker | mlflow |
| Model Deployer | seldon, kserve |
| Artifact Store | s3, gcs, minio |
| Orchestrator | kubernetes, kubeflow, tekton, sagemaker, vertex |
| Step Operator | sagemaker, vertex |
| Container Registry | gcr, ecr, k3d-registry |
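
Following the same pattern, any flavor above can be deployed with the corresponding component's deploy command. For example, a Seldon model deployer on GCP might look like this (a hypothetical example; the component name is arbitrary):

zenml model-deployer deploy seldon_deployer --flavor=seldon --cloud=gcp --project_id="zenml"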

✨ Customizing your stack components

While keeping things simple, we didn't want to compromise on the flexibility that this deployment method allows. As such, we have added the option to pass configuration specific to the stack components as key-value arguments to the deploy CLI. Here is an assortment of the configurations that can be set.

How do configuration flags work?

The flags that you pass to the deploy CLI are passed on as-is to the backing modular recipes as input variables. This means that all the flags need to be defined as variables in the respective recipe.

For example, if you take a look at the variables.tf file for a modular recipe, like the gcp-modular recipe, you can find variables like mlflow_bucket that correspond to the --mlflow_bucket flag that can be passed to the experiment tracker's deploy CLI.

Validation for these flags does not exist yet at the CLI level, so you must be careful to name them correctly when calling deploy.
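
In other words, a flag like --mlflow_bucket is passed through to the recipe as an input variable of the same name, roughly as follows (a conceptual sketch, not the literal invocation):

# conceptual: the flag value becomes a Terraform input variable of the same name
terraform apply -var="mlflow_bucket=gs://my_bucket"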

Experiment Trackers

You can assign an existing bucket to the MLflow experiment tracker by using the --mlflow_bucket flag:

zenml experiment-tracker deploy mlflow_tracker --flavor=mlflow --mlflow_bucket=gs://my_bucket

Artifact Stores

For an artifact store, you can pass the bucket name using the --bucket_name flag:

zenml artifact-store deploy s3_artifact_store --flavor=s3 --bucket_name=my_bucket

Container Registries

For container registries, you can pass the repository name using the --repo_name flag:

zenml container-registry deploy aws_registry --flavor=aws --repo_name=my_repo

This is only useful for the AWS case, since AWS requires a repository to be created before you can push images to it, and the deploy command ensures that a repository with the name you provide is created. In the case of GCP and other providers, you can choose the repository name at the same time as you push the image via code. This is achieved by setting the target_repo attribute of the DockerSettings object.

Other configuration

  • You can also pass a region to deploy your resources to in the case of AWS and GCP recipes. For example, to deploy an S3 artifact store in the us-west-2 region, you can run:

zenml artifact-store deploy s3_artifact_store --flavor=s3 --region=us-west-2

The default region is eu-west-1 for AWS and europe-west1 for GCP.

Changing regions is not recommended as it can lead to unexpected results for components that share infrastructure like Kubernetes clusters. If you must do so, please destroy all the stack components from the older region by running the destroy command and then redeploy using the deploy command.
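
For example, to move an S3 artifact store to a different region, you could run something like the following (an illustrative sequence):

zenml artifact-store destroy s3_artifact_store
zenml artifact-store deploy s3_artifact_store --flavor=s3 --region=us-east-1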

  • In the case of GCP components, you must pass a project ID to the command the first time you create any GCP resource. The command will remember the project ID for subsequent calls (see the example below).
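
For example, a first GCP deployment passes the project ID explicitly, while later calls can omit it (an illustrative sequence; the component names are hypothetical):

zenml artifact-store deploy gcs_store --flavor=gcs --cloud=gcp --project_id="zenml"

# subsequent calls can omit --project_id
zenml container-registry deploy gcr_registry --flavor=gcr --cloud=gcp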

🧹 Destroying deployed stack components

You can destroy a stack component using the destroy subcommand. For example, to destroy an S3 artifact store you had previously created, you could run:

zenml artifact-store destroy s3_artifact_store

How does ZenML know where my component is deployed?

When you create a component using the deploy CLI, ZenML attaches some labels to your component, most notably a cloud label that tells it what cloud your component is deployed on.

This, in turn, helps ZenML figure out which modular recipe to use to destroy your deployed component.

You can check the labels attached to your stack components by running:

zenml <STACK_COMPONENT> describe <NAME>
