Deploying ZenML

Deploying ZenML is the first step to production.

When you first get started with ZenML, everything runs locally on your machine with the following architecture:

Scenario 1: ZenML default local configuration

The SQLite database that you can see in this diagram is used to store all the metadata we produced in the previous guide (pipelines, models, artifacts, etc.).
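
If you want to see what your client is currently using, the sketch below uses the zenml status command to print the active configuration, which with the default setup points at the local metadata store on your machine. The exact output varies between ZenML versions.

# Print the client's current configuration (local metadata store, active stack, etc.)
zenml status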

In order to move into production, you will need to deploy this server somewhere central, outside of your machine. This allows different infrastructure components to interact with the server and enables you to collaborate with your team members:

Scenario 3: Deployed ZenML Server

Choosing how to deploy ZenML

While there are many ways to deploy ZenML, the two simplest ones are:

Option 1: Sign up for a free ZenML Pro Trial

ZenML Pro is a managed SaaS solution that offers one-click deployment of your ZenML server.

If you already have the ZenML Python client installed, you can connect to a trial ZenML Pro instance by simply running:

zenml login --pro

Alternatively, click here to start a free trial.

On top of the one-click SaaS experience, ZenML Pro also includes additional features and a new dashboard that can be useful while following this guide. You can always go back to self-hosting after your learning journey is complete.

Option 2: Self-host ZenML on your cloud provider

As ZenML is open source, it is easy to self-host in a Kubernetes cluster. If you don't have an existing Kubernetes cluster, you can create one by following the documentation for your cloud provider. For convenience, here are links for AWS, Azure, and GCP.
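
As a rough sketch of what a self-hosted deployment can look like, the commands below install the ZenML server Helm chart into a cluster. The chart location, version placeholder, and values file are shown for illustration only; confirm the current chart reference and the configuration values your setup needs in the deployment documentation.

# Pull the ZenML server Helm chart (chart location is an assumption; check the deployment docs)
helm pull oci://public.ecr.aws/zenml/zenml --version <VERSION> --untar
# Install it into its own namespace with your own configuration values
helm install zenml-server ./zenml --namespace zenml --create-namespace --values custom-values.yaml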

To learn more about different options for deploying ZenML, visit the deployment documentation.

Connecting to a deployed ZenML

You can connect your local ZenML client with the ZenML Server using the ZenML CLI and the web-based login by running:

zenml login <server-url>

Having trouble connecting with a browser? There are other ways to connect. Read here for more details.
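
For example, headless environments such as CI runners or remote VMs can authenticate without a browser by using a service account API key instead of the web login. The commands below are a sketch and assume your server version supports service accounts; see the linked page for the exact flow.

# Create a service account on the server and note the API key it returns (run from an already-connected client)
zenml service-account create ci-runner
# On the headless machine, log in with that API key instead of the browser flow
zenml login <server-url> --api-key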

This command starts a browser-based flow to validate the device you are connecting from. Once it completes, your local client is connected to the remote ZenML server. Nothing about your experience changes, except that all the metadata you produce will now be tracked centrally in one place.
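
To confirm that your client is now talking to the remote server, you can list the resources it sees. These are standard ZenML CLI commands, though the exact output depends on your version and on what you have already run.

# List stacks and pipelines, now read from the central server instead of the local database
zenml stack list
zenml pipeline list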

You can always go back to the local ZenML experience by running zenml logout.

Further resources

To learn more about deploying ZenML, check out the following resources:

  • Deploying ZenML: an overview of the different options for deploying ZenML and the system architecture of a deployed ZenML instance.

  • Full how-to guides: guides on how to deploy ZenML on Docker, Hugging Face Spaces, Kubernetes, or another cloud provider.
