Learning how to switch the infrastructure backend of your code.
Understanding stacks
Now that we have ZenML deployed, we can take the next steps in making sure that our machine learning workflows are production-ready. As you were running your first pipelines, you might have already noticed the term stack in the logs and on the dashboard.
A stack is the configuration of tools and infrastructure that your pipelines can run on. When you run ZenML code without configuring a stack, the pipeline will run on the so-called default stack.
Separation of code from configuration and infrastructure
As visualized in the diagram above, there are two separate domains that are connected through ZenML. The left side shows the code domain: the user's Python code is translated into a ZenML pipeline. On the right side, you can see the infrastructure domain, in this case an instance of the default stack. By separating these two domains, it is easy to switch the environment that the pipeline runs on without making any changes to the code. It also allows domain experts to write code or configure infrastructure without worrying about the other domain.
The default stack
You can explore all your stacks in the dashboard. When you click on a specific one you can see its configuration and all the pipeline runs that were executed using this stack.
`zenml stack describe` lets you find out the details of your active stack:
As you can see, a stack can be active on your client. This simply means that any pipeline you run will use the active stack as its environment.
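For example, you can inspect and switch the active stack from the CLI (the stack name passed to `zenml stack set` must be one that exists on your client; `default` is used here as an example):

```shell
# List all registered stacks; the active one is marked
zenml stack list

# Show the configuration of the active stack
zenml stack describe

# Make a different stack active
zenml stack set default
```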
Components of a stack
As you can see in the section above, a stack consists of multiple components. All stacks have at minimum an orchestrator and an artifact store.
Orchestrator
The orchestrator is responsible for executing the pipeline code. In the simplest case, this will be a simple Python thread on your machine. Let's explore this default orchestrator.
`zenml orchestrator list` lets you see all orchestrators that are registered in your ZenML deployment.
Artifact store

The artifact store is responsible for persisting the step outputs. As we learned in the previous section, step outputs are not passed along in memory; rather, the outputs of each step are stored in the artifact store and then loaded from there when the next step needs them. By default, this will also be on your own machine:
`zenml artifact-store list` lets you see all artifact stores that are registered in your ZenML deployment.
There are many more components that you can add to your stacks, like experiment trackers, model deployers, and more. You can see all supported stack component types in a single table view here.
Perhaps the most important stack component after the orchestrator and the artifact store is the container registry. A container registry stores all your containerized images, which hold all your code and the environment needed to execute them. We will learn more about them in the next section!
Registering a stack
Just to illustrate how to interact with stacks, let's create an alternate local stack. We start by first creating a local artifact store.
Let's understand the individual parts of this command:
* `artifact-store`: This describes the top-level group. To find other stack component groups, simply run `zenml --help`.
* `register`: Here we want to register a new component. Instead, we could also `update`, `delete`, and more; `zenml artifact-store --help` will give you all possibilities.
* `my_artifact_store`: This is the unique name that the stack component will have.
* `--flavor=local`: A flavor is a possible implementation of a stack component. In the case of an artifact store, this could be an S3 bucket or a local filesystem. You can find out all possibilities with `zenml artifact-store flavor list`.
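Putting the pieces together, the full command reads:

```shell
# Register a new local artifact store named my_artifact_store
zenml artifact-store register my_artifact_store --flavor=local
```

Once registered, the component can be combined with an orchestrator into a new stack via `zenml stack register` and activated with `zenml stack set`.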
This will be the output that you can expect from the command above.
Let's use the pipeline in our starter project from the previous guide to see it in action.
If you have not already, clone the starter template:
```shell
pip install "zenml[templates,server]" notebook
zenml integration install sklearn -y
mkdir zenml_starter
cd zenml_starter
zenml init --template starter --template-with-defaults
# Just in case, we install the requirements again
pip install -r requirements.txt
```
The above doesn't work? Here is an alternative.
The starter template is the same as the ZenML quickstart. You can clone it like so: