Manage artifacts
Understand and adjust how ZenML versions your data.
Data sits at the heart of every machine learning workflow. Managing and versioning this data correctly is essential for reproducibility and traceability within your ML pipelines. ZenML takes a proactive approach to data versioning, ensuring that every artifact—be it data, models, or evaluations—is automatically tracked and versioned upon pipeline execution.
This guide will delve into artifact versioning and management, showing you how to efficiently name, organize, and utilize your data with the ZenML framework.
Artifacts, the outputs of your steps and pipelines, are automatically versioned and stored in the artifact store. Configuring these artifacts is pivotal for transparent and efficient pipeline development.
Assigning custom names to your artifacts can greatly enhance their discoverability and manageability. As a best practice, use the Annotated object within your steps to give precise, human-readable names to outputs:
Artifacts named iris_dataset can then be found swiftly using various ZenML interfaces:
To list artifacts: zenml artifact list
ZenML automatically versions all created artifacts using auto-incremented numbering. That is, if you have defined a step creating an artifact named iris_dataset as shown above, the first execution of the step will create an artifact with this name and version "1", the second execution will create version "2", and so on.
If you instead assign a custom version name such as raw_2023, the next execution of this step will create an artifact with the name iris_dataset and version raw_2023. This is primarily useful if you are making a particularly important pipeline run (such as a release) whose artifacts you want to distinguish at a glance later.
After execution, iris_dataset and its version raw_2023 can be seen using:
To list versions: zenml artifact version list
If you would like to extend your artifacts and runs with extra metadata or tags, you can do so by following the patterns demonstrated below:
The tool offers two complementary views for analyzing your metadata:
The tabular view provides a structured comparison of metadata across runs:
This view automatically calculates changes between runs and allows you to:
Sort and filter metadata values
Track changes over time
Compare up to 20 runs simultaneously
The parallel coordinates visualization helps identify relationships between different metadata parameters:
This view is particularly useful for:
Discovering correlations between different metrics
Identifying patterns across pipeline runs
Filtering and focusing on specific parameter ranges
To compare metadata across runs:
Navigate to any pipeline in your dashboard
Click the "Compare" button in the top navigation
Select the runs you want to compare
Switch between table and parallel coordinates views using the tabs
The tool preserves your comparison configuration in the URL, making it easy to share specific views with team members. Simply copy and share the URL to allow others to see the same comparison with identical settings and filters.
This feature is currently in Alpha Preview. We encourage you to share feedback about your use cases and requirements through our Slack community.
Assigning a type to an artifact allows ZenML to highlight it differently in the dashboard and also lets you filter your artifacts more effectively.
While most pipelines start with a step that produces an artifact, you will often want to consume artifacts created outside of the pipeline. The ExternalArtifact class can be used to initialize an artifact within ZenML with any arbitrary data type.
For example, let's say we have a Snowflake query that produces a dataframe, or a CSV file that we need to read. External artifacts can be used for this, to pass values to steps that are neither JSON serializable nor produced by an upstream step:
If you would like to bypass materialization entirely and just download the data or files associated with a particular artifact version, you can use the .download_files method:
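A hedged sketch, assuming an artifact named iris_dataset already exists in your ZenML deployment:

```python
from zenml.client import Client

# Assumed name: an artifact version called "iris_dataset" already exists.
artifact_version = Client().get_artifact_version("iris_dataset")

try:
    # The target path must end in .zip; the files are bundled into an archive.
    artifact_version.download_files("./iris_dataset.zip")
except Exception as e:
    print(f"Could not download artifact files: {e}")
```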
Take note that the path must have the .zip extension, as the artifact data will be saved as a zip file. Make sure to handle any exceptions that may arise from this operation.
Sometimes, artifacts can be produced completely outside of ZenML. A good example of this is the predictions produced by a deployed model.
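One way to bring such external data under ZenML's management is the top-level save_artifact helper; the predictions list and artifact name below are illustrative assumptions:

```python
from zenml import save_artifact

# Hypothetical predictions coming from a model deployed outside of ZenML.
predictions = [0, 1, 1, 0]

# Stores the data in the active artifact store and registers it as an
# artifact version named "external_predictions".
save_artifact(predictions, name="external_predictions")
```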
You can also load any artifact stored within ZenML using the load_artifact method:
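A minimal sketch, assuming an artifact named iris_dataset exists in your deployment:

```python
from zenml import load_artifact

# Assumed name; adjust to an artifact that exists in your deployment.
dataset = load_artifact("iris_dataset")
```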
Even if an artifact is created externally, it can be treated like any other artifact produced by ZenML steps - with all the functionalities described above!
Sometimes, data is produced completely outside of ZenML and is most conveniently written straight to a given storage location. A good example of this is the checkpoint files created as a side effect of deep learning model training. Intermediate data from deep learning frameworks can be quite large, and there is no good reason to move it around repeatedly if it can be produced directly within the artifact store boundaries and later linked as a ZenML artifact. Let's explore a PyTorch Lightning example that fits a model and stores the checkpoints in a remote location.
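A heavily hedged sketch of this pattern, assuming ZenML's register_artifact helper and a PyTorch Lightning setup where checkpoints are written inside the active artifact store's path (the model and data definitions are omitted):

```python
import os

from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import ModelCheckpoint
from zenml import register_artifact, step
from zenml.client import Client


@step
def train_model() -> None:
    # Assumption: writing checkpoints directly into the artifact store
    # avoids moving large intermediate files around afterwards.
    prefix = os.path.join(
        Client().active_stack.artifact_store.path, "checkpoints"
    )
    trainer = Trainer(
        default_root_dir=prefix,
        callbacks=[ModelCheckpoint(dirpath=prefix, save_top_k=1)],
    )
    # trainer.fit(model, datamodule=...)  # model definition omitted here

    # Link the pre-existing folder as a ZenML artifact without re-uploading.
    register_artifact(
        folder_or_file_uri=prefix, name="lightning_checkpoints"
    )
```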
Even if an artifact is created and stored externally, it can be treated like any other artifact produced by ZenML steps - with all the functionalities described above!
As an example, one can associate the results of a model training alongside a model artifact, the shape of a table alongside a pandas dataframe, or the size of an image alongside a PNG file.
For some artifacts, ZenML automatically logs metadata. As an example, for pandas.Series and pandas.DataFrame objects, ZenML logs the shape and size of the objects.
A user can also add metadata to an artifact within a step directly using the log_artifact_metadata method:
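A sketch of this pattern (the step name, artifact name, and metadata keys are illustrative; log_artifact_metadata only takes effect inside a running step):

```python
from typing import Annotated

import pandas as pd
from zenml import log_artifact_metadata, step


@step
def process_data() -> Annotated[pd.DataFrame, "processed_data"]:
    df = pd.DataFrame({"feature": [1.0, 2.0, 3.0]})
    # Attach custom metadata to the named output artifact of this step.
    log_artifact_metadata(
        artifact_name="processed_data",
        metadata={"row_count": len(df), "source": "illustrative example"},
    )
    return df
```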
This section combines all the code above into one simple script that you can use easily:
The dashboard offers advanced visualization features for artifact exploration.
While ZenML handles artifact versioning automatically, you have the option to specify custom versions using the ArtifactConfig class. This may come into play during critical runs like production releases.
Since custom versions cannot be duplicated, the above step can only be run once successfully. To avoid altering your code frequently, consider using a YAML config for artifact versioning.
There are multiple ways to interact with tags and metadata in ZenML. If you would like to learn how to use this information in different scenarios, please check the respective guides on tags and metadata.
The dashboard includes an Experiment Comparison tool that allows you to visualize and analyze metadata across different pipeline runs. This feature helps you understand patterns and changes in your pipeline's behavior over time.
Optionally, you can configure the ExternalArtifact to use a custom materializer for your data or disable artifact metadata and visualizations. Check out the SDK docs for all available options.
It is also common to consume an artifact downstream after producing it in an upstream pipeline or step. As we have learned in the previous section, the Client can be used to fetch artifacts directly inside the pipeline code:
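A sketch, assuming an artifact named iris_dataset was produced by an earlier pipeline (the trainer step body is omitted):

```python
import pandas as pd
from zenml import pipeline, step
from zenml.client import Client


@step
def trainer(dataset: pd.DataFrame) -> None:
    """Train on a dataset produced by a different pipeline."""
    ...


@pipeline
def training_pipeline():
    # Fetch the latest version of an artifact produced elsewhere.
    dataset = Client().get_artifact_version("iris_dataset")
    trainer(dataset=dataset)
```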
Calling Client methods like get_artifact_version directly inside the pipeline code makes use of ZenML's late materialization behind the scenes.
For more details and use cases, check out the detailed docs page.
One of the most useful ways of interacting with artifacts in ZenML is the ability to associate metadata with them. As mentioned above, artifact metadata is an arbitrary dictionary of key-value pairs that are useful for understanding the nature of the data.
The dashboard offers advanced visualization features for artifact exploration, including a dedicated artifacts tab with metadata visualization:
For further depth, there is an advanced metadata logging guide that goes into more detail about logging metadata in ZenML.
Additionally, there is a lot more to learn about artifacts within ZenML. Please read the dedicated data management guide for more information.