Fetch runs after execution
Inspecting a finished pipeline run and its outputs.
Once a pipeline run has been completed, we can interact with it from code using the post-execution utilities. The hierarchy goes from pipelines to versions to runs to steps to outputs, with many layers of 1-to-N relationships. To get a specific output, you need to know exactly which step in which run of which specific pipeline version to use.
Let us investigate how to traverse this hierarchy level by level:
Pipelines
ZenML keeps a collection of all created pipelines. With get_pipeline() you can get a specific pipeline.
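A minimal sketch, assuming ZenML's post-execution API (pre-1.0) is installed and a pipeline named first_pipeline has already run; the name is illustrative:

```python
# Fetch a specific pipeline by name (the name is illustrative).
from zenml.post_execution import get_pipeline

pipeline_x = get_pipeline(pipeline="first_pipeline")
```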
List all pipelines
You can also access a list of all your pipelines through the CLI by executing the following command on the terminal:
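For example:

```shell
zenml pipeline list
```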
Or directly from code
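A minimal sketch using the post-execution utilities:

```python
# List all registered pipelines from code.
from zenml.post_execution import get_pipelines

pipelines = get_pipelines()
```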
Pipelines are sorted from oldest to newest, based on when each pipeline had its first run.
Get a pipeline
Instead of passing in the name of the pipeline, you can also use the pipeline instance directly: get_pipeline(pipeline=first_pipeline).
Versions
Each pipeline can have many versions. Let's print out the contents of the PipelineView:
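A sketch, assuming a pipeline named first_pipeline exists; the exact output depends on your ZenML version:

```python
# Print the fetched pipeline view to inspect its versions
# (the pipeline name is illustrative).
from zenml.post_execution import get_pipeline

pipeline_x = get_pipeline(pipeline="first_pipeline")
print(pipeline_x)
```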
This is how we'll access one specific version:
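A hedged sketch; the version argument is an assumption based on this page's hierarchy, so check the API of your ZenML version:

```python
# Assumption: get_pipeline accepts a version argument to select
# one specific pipeline version; names are illustrative.
from zenml.post_execution import get_pipeline

pipeline_version_2 = get_pipeline(pipeline="first_pipeline", version="2")
```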
Versions on a PipelineView are sorted from newest to oldest, with the most recent versions at the beginning of the list.
Runs
Getting runs from a fetched pipeline version
Each pipeline version can be executed many times. You can get a list of all runs using the runs attribute of a PipelineVersionView:
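A sketch; the versions attribute on the pipeline view is an assumption, and names are illustrative:

```python
from zenml.post_execution import get_pipeline

# Assumption: the fetched pipeline exposes its versions as a list,
# newest first; the pipeline name is illustrative.
version = get_pipeline(pipeline="first_pipeline").versions[0]
runs = version.runs  # all runs of this pipeline version
```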
And this is how we access the most recent run:
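A sketch under the same assumptions (names illustrative, newest-first sorting):

```python
from zenml.post_execution import get_pipeline

# Assumption: versions are listed newest first on the pipeline view.
version = get_pipeline(pipeline="first_pipeline").versions[0]
last_run = version.runs[0]  # runs are sorted newest first
```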
Runs on a PipelineVersionView are sorted from newest to oldest, with the most recent runs at the beginning of the list.
Getting runs from a pipeline instance:
Alternatively, you can also access the runs from the pipeline class/instance itself:
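A hedged sketch; that the resolved pipeline view exposes a runs attribute directly is an assumption:

```python
from zenml.post_execution import get_pipeline

# first_pipeline is the pipeline instance defined in your code.
# Assumption: passing the instance resolves to the same view,
# whose runs can then be listed.
runs = get_pipeline(pipeline=first_pipeline).runs
```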
Directly getting a run
Finally, you can also access a run directly with get_run(run_name=...):
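A minimal sketch; the run name is an illustrative placeholder:

```python
from zenml.post_execution import get_run

# The run name is an illustrative placeholder.
run = get_run(run_name="first_pipeline-2023_01_01-00_00_00_000000")
```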
Use the CLI
You can also access your runs through the CLI by executing the following command on the terminal:
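For example:

```shell
zenml pipeline runs list
```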
Runs configuration
Each run has a collection of useful metadata which you can access to ensure all runs are reproducible.
Git SHA
The Git commit SHA that the pipeline run was performed on. This will only be set if the pipeline code is in a git repository and there are no uncommitted files when running the pipeline.
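A sketch; the git_sha attribute and the run name are assumptions to verify against your ZenML version:

```python
from zenml.post_execution import get_run

run = get_run(run_name="first_pipeline-2023_01_01-00_00_00_000000")  # illustrative name
# Assumption: the run view exposes the commit as git_sha;
# it is None if the code was not run from a clean git repository.
print(run.git_sha)
```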
Status
The status of a pipeline run can also be found here. There are four possible states: failed, completed, running, and cached:
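A sketch, with an illustrative run name:

```python
from zenml.post_execution import get_run

run = get_run(run_name="first_pipeline-2023_01_01-00_00_00_000000")  # illustrative name
print(run.status)  # one of: failed, completed, running, cached
```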
Configuration
The pipeline_configuration is an object that contains all configurations of the pipeline and pipeline run, including pipeline-level BaseSettings, which we will learn more about later. You can also access the settings directly via the settings variable.
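A sketch, with an illustrative run name:

```python
from zenml.post_execution import get_run

run = get_run(run_name="first_pipeline-2023_01_01-00_00_00_000000")  # illustrative name
config = run.pipeline_configuration  # all pipeline / run configuration
settings = run.settings              # pipeline-level settings
```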
Docstring
If you wrote a docstring into your pipeline function, you can retrieve it here as well:
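A hedged sketch; where exactly the docstring is exposed is an assumption, so verify against your ZenML version:

```python
from zenml.post_execution import get_run

run = get_run(run_name="first_pipeline-2023_01_01-00_00_00_000000")  # illustrative name
# Assumption: the docstring is exposed on the run's configuration;
# the exact attribute may differ between ZenML versions.
print(run.pipeline_configuration.docstring)
```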
Component-specific metadata
Depending on the stack components you use, you might have additional component-specific metadata associated with your run, such as the URL to the UI of a remote orchestrator. You can access this component-specific metadata via the metadata attribute:
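A sketch; the available keys depend on your stack components, and "orchestrator_url" is an illustrative example:

```python
from zenml.post_execution import get_run

run = get_run(run_name="first_pipeline-2023_01_01-00_00_00_000000")  # illustrative name
# The available keys depend on your stack components;
# "orchestrator_url" is an illustrative example.
print(run.metadata["orchestrator_url"].value)
```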
Steps
Within a given pipeline run you can now further zoom in on individual steps using the steps attribute or by querying a specific step using the get_step(step=...) method.
If you're only calling each step once inside your pipeline, the invocation ID will be the same as the name of your step. For more complex pipelines, check out this page to learn more about the invocation ID.
The steps are ordered by the time of execution. Depending on the orchestrator, steps can run in parallel, so accessing steps by index is unreliable across different runs. Instead, you should access steps by the step class, an instance of the class, or the name of the step as a string: get_step(step=...).
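A sketch, with illustrative run and step names:

```python
from zenml.post_execution import get_run

run = get_run(run_name="first_pipeline-2023_01_01-00_00_00_000000")  # illustrative name
steps = run.steps                       # all steps, ordered by execution time
step = run.get_step(step="first_step")  # illustrative step name
```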
Similar to the run, for reproducibility, you can use the step object to access:
- The parameters used to run the step via step.parameters,
- The step-level settings via step.step_configuration,
- Component-specific step metadata, such as the URL of an experiment tracker or model deployer, via step.metadata,
- Input and output artifacts.
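A sketch covering the attributes above, with illustrative names:

```python
from zenml.post_execution import get_run

# Run and step names are illustrative placeholders.
step = get_run(run_name="first_pipeline-2023_01_01-00_00_00_000000").get_step(step="first_step")
print(step.parameters)          # parameters used to run the step
print(step.step_configuration)  # step-level settings
print(step.metadata)            # component-specific step metadata
```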
Outputs
Finally, this is how you can inspect the output of a step:
- If there is only a single output, use the output attribute.
- If there are multiple outputs, use the outputs attribute, which is a dictionary that can be indexed using the name of an output.

The names of the outputs can be found in the Output typing of your steps:
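A sketch of both cases, with illustrative names:

```python
from zenml.post_execution import get_run

# Run, step, and output names are illustrative placeholders.
step = get_run(run_name="first_pipeline-2023_01_01-00_00_00_000000").get_step(step="first_step")

# Single output:
output = step.output.read()

# Multiple outputs, indexed by name:
output = step.outputs["output_name"].read()
```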
Visualizing Artifacts
ZenML automatically saves visualizations for many common data types. For instance, 3D NumPy arrays with three channels are automatically visualized as images, and data validation reports as embedded HTML visualizations. In Jupyter Notebooks, you can view the visualization of an artifact using the visualize() method:
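A sketch; names are illustrative, and this renders only in Jupyter notebooks:

```python
from zenml.post_execution import get_run

# Names are illustrative; works in Jupyter notebooks.
step = get_run(run_name="first_pipeline-2023_01_01-00_00_00_000000").get_step(step="first_step")
step.output.visualize()
```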
If you want to visualize multiple artifacts generated by the same step or pipeline run, you can also call visualize() on the step or run directly:
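A sketch, with illustrative names:

```python
from zenml.post_execution import get_run

run = get_run(run_name="first_pipeline-2023_01_01-00_00_00_000000")  # illustrative name
run.visualize()                              # all artifacts of the run
run.get_step(step="first_step").visualize()  # all artifacts of one step
```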
In all other runtime environments, please open your ZenML dashboard using zenml up and view the visualizations by clicking on the respective artifact in the pipeline run DAG.
Output Artifact Metadata
All output artifacts saved through ZenML will automatically have certain datatype-specific metadata saved with them. NumPy arrays, for instance, always have their storage size, shape, dtype, and some statistical properties saved with them. You can access such metadata via the metadata attribute of an output, e.g.:
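A hedged sketch; the metadata key and its value wrapper are assumptions, and the available keys depend on the artifact's data type:

```python
from zenml.post_execution import get_run

# Names and the "shape" metadata key are illustrative; available
# keys depend on the artifact's data type.
output = get_run(run_name="first_pipeline-2023_01_01-00_00_00_000000").get_step(step="first_step").output
print(output.metadata["shape"].value)
```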
Code Example
Putting it all together, this is how we can access the output of the last step of our example pipeline from the previous sections:
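A sketch; names are illustrative, and it assumes the pipeline has at least one finished run with runs sorted newest first:

```python
from zenml.post_execution import get_pipeline

# Names are illustrative placeholders.
pipeline = get_pipeline(pipeline="first_pipeline")
last_run = pipeline.runs[0]     # assumption: runs are newest first
last_step = last_run.steps[-1]  # steps are ordered by execution time
model = last_step.output.read()
```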
or alternatively:
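The same lookup as a single chained call; run and step names are illustrative placeholders:

```python
from zenml.post_execution import get_run

# Run and step names are illustrative placeholders.
model = (
    get_run(run_name="first_pipeline-2023_01_01-00_00_00_000000")
    .get_step(step="last_step")
    .output.read()
)
```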
Final note
While most of this document has focused on the so-called post-execution workflow (i.e., fetching objects after a pipeline has completed), it can also be used within the context of a running pipeline.
This is often desirable in cases where a pipeline is running continuously over time and decisions have to be made according to older runs.
E.g., we can fetch from within a step the last pipeline run for the same pipeline:
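A hedged sketch of fetching earlier runs from inside a step; the names and the index of the previous run are assumptions:

```python
from zenml.post_execution import get_pipeline
from zenml.steps import step

@step
def compare_with_previous() -> None:
    # Illustrative sketch: inside a step, fetch earlier runs of the
    # same pipeline. With newest-first sorting, runs[0] may be the
    # currently executing run, so the previous finished run would be
    # runs[1] (assumption; verify against your ZenML version).
    pipeline = get_pipeline(pipeline="first_pipeline")
    previous_run = pipeline.runs[1]
```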
You can get a lot more metadata within a step as well, something we'll learn in more detail in the advanced docs.