Fetching historic runs using StepContext
All code in this guide can be found [here](https://github.com/zenml-io/zenml/tree/main/examples/fetch_historical_runs).

The need to fetch historic runs

Sometimes it is necessary to fetch information from previous runs in order to make a decision within a currently executing step. Examples include:
  • Fetching the best model evaluation results from history to decide whether to deploy a newly-trained model.
  • Fetching the best model out of a list of trained models.
  • Fetching the latest model before running an inference.

Utilizing StepContext

ZenML allows users to fetch historical parameters and artifacts using the StepContext fixture.
As an example, the following step uses the StepContext to query the metadata store at runtime. We use it to evaluate the models of all past training pipeline runs and store the current best model. In our inference pipeline, we can then easily query the metadata store to fetch the best-performing model.
```python
import numpy as np
from sklearn.base import ClassifierMixin

from zenml.steps import StepContext, step


@step
def evaluate_and_store_best_model(
    context: StepContext,
    X_test: np.ndarray,
    y_test: np.ndarray,
    model: ClassifierMixin,
) -> ClassifierMixin:
    """Evaluate all models and return the best one."""
    best_accuracy = model.score(X_test, y_test)
    best_model = model

    pipeline_runs = context.metadata_store.get_pipeline("mnist_pipeline").runs
    for run in pipeline_runs:
        # get the trained model of all pipeline runs
        model = run.get_step("trainer").output.read()
        accuracy = model.score(X_test, y_test)
        if accuracy > best_accuracy:
            # if the model accuracy is better than our currently-best model,
            # store it
            best_accuracy = accuracy
            best_model = model

    print(f"Best test accuracy: {best_accuracy}")
    return best_model
```
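An inference step can then retrieve the stored best model through the same API. The following is a minimal sketch, assuming the evaluation step above runs in a pipeline registered under the hypothetical name `best_model_pipeline`:

```python
import numpy as np
from sklearn.base import ClassifierMixin

from zenml.steps import StepContext, step


@step
def inference(context: StepContext, X: np.ndarray) -> np.ndarray:
    """Predict with the best model from the latest training run."""
    # "best_model_pipeline" is a hypothetical name for the pipeline that
    # contains the evaluate_and_store_best_model step shown above.
    last_run = context.metadata_store.get_pipeline("best_model_pipeline").runs[-1]
    best_model: ClassifierMixin = last_run.get_step(
        "evaluate_and_store_best_model"
    ).output.read()
    return best_model.predict(X)
```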
And that's it!