Perform Drift Detection
Guard against data drift with our Evidently integration.
Data drift is something you often want to guard against in your pipelines. Machine learning pipelines are built on top of data inputs, so if a model was trained on a certain distribution of data, it is worth checking whether new data still matches that distribution. What follows is an example of how to use one drift detection tool that ZenML currently integrates with, in the form of a standard step that performs the relevant calculations.

🗺 Overview

Evidently is a useful open-source library for painlessly checking for data drift (among other features). At its core, Evidently's drift detection takes in a reference dataset and compares it against a comparison dataset. Both are passed in as Pandas DataFrames, though CSV inputs are also possible. The results come back either as a standard dictionary object containing all the relevant information or as a visualization; ZenML supports both outputs.
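To make the reference/comparison idea concrete, here is a minimal, library-free sketch of the same concept. This is NOT Evidently's actual algorithm (which uses proper statistical tests such as Kolmogorov-Smirnov); it merely flags drift when the comparison sample's mean shifts by more than a few reference standard deviations:

```python
from statistics import mean, stdev

def naive_drift_check(reference, comparison, threshold=2.0):
    """Toy drift check: flag drift when the comparison mean moves more
    than `threshold` reference standard deviations from the reference mean."""
    ref_mean, ref_std = mean(reference), stdev(reference)
    shift = abs(mean(comparison) - ref_mean)
    return {"drift_detected": shift > threshold * ref_std, "shift": shift}

reference = [10.0, 10.2, 9.8, 10.1, 9.9, 10.0]
comparison = [13.0, 13.1, 12.9, 13.2, 12.8, 13.0]
print(naive_drift_check(reference, comparison)["drift_detected"])  # True
```

Real drift detection compares whole distributions rather than single summary statistics, which is exactly the heavy lifting Evidently does for you.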
ZenML implements this functionality in the form of several standardized steps. You select which profile section you want to use in your step by passing a string into the EvidentlyProfileConfig. The options supported by Evidently are:
  • "datadrift"
  • "categoricaltargetdrift"
  • "numericaltargetdrift"
  • "classificationmodelperformance"
  • "regressionmodelperformance"
  • "probabilisticmodelperformance"
  • "dataquality" (NOT CURRENTLY IMPLEMENTED)

🧰 How to validate data inside a ZenML step

With Evidently, we compare two separate DataFrames. ZenML provides custom steps that you can set up for drift detection, as in the following code:
```python
from zenml.integrations.evidently.steps import (
    EvidentlyProfileConfig,
    EvidentlyProfileStep,
)

# instead of defining the step yourself, we have done it for you
drift_detector = EvidentlyProfileStep(
    EvidentlyProfileConfig(
        column_mapping=None,
        profile_section="datadrift",
    )
)
```
Here you can see that defining the step is extremely simple using our class-based interface; after that, you just pass in the two DataFrames for the comparison to take place.
This could be done at the point when you are defining your pipeline:
```python
from zenml.integrations.constants import EVIDENTLY, SKLEARN
from zenml.pipelines import pipeline


@pipeline(required_integrations=[EVIDENTLY, SKLEARN])
def drift_detection_pipeline(
    data_loader,
    data_splitter,
    drift_detector,
    drift_analyzer,
):
    """Links all the steps together in a pipeline"""
    data = data_loader()
    reference_dataset, comparison_dataset = data_splitter(data)
    drift_report, _ = drift_detector(
        reference_dataset=reference_dataset,
        comparison_dataset=comparison_dataset,
    )
    drift_analyzer(drift_report)
```
For the full context of this code, please visit our drift_detection example here. The key part of the pipeline definition above is where we take the datasets produced by the data_splitter step and pass them as arguments to the drift_detector step.
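As a rough illustration of what a data_splitter step might do, here is a hypothetical sketch (the actual example's splitter may differ) that splits a DataFrame down the middle so that earlier rows serve as the reference dataset and later rows as the comparison dataset:

```python
import pandas as pd

def split_reference_comparison(df: pd.DataFrame):
    """Hypothetical splitter: first half of the rows becomes the
    reference dataset, second half the comparison dataset."""
    midpoint = len(df) // 2
    return df.iloc[:midpoint], df.iloc[midpoint:]

df = pd.DataFrame({"feature": range(10)})
reference_dataset, comparison_dataset = split_reference_comparison(df)
print(len(reference_dataset), len(comparison_dataset))  # 5 5
```

A chronological split like this is a common choice when you want to check whether recently collected data has drifted away from the data the model was trained on.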
You can also use the Evidently visualization tool to display data drift diagrams in your browser or within a Jupyter notebook:
[Image: Evidently drift visualization UI]
Simple code like the following lets you access the Evidently visualizer for a completed pipeline run:
```python
from zenml.integrations.evidently.visualizers import EvidentlyVisualizer
from zenml.repository import Repository

repo = Repository()
pipe = repo.get_pipelines()[-1]
evidently_outputs = pipe.runs[-1].get_step(name="drift_detector")
EvidentlyVisualizer().visualize(evidently_outputs)
```