ZenML comes equipped with numerous classic techniques to interact with the artifacts once a training pipeline has finished.
First of all, you can take a peek at the schema of your dataset by using the view_schema method of your pipeline instance.
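For a pipeline instance named training_pipeline (the same name used in the snippets below), the call would look roughly like this:

```python
# Display the schema of the dataset used by this training pipeline
training_pipeline.view_schema()
```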
Furthermore, you can check the statistics yielded by your datasource and split configuration through the view_statistics method:

```python
training_pipeline.view_statistics()  # try setting magic=True in a Jupyter Notebook
```
You can also evaluate the results of your training by using the evaluate method of your pipeline. By default, it will create and serve a notebook with two distinct cells, each dedicated to a different tool:
* Tensorboard can help you understand the behavior of your model during the training session.
* TFMA (tensorflow_model_analysis) can help you assess your already trained model based on the given metrics and slices on the evaluation dataset.
```python
training_pipeline.evaluate()  # try setting magic=True in a Jupyter Notebook
```
Note: If you have already set up slices in the configuration of the evaluator and want to see the sliced results, uncomment the last line of the TFMA cell and adjust it according to your slicing column. In the end, it should look something like the snippet below.
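As an illustrative sketch only: this assumes the generated notebook already holds the evaluation result in a variable called evaluation, and it uses a hypothetical slicing column name 'my_slicing_column' that you would replace with your own.

```python
import tensorflow_model_analysis as tfma

# Render the evaluation results sliced by the (hypothetical) column 'my_slicing_column'
tfma.view.render_slicing_metrics(evaluation, slicing_column='my_slicing_column')
```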
Evaluation, however, should go beyond individual pipeline executions. A direct comparison of the pipelines within a repository allows you to judge the performance and results of different configurations against each other.
```python
from zenml.repo import Repository

repo: Repository = Repository.get_instance()
repo.compare_training_runs()
```
This will open a local web app in your browser, which helps you compare the results of different pipeline runs.