Hyperparameter tuning
Running a hyperparameter tuning trial with ZenML.
Hyperparameter tuning is not yet a first-class citizen in ZenML, but it is high up on our roadmap and will likely receive first-class support soon. In the meantime, the following example shows how hyperparameter tuning can currently be implemented within a ZenML run.
A basic iteration through a number of hyperparameters can be achieved with ZenML by using a simple pipeline like this:
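As a minimal sketch of the pattern in plain Python (all function names, the toy model, and the hyperparameter grid below are hypothetical; a real ZenML version would wrap these functions with ZenML's step and pipeline decorators):

```python
# Plain-Python sketch of a single-dimension grid search over learning rates.
# In a real ZenML pipeline, train_step and the selection logic would each be
# ZenML steps; here they are ordinary functions so the control flow is visible.

def train_step(learning_rate: float, epochs: int = 100) -> dict:
    """Fit a 1-D linear model y = w * x to toy data via gradient descent."""
    xs = [0.0, 1.0, 2.0, 3.0]
    ys = [0.0, 2.0, 4.0, 6.0]  # true weight is 2.0
    w = 0.0
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= learning_rate * grad
    loss = sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)
    return {"weight": w, "loss": loss}

def hp_tuning_pipeline(learning_rates: list[float]) -> dict:
    """Run train_step once per learning rate, then pick the best result."""
    results = {lr: train_step(lr) for lr in learning_rates}
    best_lr = min(results, key=lambda lr: results[lr]["loss"])
    return {"best_lr": best_lr, **results[best_lr]}

best = hp_tuning_pipeline([1e-4, 1e-2, 1e-1])
```

In ZenML, each call to the training step inside the pipeline loop would become its own step invocation, with the learning rate passed in as a step parameter.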
This is an implementation of a basic grid search (across a single dimension) that allows a different learning rate to be used in each invocation of the same train_step. Once that step has run for all the learning rates, the select_model_step determines which hyperparameters yielded the best performance.
The main challenge of this implementation is that it is currently not possible to programmatically pass a variable number of artifacts into a step, so the select_model_step needs to query all artifacts produced by the previous steps via the ZenML Client instead:
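The query-and-select logic can be sketched framework-free as follows (a plain dict stands in for the run metadata; in real ZenML code that dict would come from the Client's representation of the pipeline run, and each output artifact would be loaded explicitly; all names and metric values here are hypothetical):

```python
# Sketch of select_model_step: instead of receiving the trained models as
# step inputs, the step looks up every train_step output from the completed
# run and picks the best one. A plain dict stands in for the run metadata
# that the ZenML Client would provide.

def select_model_step(run_steps: dict) -> dict:
    candidates = {}
    for step_name, outputs in run_steps.items():
        # Only the parallel training steps are of interest; other steps
        # in the run (e.g. data loading) are skipped.
        if step_name.startswith("train_step"):
            candidates[step_name] = outputs
    # "Best" here means lowest validation loss; any metric would do.
    best_name = min(candidates, key=lambda name: candidates[name]["loss"])
    return {"step": best_name, **candidates[best_name]}

# Simulated run metadata: three train_step invocations plus an unrelated step.
run_steps = {
    "load_data_step": {"rows": 1000},
    "train_step_0": {"learning_rate": 1e-4, "loss": 0.41},
    "train_step_1": {"learning_rate": 1e-2, "loss": 0.07},
    "train_step_2": {"learning_rate": 1e-1, "loss": 0.19},
}
best = select_model_step(run_steps)
# best["step"] == "train_step_1"
```

Filtering by a step-name prefix is what makes the variable number of training steps tractable: however many hyperparameter combinations the pipeline loops over, the selection step discovers them all from the run rather than from its own signature.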