Control caching behavior

By default, steps in ZenML pipelines are cached: a step is skipped and its previous output reused whenever its code and parameters are unchanged.

from zenml import pipeline, step

@step(enable_cache=True) # set cache behavior at the step level
def load_data(parameter: int) -> dict:
    ...

@step(enable_cache=False) # step-level settings override the pipeline level
def train_model(data: dict) -> None:
    ...

@pipeline(enable_cache=True) # set cache behavior at the pipeline level
def simple_ml_pipeline(parameter: int):
    ...

Note that even with caching enabled, a step is only cached when both its code and its parameters are unchanged from a previous run; any change to either invalidates the cached result.
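To make the rule concrete, here is a pure-Python sketch of keying a cache on a step's code and parameters. This is not ZenML's actual hashing scheme, and `cache_key` is a hypothetical helper; it only illustrates why changing either the code or a parameter triggers a rerun:

```python
import hashlib
import json

def cache_key(func, params: dict) -> str:
    """Hypothetical: derive a cache key from a step's compiled code and its parameters."""
    payload = func.__code__.co_code + json.dumps(params, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def load_data(parameter: int) -> dict:
    return {"value": parameter}

# Same code and same parameters produce the same key, so the step could be cached;
# changing a parameter changes the key, so the step would rerun.
same = cache_key(load_data, {"parameter": 42}) == cache_key(load_data, {"parameter": 42})
changed = cache_key(load_data, {"parameter": 43}) != cache_key(load_data, {"parameter": 42})
```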

Like many other step and pipeline settings, you can also change this afterward:

# Same as passing it in the step decorator
my_step.configure(enable_cache=...)

# Same as passing it in the pipeline decorator
my_pipeline.configure(enable_cache=...)
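The precedence between the two levels can be sketched as a small resolution function. `effective_cache` is a hypothetical helper, not part of the ZenML API; it just encodes the rule stated above that step-level settings win over pipeline-level ones, with caching on by default:

```python
def effective_cache(step_setting, pipeline_setting, default=True):
    """Hypothetical: resolve enable_cache -- step level wins over pipeline level,
    which wins over the default."""
    if step_setting is not None:
        return step_setting
    if pipeline_setting is not None:
        return pipeline_setting
    return default

# step-level False overrides pipeline-level True
assert effective_cache(False, True) is False
# with no step-level setting, the pipeline-level value applies
assert effective_cache(None, False) is False
# with neither set, caching is on by default
assert effective_cache(None, None) is True
```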
