⛓️ Build a pipeline

Building pipelines is as simple as adding the `@step` and `@pipeline` decorators to your code.

```python
from zenml import pipeline, step


@step  # Just add this decorator
def load_data() -> dict:
    training_data = [[1, 2], [3, 4], [5, 6]]
    labels = [0, 1, 0]
    return {'features': training_data, 'labels': labels}


@step
def train_model(data: dict) -> None:
    total_features = sum(map(sum, data['features']))
    total_labels = sum(data['labels'])

    # Train some model here

    print(f"Trained model using {len(data['features'])} data points. "
          f"Feature sum is {total_features}, label sum is {total_labels}")


@pipeline  # This function combines steps together
def simple_ml_pipeline():
    dataset = load_data()
    train_model(dataset)
```

You can now run this pipeline by simply calling the function:

```python
simple_ml_pipeline()
```
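
If you want to adjust how a run executes, you can configure the pipeline object before calling it. The snippet below is a minimal sketch assuming a recent ZenML release where pipelines expose `with_options`; disabling caching is just one example of what can be configured:

```python
# A minimal sketch: configure the run before calling it.
# Assumes a recent ZenML release where pipelines expose `with_options`.
configured = simple_ml_pipeline.with_options(enable_cache=False)
configured()
```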

When this pipeline is executed, the run is logged to the ZenML dashboard, where you can inspect its DAG and all the associated metadata. To access the dashboard, you need a ZenML server running either locally or remotely; see our documentation for details.
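
You can also inspect runs programmatically. The snippet below is a minimal sketch using `zenml.client.Client`, assuming the pipeline above has already been executed at least once:

```python
from zenml.client import Client

# Fetch the pipeline by name and look at its most recent run.
# Assumes `simple_ml_pipeline` has been run at least once.
run = Client().get_pipeline("simple_ml_pipeline").last_run
print(run.name, run.status)
```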

Check below for more advanced ways to build and interact with your pipeline.
