Step Operators
Executing individual steps in specialized environments.
The step operator enables the execution of individual pipeline steps in specialized runtime environments that are optimized for certain workloads. These specialized environments can give your steps access to resources like GPUs or distributed processing frameworks like Spark.
Comparison to orchestrators: The orchestrator is a mandatory stack component responsible for executing all steps of a pipeline in the correct order and providing additional features such as scheduling pipeline runs. The step operator, on the other hand, is used to execute only individual steps of the pipeline in a separate environment when the environment provided by the orchestrator is not suitable.
A step operator should be used if one or more steps of a pipeline require resources that are not available in the runtime environments provided by the orchestrator. An example would be a step that trains a computer vision model and requires a GPU to run in a reasonable time, combined with a Kubeflow orchestrator running on a Kubernetes cluster that does not contain any GPU nodes. In that case, it makes sense to include a step operator like SageMaker, Vertex, or AzureML to execute the training step with a GPU.
Step operators to execute steps on one of the big cloud providers are provided by the following ZenML integrations:
Step Operator | Flavor | Integration | Notes |
---|---|---|---|
SageMaker | sagemaker | aws | Uses SageMaker to execute steps |
Vertex | vertex | gcp | Uses Vertex AI to execute steps |
AzureML | azureml | azure | Uses AzureML to execute steps |
Kubernetes | kubernetes | kubernetes | Uses Kubernetes Pods to execute steps |
Spark | spark | spark | Uses Spark on Kubernetes to execute steps in a distributed manner |
Custom Implementation | custom | | Extend the step operator abstraction and provide your own implementation |
If you would like to see the available flavors of step operators, you can use the command:
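```shell
zenml step-operator flavor list
```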
You don't need to directly interact with any ZenML step operator in your code. As long as the step operator that you want to use is part of your active ZenML stack, you can simply specify it in the @step decorator of your step.
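For example, a minimal sketch, assuming a recent ZenML version and using a placeholder for the name under which the step operator is registered in your stack:

```python
from zenml import step


@step(step_operator="<STEP_OPERATOR_NAME>")  # name of the step operator in your active stack
def training_step() -> None:
    """This step runs in the environment provided by the step operator."""
    ...
```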
If your steps require additional hardware resources, you can specify them on your steps as described here.
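As a rough sketch, resource requests can be attached to a step via ZenML's ResourceSettings; the concrete values below are illustrative, and whether they are honored depends on the step operator flavor you use:

```python
from zenml import step
from zenml.config import ResourceSettings


@step(
    step_operator="<STEP_OPERATOR_NAME>",
    # Illustrative resource request; supported fields and limits vary per step operator.
    settings={"resources": ResourceSettings(cpu_count=8, gpu_count=1, memory="16GB")},
)
def training_step() -> None:
    ...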
Note that if you wish to use step operators to run steps on a GPU, you will need to follow the instructions on this page to ensure that it works. It requires some additional settings customization and is essential to enable CUDA so that the GPU can deliver its full acceleration.
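In short, this typically involves building the step's container from a CUDA-enabled parent image. The following is only a sketch, assuming Docker-based image building; the parent image tag and requirements are illustrative and should match your framework and CUDA version:

```python
from zenml import pipeline
from zenml.config import DockerSettings

docker_settings = DockerSettings(
    # Illustrative CUDA-enabled parent image; choose one matching your framework and CUDA version.
    parent_image="pytorch/pytorch:1.12.1-cuda11.3-cudnn8-runtime",
    requirements=["zenml", "torchvision"],
)


@pipeline(settings={"docker": docker_settings})
def gpu_pipeline():
    ...
```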