Module core.backends.orchestrator.base.orchestrator_base_backend

Definition of the base Orchestrator Backend


OrchestratorBaseBackend(**kwargs) : The base ZenML orchestrator backend, which runs a ZenML pipeline locally on a single machine.

An orchestrator backend is responsible for scheduling, initializing, and
running the different components of a pipeline. Examples of orchestrators
are Apache Beam, Kubeflow, or (here) local orchestration.

Abstracting the pipeline logic from the orchestrator backend enables
machine learning workloads to run in different kinds of environments.
For larger, distributed data processing applications, a cloud-based
backend can be used to spread work across multiple machines. For quick
prototyping and local tests, a single-machine backend can be selected to
execute an ML pipeline with minimal orchestration overhead.
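The abstraction described above can be pictured as follows. Note that this is an illustrative sketch only: the `BaseBackend` and `LocalBackend` classes below are stand-ins, not the real ZenML classes, and the step/config shapes are assumptions chosen for the example.

```python
from typing import Any, Callable, Dict, List


class BaseBackend:
    """Common interface every orchestrator backend implements (stand-in)."""

    def run(self, steps: List[Callable[[Dict[str, Any]], None]],
            config: Dict[str, Any]) -> None:
        raise NotImplementedError


class LocalBackend(BaseBackend):
    """Runs each step in-process, in order: minimal orchestration overhead.

    A cloud-based backend could implement the same interface but dispatch
    each step to a remote worker instead.
    """

    def run(self, steps: List[Callable[[Dict[str, Any]], None]],
            config: Dict[str, Any]) -> None:
        for step in steps:
            step(config)


# A toy two-step pipeline that records what it did.
results: List[Any] = []


def ingest(cfg: Dict[str, Any]) -> None:
    results.append(("ingest", cfg["source"]))


def train(cfg: Dict[str, Any]) -> None:
    results.append(("train", cfg["epochs"]))


LocalBackend().run([ingest, train], {"source": "data.csv", "epochs": 3})
```

Because the pipeline only depends on the `BaseBackend` interface, swapping the local backend for a distributed one requires no changes to the steps themselves.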

### Ancestors (in MRO)

* zenml.core.backends.base_backend.BaseBackend

### Static methods

`get_tfx_pipeline(config: Dict[str, Any]) ‑> tfx.orchestration.pipeline.Pipeline`
:   Converts a ZenML config dict to a TFX pipeline.

    Args:
        config: A ZenML config dict.

    Returns:
        tfx_pipeline: A TFX pipeline object.
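The shape of such a conversion can be sketched as follows. This is not the real implementation: the `Pipeline` dataclass below is a stand-in for `tfx.orchestration.pipeline.Pipeline`, and the config keys (`name`, `steps`) are assumptions made for illustration.

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List


@dataclass
class Pipeline:
    # Stand-in for tfx.orchestration.pipeline.Pipeline (not the real class).
    pipeline_name: str
    components: List[str] = field(default_factory=list)


def get_tfx_pipeline(config: Dict[str, Any]) -> Pipeline:
    """Hypothetical conversion: read the pipeline name and step names out of
    a ZenML-style config dict and assemble a pipeline object from them."""
    return Pipeline(
        pipeline_name=config["name"],
        components=[step["name"] for step in config.get("steps", [])],
    )


p = get_tfx_pipeline({"name": "demo", "steps": [{"name": "trainer"}]})
```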

### Methods

`run(self, config: Dict[str, Any])`
:   Runs the pipeline by delegating to an underlying TFX orchestrator run.
    It is meant as a higher-level abstraction, with some opinionated
    decisions taken on the user's behalf.

    Args:
        config: A ZenML config dict.
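The "opinionated higher-level wrapper" idea can be illustrated like this. Everything here is hypothetical: `execute` stands in for the underlying TFX orchestrator run, and the default `run_name` key is an invented example of an opinionated decision, not ZenML's actual behavior.

```python
from typing import Any, Dict, List

log: List[str] = []


def execute(config: Dict[str, Any]) -> None:
    # Stand-in for the underlying TFX orchestrator run.
    log.append(config["run_name"])


def run(config: Dict[str, Any]) -> None:
    """Higher-level run: fill in sensible defaults, then delegate."""
    config = dict(config)  # don't mutate the caller's dict
    # Hypothetical opinionated default taken on the user's behalf.
    config.setdefault("run_name", config["name"] + "-run")
    execute(config)


run({"name": "demo"})
```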