Module core.pipelines.infer_pipeline


`BatchInferencePipeline(name: str, *args, **kwargs)`
:   BatchInferencePipeline definition to run batch inference pipelines.

A BatchInferencePipeline is used to run batch inference with a previously trained model.

Constructs a base pipeline. This is a base interface that is meant
to be overridden by concrete pipeline use cases.

    name: Outward-facing name of the pipeline.
    pipeline_name: A unique name that identifies the pipeline after
     it is run.
    enable_cache: Boolean indicating whether caching
     should be used.
    steps_dict: Optional dict of steps.
    backends_dict: Optional dict of backends.
    metadata_store: Configured metadata store. If None,
     the default metadata store is used.
    artifact_store: Configured artifact store. If None,
     the default artifact store is used.
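
The constructor interface above can be sketched as a minimal stand-in class. This is a hypothetical illustration of the documented parameters only, not the real `zenml.core.pipelines.base_pipeline.BasePipeline` implementation; the `pipeline_name` derivation here is a simplified placeholder.

```python
from typing import Any, Dict, Optional


class BasePipelineSketch:
    """Illustrative stand-in for the BasePipeline constructor interface.

    Hypothetical sketch: the real ZenML class carries additional
    behavior (registration, run tracking, store resolution).
    """

    def __init__(
        self,
        name: str,
        enable_cache: bool = True,
        steps_dict: Optional[Dict[str, Any]] = None,
        backends_dict: Optional[Dict[str, Any]] = None,
        metadata_store: Optional[Any] = None,
        artifact_store: Optional[Any] = None,
    ):
        self.name = name                      # outward-facing name
        self.pipeline_name = f"{name}-run"    # unique run identifier (simplified)
        self.enable_cache = enable_cache
        self.steps_dict = steps_dict or {}
        self.backends_dict = backends_dict or {}
        # None means the configured defaults are used
        self.metadata_store = metadata_store
        self.artifact_store = artifact_store


pipe = BasePipelineSketch(name="batch-infer", enable_cache=False)
```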

### Ancestors (in MRO)

* zenml.core.pipelines.base_pipeline.BasePipeline

### Methods

`get_default_backends(self) ‑> Dict`
:   Returns a dict of default backends for this pipeline.

`get_tfx_component_list(self, config: Dict[str, Any]) ‑> List`
:   Converts the config into a list of TFX components. This is the point
    in the framework where ZenML Steps get translated into TFX components.

        config: dict of ZenML config.
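
The config-to-components translation can be sketched as a pure function. This is a hypothetical simplification: `components_from_config`, the `"steps"` key layout, and the `"tfx::"` name prefix are illustrative assumptions, not the actual ZenML/TFX types the real `get_tfx_component_list` returns.

```python
from typing import Any, Dict, List


def components_from_config(config: Dict[str, Any]) -> List[str]:
    """Hypothetical sketch: map each configured ZenML step to a named
    component, preserving the order steps appear in the config dict."""
    steps = config.get("steps", {})
    return [f"tfx::{step_name}" for step_name in steps]


comps = components_from_config({"steps": {"read": {}, "infer": {}}})
# → ['tfx::read', 'tfx::infer']
```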