Module core.pipelines.infer_pipeline¶
Classes¶
BatchInferencePipeline(name: str, *args, **kwargs)
: BatchInferencePipeline definition to run batch inference pipelines.
A BatchInferencePipeline is used to run an inference based on a
TrainingPipeline.
Construct a base pipeline. This is a base interface that is meant
to be overridden in multiple other pipeline use cases.
Args:
name: Outward-facing name of the pipeline.
pipeline_name: A unique name that identifies the pipeline after
it is run.
enable_cache: Boolean indicating whether caching
should be used.
steps_dict: Optional dict of steps.
backends_dict: Optional dict of backends.
metadata_store: Configured metadata store. If None,
the default metadata store is used.
artifact_store: Configured artifact store. If None,
the default artifact store is used.
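The constructor arguments above can be sketched with a minimal, self-contained stand-in. This is a hypothetical illustration of the base-pipeline pattern only (the `Sketch*` class names are invented), not the real implementation in `zenml.core.pipelines.base_pipeline`:

```python
from typing import Any, Dict, Optional


class SketchBasePipeline:
    """Hypothetical stand-in for the base pipeline interface described above."""

    PIPELINE_TYPE = "base"

    def __init__(self,
                 name: str,
                 enable_cache: bool = True,
                 steps_dict: Optional[Dict[str, Any]] = None,
                 backends_dict: Optional[Dict[str, Any]] = None):
        # Outward-facing name of the pipeline.
        self.name = name
        # Whether step outputs may be served from cache.
        self.enable_cache = enable_cache
        # Optional dicts default to empty mappings.
        self.steps_dict = steps_dict or {}
        self.backends_dict = backends_dict or {}


class SketchBatchInferencePipeline(SketchBasePipeline):
    """Subclass overrides the PIPELINE_TYPE class variable, as concrete
    pipelines do with the base interface."""

    PIPELINE_TYPE = "infer"


p = SketchBatchInferencePipeline(name="nightly-scoring")
print(p.PIPELINE_TYPE)
```

This mirrors the documented contract: the base class holds shared construction logic, and each concrete pipeline (such as `BatchInferencePipeline`) distinguishes itself via `PIPELINE_TYPE`.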
### Ancestors (in MRO)
* zenml.core.pipelines.base_pipeline.BasePipeline
### Class variables
`PIPELINE_TYPE`
:
### Methods
`get_default_backends(self) ‑> Dict`
: Gets a dict of default backends for this pipeline.
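Since the constructor also accepts a `backends_dict`, a natural use of the defaults is to merge them with user-supplied overrides. The sketch below is an assumption about that merging behavior (the `merge_backends` helper and the backend names are invented for illustration):

```python
from typing import Any, Dict


def merge_backends(defaults: Dict[str, Any],
                   overrides: Dict[str, Any]) -> Dict[str, Any]:
    # User-specified backends take precedence over the pipeline defaults.
    merged = dict(defaults)
    merged.update(overrides)
    return merged


# Hypothetical default backends, keyed by role, as a dict (matching the
# `-> Dict` return annotation above).
defaults = {"orchestrator": "local", "processing": "local"}
overrides = {"processing": "dataflow"}
print(merge_backends(defaults, overrides))
```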
`get_tfx_component_list(self, config: Dict[str, Any]) ‑> List`
: Converts config to a TFX components list. This is the point in the
framework where ZenML Steps get translated into TFX components.
Args:
config: dict of ZenML config.
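The translation step can be sketched as follows. This is a simplified assumption about the shape of the ZenML config dict (the `"steps"` key, the `"source"` field, and `config_to_components` are all invented here); the real method emits TFX component objects rather than strings:

```python
from typing import Any, Dict, List


def config_to_components(config: Dict[str, Any]) -> List[str]:
    """Hypothetical sketch: each step entry in the config dict becomes one
    entry in an ordered component list, analogous to how
    get_tfx_component_list() emits TFX components."""
    components = []
    for step_name, step_cfg in config.get("steps", {}).items():
        # Each configured step is translated into one pipeline component.
        components.append(f"{step_name}:{step_cfg['source']}")
    return components


config = {"steps": {"data": {"source": "csv"},
                    "infer": {"source": "model"}}}
print(config_to_components(config))
```

The key design point the docstring makes is that this method is the single boundary between ZenML's step abstraction and the TFX execution layer.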