Module core.pipelines.deploy_pipeline


DeploymentPipeline(model_uri: str, name: str = None, enable_cache: Union[bool, NoneType] = True, steps_dict: Dict[str, zenml.core.steps.base_step.BaseStep] = None, backend: zenml.core.backends.orchestrator.base.orchestrator_base_backend.OrchestratorBaseBackend = None, metadata_store: Union[zenml.core.metadata.metadata_wrapper.ZenMLMetadataStore, NoneType] = None, artifact_store: Union[zenml.core.repo.artifact_store.ArtifactStore, NoneType] = None, datasource: Union[zenml.core.datasources.base_datasource.BaseDatasource, NoneType] = None, pipeline_name: Union[str, NoneType] = None) : DeploymentPipeline definition to run deployment pipelines.

Construct a deployment pipeline. This is a pipeline that deploys a
model to a target and can be controlled via a DeployerStep.

    name: Outward-facing name of the pipeline.
    model_uri: URI for a model, usually generated by a
     TrainingPipeline.
    pipeline_name: A unique name that identifies the pipeline after
     it is run.
    enable_cache: Boolean indicating whether caching
     should be used.
    steps_dict: Optional dict of steps.
    backend: Orchestrator backend.
    metadata_store: Configured metadata store. If None,
     the default metadata store is used.
    artifact_store: Configured artifact store. If None,
     the default artifact store is used.
    datasource: Optional datasource for the pipeline.
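The construction and step-registration pattern described above can be sketched as follows. This is a minimal, self-contained sketch using stand-in classes rather than the real ZenML API; the `model_uri` value and the `"deployer"` key are illustrative assumptions.

```python
# Minimal stand-ins that mimic the DeploymentPipeline interface documented
# above; the real classes live under zenml.core.pipelines and zenml.core.steps.
class BaseDeployerStep:
    """Stand-in for zenml.core.steps.deployer.base_deployer.BaseDeployerStep."""


class DeploymentPipeline:
    """Stand-in sketch of the documented constructor and add_deployment."""

    def __init__(self, model_uri, name=None, enable_cache=True, steps_dict=None):
        self.model_uri = model_uri          # URI of the trained model to deploy
        self.name = name                    # outward-facing pipeline name
        self.enable_cache = enable_cache    # whether step caching is used
        self.steps_dict = steps_dict or {}  # registry of configured steps

    def add_deployment(self, deployment_step):
        # Register the deployer step that controls the deployment target.
        self.steps_dict["deployer"] = deployment_step


# Hypothetical usage pattern:
pipeline = DeploymentPipeline(
    model_uri="gs://my-bucket/models/serving_model_dir",  # e.g. from a TrainingPipeline
    name="deploy-my-model",
    enable_cache=True,
)
pipeline.add_deployment(BaseDeployerStep())
print(sorted(pipeline.steps_dict))  # prints "['deployer']"
```

The key design point is that the deployer step is registered under a known key in `steps_dict`, so the pipeline can later verify that all required steps are configured before running.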

### Ancestors (in MRO)

* zenml.core.pipelines.base_pipeline.BasePipeline

### Methods

`add_deployment(self, deployment_step: zenml.core.steps.deployer.base_deployer.BaseDeployerStep)`
:   Adds a deployment step to the pipeline.

`get_tfx_component_list(self, config: Dict[str, Any]) ‑> List`
:   Creates an inference pipeline out of TFX components.
    An inference pipeline is used to run a batch of data through an
    ML model via the BulkInferrer TFX component.
        config: Dict. Contains a ZenML configuration used to build the
         data pipeline.
        Returns a list of TFX components making up the data pipeline.

`steps_completed(self) ‑> bool`
:   Returns True if all steps are complete, else raises an exception.
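The `steps_completed` contract (return True when every required step is configured, raise otherwise) can be sketched as below. This is a stand-in illustration, not the real implementation: the required-step key `"deployer"` and the `StepsIncompleteError` exception type are assumptions.

```python
class StepsIncompleteError(Exception):
    """Hypothetical error raised when required pipeline steps are missing."""


def steps_completed(steps_dict, required=("deployer",)):
    # Return True only if every required step key is present in the
    # pipeline's step registry; otherwise raise, mirroring the
    # documented "else raises exception" behaviour.
    missing = [key for key in required if key not in steps_dict]
    if missing:
        raise StepsIncompleteError(f"Missing steps: {missing}")
    return True
```

Raising instead of returning False forces callers (such as a pipeline `run` method) to configure all steps before execution, rather than silently skipping an incomplete pipeline.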