The deployer is responsible for serving a trained model to an endpoint. It can be added to a TrainingPipeline through the add_deployment() method.

Standard Deployers

ZenML ships with standard deployers for common deployment scenarios.


GCAIPDeployer

Deploys the model directly to a Google Cloud AI Platform endpoint.

from zenml.core.steps.deployer.gcaip_deployer import GCAIPDeployer



Currently, the GCAIPDeployer only works with Trainers that fully implement the TFBaseTrainerStep interface. An example is the standard tf_ff_trainer.FeedForwardTrainer step.
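Putting it together, attaching the deployer to a pipeline might look like the sketch below. The constructor arguments (`project_id`, `model_name`) and the pipeline setup are assumptions for illustration; check the GCAIPDeployer signature in your ZenML version.

```python
from zenml.core.pipelines.training_pipeline import TrainingPipeline
from zenml.core.steps.deployer.gcaip_deployer import GCAIPDeployer

training_pipeline = TrainingPipeline(name='deployment-example')

# ... add datasource, split, preprocesser, and trainer steps here ...

# Hypothetical arguments: project_id and model_name are placeholders
training_pipeline.add_deployment(
    GCAIPDeployer(
        project_id='my-gcp-project',   # assumed GCP project id
        model_name='my_model',         # assumed AI Platform model name
    )
)

training_pipeline.run()
```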

How to make a request to your served model

Google Cloud AI Platform uses TensorFlow Serving (TFServing) under the hood. TFServing defines a standard protocol for communicating with a served model.

A good example of requesting predictions from TFServing models can be found here.
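As an illustration of the TFServing REST protocol, the body of a predict request is a JSON object with an `instances` key holding a batch of inputs. A minimal sketch of building such a payload (the feature names here are placeholders, not part of any real model):

```python
import json

# Hypothetical feature names and values; replace with your model's input schema.
instances = [
    {"feature_a": 0.5, "feature_b": 1.2},
    {"feature_a": 0.1, "feature_b": 3.4},
]

# TFServing's REST predict API expects a body of the form {"instances": [...]}
payload = json.dumps({"instances": instances})

# For a model on AI Platform, this body would be POSTed to an endpoint like:
# https://ml.googleapis.com/v1/projects/<project>/models/<model>:predict
print(payload)
```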

Create custom deployer

The mechanism for creating a custom Deployer is still being worked out and will be documented here in a future release.

If you need this functionality sooner, ping us on our Slack or create an issue on GitHub so that we know about it!

Downloading a trained model

The trained model is written to the TrainerStep's artifact directory. You can retrieve this URI directly from a pipeline object.
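A sketch of what that retrieval might look like. The repository lookup and the method name `get_artifacts_uri_by_component` are taken from the legacy ZenML API and are assumptions here; verify them against the API of your installed version.

```python
from zenml.core.repo.repo import Repository

# Assumed legacy API: fetch the repository singleton and look up a pipeline.
repo = Repository.get_instance()
pipeline = repo.get_pipeline_by_name('deployment-example')  # hypothetical name

# Assumed method: returns the artifact URI(s) produced by the Trainer step,
# which is where the trained model lives.
model_uri = pipeline.get_artifacts_uri_by_component('Trainer')
print(model_uri)
```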