Develop a Custom Model Deployer
How to develop a custom model deployer
To deploy and manage your trained machine learning models, ZenML provides a stack component called Model Deployer. This component is responsible for interacting with the deployment tool, framework, or platform.
When present in a stack, the model deployer can also act as a registry for models that are served with ZenML. You can use the model deployer to list all models that are currently deployed for online inference, filter them by a particular pipeline run or step, or suspend, resume, or delete an external model server managed through ZenML.
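For example, the model deployer registered in the active stack can be queried from Python code. The following is a minimal sketch only: it assumes a ZenML version where the active stack is reachable through zenml.repository.Repository, and the pipeline and step names are hypothetical placeholders.

```python
from zenml.repository import Repository

# Get the model deployer registered in the currently active stack
# (assumes the active stack actually contains a model deployer component).
model_deployer = Repository().active_stack.model_deployer

# List the model servers deployed by a particular pipeline step.
# The pipeline and step names below are placeholders for illustration.
services = model_deployer.find_model_server(
    pipeline_name="continuous_deployment_pipeline",
    pipeline_step_name="model_deployer_step",
    running=True,
)

for service in services:
    print(service.is_running)
```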
Base Abstraction
In ZenML, the base abstraction of the model deployer is built on top of three major criteria:
1. It needs to contain all the stack-related configuration attributes required to interact with the remote model serving tool, service or platform (e.g. hostnames, URLs, references to credentials, other client-related configuration parameters).
2. It needs to implement the continuous deployment logic necessary to deploy models in a way that updates an existing model server that is already serving a previous version of the same model, instead of creating a new model server for every new model version (see the `deploy_model` abstract method). This functionality can be consumed directly from ZenML pipeline steps, but it can also be used outside the pipeline to deploy ad-hoc models. It is also usually coupled with a standard model deployer step, implemented by each integration, that hides the details of the deployment process from the user.
3. It needs to act as a ZenML BaseService registry, where every BaseService instance is used as an internal representation of a remote model server (see the `find_model_server` abstract method). To achieve this, it must be able to re-create the configuration of a BaseService from information that is persisted externally, alongside or even as part of the remote model server configuration itself. For example, for model servers that are implemented as Kubernetes resources, the BaseService instances can be serialized and saved as Kubernetes resource annotations. This allows the model deployer to keep track of all externally running model servers and to re-create their corresponding BaseService instance representations at any given time. The model deployer also defines methods that implement basic life-cycle management on remote model servers outside the coverage of a pipeline (see the `stop_model_server`, `start_model_server` and `delete_model_server` abstract methods).
Putting all these considerations together, we end up with the following interface:
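(The sketch below approximates the abstract method signatures; the exact parameter names and defaults may differ between ZenML versions, so consult the API docs for the authoritative definitions.)

```python
from abc import ABC, abstractmethod
from typing import List, Optional
from uuid import UUID

from zenml.services import BaseService, ServiceConfig
from zenml.stack import StackComponent

DEFAULT_DEPLOYMENT_START_STOP_TIMEOUT = 300


class BaseModelDeployer(StackComponent, ABC):
    """Base class for all ZenML model deployers."""

    @abstractmethod
    def deploy_model(
        self,
        config: ServiceConfig,
        replace: bool = False,
        timeout: int = DEFAULT_DEPLOYMENT_START_STOP_TIMEOUT,
    ) -> BaseService:
        """Deploy a model, updating an existing server when `replace` is set."""

    @abstractmethod
    def find_model_server(
        self,
        running: bool = False,
        service_uuid: Optional[UUID] = None,
        pipeline_name: Optional[str] = None,
        pipeline_run_id: Optional[str] = None,
        pipeline_step_name: Optional[str] = None,
        model_name: Optional[str] = None,
        model_uri: Optional[str] = None,
        model_type: Optional[str] = None,
    ) -> List[BaseService]:
        """Find one or more model servers matching the given criteria."""

    @abstractmethod
    def stop_model_server(
        self,
        uuid: UUID,
        timeout: int = DEFAULT_DEPLOYMENT_START_STOP_TIMEOUT,
        force: bool = False,
    ) -> None:
        """Stop a model server managed through ZenML."""

    @abstractmethod
    def start_model_server(
        self,
        uuid: UUID,
        timeout: int = DEFAULT_DEPLOYMENT_START_STOP_TIMEOUT,
    ) -> None:
        """Start a previously stopped model server."""

    @abstractmethod
    def delete_model_server(
        self,
        uuid: UUID,
        timeout: int = DEFAULT_DEPLOYMENT_START_STOP_TIMEOUT,
        force: bool = False,
    ) -> None:
        """Delete a model server managed through ZenML."""
```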
This is a slimmed-down version of the base implementation which aims to highlight the abstraction layer. In order to see the full implementation and get the complete docstrings, please check the API docs.
Building your own model deployers
If you want to create your own custom flavor for a model deployer, you can follow these steps:
1. Create a class which inherits from the `BaseModelDeployer`.
2. Define the `FLAVOR` class variable.
3. Implement the abstract methods based on the API of your desired model deployer, as sketched below.
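A minimal skeleton for such a subclass might look as follows. This is only a sketch: the class name, flavor name, and configuration attributes are hypothetical placeholders, and the method bodies are left to your deployment tool's API.

```python
from typing import ClassVar, List, Optional
from uuid import UUID

from zenml.model_deployers import BaseModelDeployer
from zenml.services import BaseService, ServiceConfig


class MyModelDeployer(BaseModelDeployer):
    """Custom model deployer for a hypothetical 'my_deployment_tool'."""

    # Name under which this custom flavor is registered (placeholder).
    FLAVOR: ClassVar[str] = "my_deployment_tool"

    # Stack-related configuration needed to talk to the serving platform
    # (placeholder attributes for illustration).
    deployment_url: str
    credentials_secret: Optional[str] = None

    def deploy_model(
        self, config: ServiceConfig, replace: bool = False, timeout: int = 300
    ) -> BaseService:
        ...  # create or update a model server and return its BaseService

    def find_model_server(
        self, running: bool = False, service_uuid: Optional[UUID] = None, **kwargs
    ) -> List[BaseService]:
        ...  # re-create BaseService instances from externally persisted config

    def stop_model_server(
        self, uuid: UUID, timeout: int = 300, force: bool = False
    ) -> None:
        ...  # stop the remote model server identified by `uuid`

    def start_model_server(self, uuid: UUID, timeout: int = 300) -> None:
        ...  # start a previously stopped model server

    def delete_model_server(
        self, uuid: UUID, timeout: int = 300, force: bool = False
    ) -> None:
        ...  # tear down the remote model server
```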
Once you are done with the implementation, you can register it through the CLI as:
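For example (the source path and flavor name below are placeholders, and the exact CLI syntax may vary between ZenML versions):

```shell
# Register the custom flavor by its source path (placeholder path).
zenml model-deployer flavor register flavors.my_flavor.MyModelDeployer

# Register a stack component that uses the new flavor (placeholder names).
zenml model-deployer register my_deployer --flavor=my_deployment_tool
```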