Seldon
How to deploy models to Kubernetes with Seldon Core
This is an older version of the ZenML documentation.
The Seldon Core Model Deployer is one of the available flavors of the Model Deployer stack component. Provided as part of the Seldon Core integration, it can be used to deploy and manage models on an inference server running on top of a Kubernetes cluster.
When to use it?
Seldon Core is a production-grade, open-source model serving platform. It packs a wide range of features built around deploying models to REST/gRPC microservices, including monitoring and logging, model explainers, outlier detectors, and various continuous deployment strategies such as A/B testing and canary deployments.
Seldon Core also comes equipped with a set of built-in model server implementations designed to work with standard formats for packaging ML models that greatly simplify the process of serving models for real-time inference.
You should use the Seldon Core Model Deployer:
If you are looking to deploy your model on a more advanced infrastructure like Kubernetes.
If you want to handle the lifecycle of the deployed model with no downtime, including updating the runtime graph, scaling, monitoring, and security.
If you are looking for more advanced API endpoints to interact with the deployed model, including REST and gRPC endpoints.
If you want more advanced deployment strategies like A/B testing, canary deployments, and more.
If you need a more complex deployment process that can be customized via an advanced inference graph, including custom TRANSFORMER and ROUTER components.
If you are instead looking for an easy way to deploy your models locally, you can use the MLflow Model Deployer flavor.
How to deploy it?
ZenML provides a Seldon Core flavor built on top of the Seldon Core integration to allow you to deploy and use your models in a production-grade environment. In order to use the integration, you first need to install it on your local machine so that you can register a Seldon Core Model Deployer with ZenML and add it to your stack:
To deploy and make use of the Seldon Core integration we need to have the following prerequisites:
access to a Kubernetes cluster. The example accepts a --kubernetes-context command-line argument. This Kubernetes context needs to point to the Kubernetes cluster where Seldon Core model servers will be deployed. If the context is not explicitly supplied to the example, it defaults to using the locally active context.
Seldon Core needs to be preinstalled and running in the target Kubernetes cluster. Check out the official Seldon Core installation instructions.
models deployed with Seldon Core need to be stored in some form of persistent shared storage that is accessible from the Kubernetes cluster where Seldon Core is installed (e.g. AWS S3, GCS, Azure Blob Storage, etc.). You can use one of the supported remote storage flavors to store your models as part of your stack.
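With the prerequisites in place, the integration can be installed and basic cluster connectivity verified. A minimal sketch (the context and namespace names are placeholders):

```shell
# Install the Seldon Core integration into the local ZenML environment
zenml integration install seldon -y

# Verify that the target Kubernetes context is reachable and that the
# Seldon Core controller is running (namespace name is a placeholder)
kubectl --context=<KUBE_CONTEXT> get pods -n seldon-system
```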
Since the Seldon Model Deployer interacts with the Seldon Core model server deployed on a Kubernetes cluster, you need to provide a set of configuration parameters. These parameters are:
kubernetes_context: the Kubernetes context to use to contact the remote Seldon Core installation. If not specified, the locally active Kubernetes context is used.
kubernetes_namespace: the Kubernetes namespace where the Seldon Core deployment servers are provisioned and managed by ZenML. If not specified, the namespace set in the current configuration is used.
base_url: the base URL of the Kubernetes ingress used to expose the Seldon Core deployment servers.
secret: the name of a ZenML secret containing the credentials used by Seldon Core storage initializers to authenticate to the Artifact Store in the ZenML stack. The secret must be registered using the zenml secrets-manager secret register command. The secret schema must be one of the built-in schemas provided by the Seldon Core integration: seldon_s3 for AWS S3, seldon_gs for GCS, and seldon_az for Azure.
kubernetes_secret_name: the name of the Kubernetes secret that will be created to store the Seldon Core credentials. If not specified, the secret name will be derived from the ZenML secret name.
Configuring Seldon Core in a Kubernetes cluster can be a complex and error-prone process, so we have provided a set of Terraform-based recipes to quickly provision popular combinations of MLOps tools. More information about these recipes can be found in the Open Source MLOps Stack Recipes.
Managing Seldon Core Credentials
Seldon Core Secret using ZenML Secrets Manager
The Seldon Core model servers need to retrieve model artifacts from the Artifact Store in the ZenML stack, which requires passing authentication credentials to the Seldon Core servers. To facilitate this, a ZenML secret must be created with the proper credentials and specified when registering the Seldon Core Model Deployer component using the --secret argument in the CLI command. To complete the configuration, the name of that ZenML secret (e.g. s3-store) must be set as the secret configuration attribute of the Seldon Model Deployer.
Built-in secret schemas are provided by the Seldon Core integration for the 3 main supported Artifact Stores: S3, GCS, and Azure. The secret schemas are seldon_s3 for AWS S3, seldon_gs for GCS, and seldon_az for Azure. For more information on secrets, secret schemas, and their usage in ZenML, refer to the Secrets Manager documentation.
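As a sketch, registering an S3 secret with the seldon_s3 schema might look like the following. The secret name s3-store and the key names are illustrative; consult the seldon_s3 schema for the exact fields expected by your ZenML version:

```shell
# Register a ZenML secret using the built-in seldon_s3 schema
# (flag/key names are illustrative -- check the seldon_s3 schema)
zenml secrets-manager secret register -s seldon_s3 s3-store \
    --rclone_config_s3_access_key_id='<YOUR_ACCESS_KEY_ID>' \
    --rclone_config_s3_secret_access_key='<YOUR_SECRET_ACCESS_KEY>'
```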
Seldon Core Secret using Kubernetes Secrets
Alternatively, you can create a Kubernetes secret containing the credentials required by Seldon Core to access the Artifact Store. The secret must be created in the same namespace where the Seldon Core model servers are deployed.
Each of the supported Artifact Stores has a different set of required credentials. For more information on the required credentials, refer to the Rclone documentation for the S3, GCS, and Azure backends.
The following example shows how to create a Kubernetes secret for MinIO, which is an S3-compatible object storage server.
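A sketch of such a secret, using Rclone-style keys for an S3-compatible backend (the secret name, namespace, endpoint, and credential values are placeholders):

```shell
# Create a Kubernetes secret in the Seldon Core namespace with
# Rclone-style configuration keys for a MinIO (S3-compatible) backend
kubectl -n <SELDON_NAMESPACE> create secret generic s3-seldon-secret \
    --from-literal=RCLONE_CONFIG_S3_TYPE='s3' \
    --from-literal=RCLONE_CONFIG_S3_PROVIDER='minio' \
    --from-literal=RCLONE_CONFIG_S3_ENV_AUTH='false' \
    --from-literal=RCLONE_CONFIG_S3_ACCESS_KEY_ID='<MINIO_ACCESS_KEY>' \
    --from-literal=RCLONE_CONFIG_S3_SECRET_ACCESS_KEY='<MINIO_SECRET_KEY>' \
    --from-literal=RCLONE_CONFIG_S3_ENDPOINT='http://<MINIO_HOST>:9000'
```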
How do you use it?
For registering the model deployer, we need the URL of the Istio Ingress Gateway deployed on the Kubernetes cluster. We can get this URL by running the following command (assuming that the service name is istio-ingressgateway, deployed in the istio-system namespace):
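A sketch of the lookup, assuming the gateway is exposed via a LoadBalancer service with an external IP:

```shell
# Resolve the external address of the Istio ingress gateway
export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo "http://${INGRESS_HOST}"
```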
Now register the model deployer:
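A minimal sketch of the registration, wiring in the configuration parameters described above (the component name, context, namespace, URL, and secret name are placeholders, and the stack update flag may differ across ZenML versions):

```shell
# Register the Seldon Core model deployer with the parameters
# described above (all values are placeholders)
zenml model-deployer register seldon_deployer --flavor=seldon \
    --kubernetes_context=<KUBE_CONTEXT> \
    --kubernetes_namespace=<SELDON_NAMESPACE> \
    --base_url=http://<INGRESS_HOST> \
    --secret=s3-store

# Add it to the active stack (-d selects the model deployer component)
zenml stack update -d seldon_deployer
```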
We can now use the model deployer in our stack.
The following code snippet shows how to use the Seldon Core Model Deployer to deploy a model inside a ZenML pipeline step:
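A sketch under the older ZenML API this page documents; the exact import paths and parameter names may differ in your version:

```python
# Sketch: deploying a model with the Seldon Core integration inside a
# ZenML pipeline (import paths follow the older ZenML API and are
# assumptions -- verify them against your installed version)
from zenml.integrations.seldon.services import SeldonDeploymentConfig
from zenml.integrations.seldon.steps import (
    SeldonDeployerStepParameters,
    seldon_model_deployer_step,
)

model_deployer = seldon_model_deployer_step(
    params=SeldonDeployerStepParameters(
        service_config=SeldonDeploymentConfig(
            model_name="my-model",
            replicas=1,
            implementation="SKLEARN_SERVER",
        ),
        # seconds to wait for the Seldon deployment to become ready
        timeout=120,
    )
)
```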
Within the SeldonDeploymentConfig you can configure:
model_name: the name of the model in the Seldon Core cluster and in ZenML.
replicas: the number of replicas with which to deploy the model.
implementation: the type of Seldon inference server to use for the model. The implementation type can be one of the following: TENSORFLOW_SERVER, SKLEARN_SERVER, XGBOOST_SERVER, custom.
parameters: an optional list of parameters (SeldonDeploymentPredictorParameter) to pass to the deployment predictor, each consisting of a name, type, and value.
resources: the resources to be allocated to the model. This can be configured by passing a SeldonResourceRequirements object with the requests and limits properties. The values for these properties can be a dictionary with the cpu and memory keys, whose values are strings specifying the amount of CPU and memory to be allocated to the model.
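As a sketch, a configuration with explicit resource requests and limits might look like this (the import path for SeldonResourceRequirements is an assumption; check the integration's API docs):

```python
# Sketch: a SeldonDeploymentConfig with explicit CPU/memory resources
# (import paths are assumptions -- verify against your ZenML version)
from zenml.integrations.seldon.services import SeldonDeploymentConfig
from zenml.integrations.seldon.seldon_client import SeldonResourceRequirements

config = SeldonDeploymentConfig(
    model_name="my-model",
    replicas=2,
    implementation="TENSORFLOW_SERVER",
    resources=SeldonResourceRequirements(
        requests={"cpu": "100m", "memory": "250Mi"},
        limits={"cpu": "200m", "memory": "500Mi"},
    ),
)
```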
A concrete example of using the Seldon Core Model Deployer can be found here.
For more information and a full list of configurable attributes of the Seldon Core Model Deployer, check out the API Docs.
Custom Model Deployment
When you have a custom use-case where Seldon Core pre-packaged inference servers cannot cover your needs, you can leverage the language wrappers to containerise your machine learning model(s) and logic. With ZenML's Seldon Core Integration, you can create your own custom model deployment code by creating a custom predict function that will be passed to a custom deployment step responsible for preparing a Docker image for the model server.
This custom_predict function should take the model and the input data as arguments and return the output data. ZenML will take care of loading the model into memory, starting the seldon-core-microservice that will be responsible for serving the model, and running the predict function.
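A minimal sketch of such a function. The signature and the request/response shapes are illustrative; the model here is any loaded Python object exposing a predict method:

```python
from typing import Any, Dict, List


def custom_predict(
    model: Any, request: Dict[str, List[float]]
) -> Dict[str, List[float]]:
    """Illustrative custom predict function: extract the inputs from the
    request, run the already-loaded model, and return a serializable
    response. ZenML loads the model and calls this function per request.
    """
    inputs = request["instances"]
    predictions = model.predict(inputs)
    return {"predictions": list(predictions)}
```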
The path to this custom predict function can then be passed to the custom deployment parameters.
The full code example can be found here.
Advanced Custom Code Deployment with Seldon Core Integration
Before creating your custom model class, you should take a look at the custom Python model section of the Seldon Core documentation.
The built-in Seldon Core custom deployment step is a good starting point for deploying your custom models. However, if you want to deploy more than the trained model, you can create your own Custom Class and a custom step to achieve this.
Example of the custom class.
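A hedged sketch of such a class, following the interface described in the Seldon Core custom Python model documentation (Seldon instantiates the class, optionally calls load, and invokes predict per request); the loading logic here is a placeholder:

```python
class MyCustomModel:
    """Sketch of a Seldon Core custom Python model class."""

    def __init__(self):
        self.model = None
        self.ready = False

    def load(self):
        # Placeholder: load your trained model artifact here
        # (e.g. from the persistent storage configured above)
        self.model = lambda X: [sum(row) for row in X]
        self.ready = True

    def predict(self, X, features_names=None):
        # Lazily load the model if Seldon has not called load() yet
        if not self.ready:
            self.load()
        return self.model(X)
```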
The built-in Seldon Core custom deployment step responsible for packaging, preparing and deploying to Seldon Core can be found here.