MLflow
Deploying your models locally with MLflow.
The MLflow Model Deployer is one of the available flavors of the Model Deployer stack component. Provided as part of the MLflow integration, it can be used to deploy and manage MLflow models on a locally running MLflow server.
The MLflow Model Deployer is not yet available for use in production. It is a work in progress and is currently only available for use in a local development environment.
When to use it?
MLflow is a popular open-source platform for machine learning. It's a great tool for managing the entire lifecycle of your machine learning models. One of the most important features of MLflow is the ability to package your model and its dependencies into a single artifact that can be deployed to a variety of deployment targets.
You should use the MLflow Model Deployer:
if you want to have an easy way to deploy your models locally and perform real-time predictions using the running MLflow prediction server.
if you are looking to deploy your models in a simple way without the need for a dedicated deployment environment like Kubernetes or advanced infrastructure configuration.
If you are looking to deploy your models in a more complex way, you should use one of the other Model Deployer Flavors available in ZenML.
How do you deploy it?
The MLflow Model Deployer flavor is provided by the MLflow ZenML integration, so you need to install it on your local machine to be able to deploy your models. You can do this by running the following command:
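A typical installation via the ZenML CLI looks like this:

```shell
# Install the MLflow integration, which provides the
# MLflow model deployer flavor (among other components)
zenml integration install mlflow -y
```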
To register the MLflow model deployer with ZenML you need to run the following command:
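A sketch of the registration commands follows; the component and stack names used here are illustrative placeholders, and the artifact store and orchestrator are assumed to be the `default` local ones:

```shell
# Register the MLflow model deployer as a stack component
zenml model-deployer register mlflow_deployer --flavor=mlflow

# Add it to a stack and activate that stack
# (names are illustrative; -a/-o reference the default local components)
zenml stack register local_mlflow_stack \
    -a default -o default -d mlflow_deployer --set
```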
The ZenML integration will provision a local MLflow deployment server as a daemon process that will continue to run in the background to serve the latest MLflow model.
How do you use it?
Deploy a logged model
ZenML provides a predefined mlflow_model_deployer_step that you can use to deploy an MLflow prediction service based on a model that you have previously logged in your MLflow experiment tracker:
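A minimal sketch of wiring the step into a pipeline is shown below. The import path and the pipeline/step names are assumptions that may vary between ZenML versions, and the `importer` and `trainer` steps are hypothetical user-defined steps:

```python
# Hypothetical deployment pipeline; exact import paths depend on
# the ZenML version in use.
from zenml.integrations.mlflow.steps import mlflow_model_deployer_step
from zenml.pipelines import pipeline


@pipeline
def deployment_pipeline(importer, trainer, model_deployer):
    # importer and trainer are user-defined steps (placeholders here);
    # the trainer is expected to log its model to MLflow.
    data = importer()
    model = trainer(data)
    # Deploy the logged model with the predefined ZenML step
    model_deployer(model)
```

The pipeline is then instantiated with concrete step instances (including `mlflow_model_deployer_step`) and run against the active stack.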
The mlflow_model_deployer_step expects that the model it receives has already been logged to MLflow in a previous step. For example, for a scikit-learn model, you would need to have used mlflow.sklearn.autolog() or mlflow.sklearn.log_model(model) in a previous step. See the MLflow experiment tracker documentation for more information on how to log models to MLflow from your ZenML steps.
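As an illustration, a training step that logs its model via autologging might look like the sketch below. The `experiment_tracker` name, step signature, and model choice are all assumptions for the example:

```python
import mlflow
import numpy as np
from sklearn.base import ClassifierMixin
from sklearn.linear_model import LogisticRegression
from zenml.steps import step


# "mlflow" is the assumed name of the registered experiment tracker
@step(experiment_tracker="mlflow")
def trainer(X_train: np.ndarray, y_train: np.ndarray) -> ClassifierMixin:
    # autolog() records the fitted model (and metrics/params)
    # to the active MLflow run, so a downstream
    # mlflow_model_deployer_step can find and deploy it
    mlflow.sklearn.autolog()
    model = LogisticRegression()
    model.fit(X_train, y_train)
    return model
```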
Deploy from model registry
Alternatively, if you are already using the MLflow model registry, you can use the mlflow_model_registry_deployer_step to directly deploy an MLflow prediction service based on a model in your model registry:
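A hedged sketch, assuming a model has already been registered in the MLflow model registry; the import path, configuration style, and the model name/version shown are illustrative and version-dependent:

```python
# Exact import path and parameter-passing style vary by ZenML version
from zenml.integrations.mlflow.steps import (
    mlflow_model_registry_deployer_step,
)

# Configure the step to serve a specific registered model version
# (name and version here are placeholders)
model_deployer = mlflow_model_registry_deployer_step.with_options(
    parameters=dict(
        registry_model_name="my-registered-model",
        registry_model_version="1",
    )
)
```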
See the MLflow model registry documentation for more information on how to register models in the MLflow registry.
Run inference on a deployed model
The following code example shows how you can load a deployed model in Python and run inference against it:
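One possible shape of that example is sketched here, assuming a prediction service was started by an earlier pipeline run; the pipeline and step names passed to `find_model_server`, as well as the sample input, are illustrative:

```python
import numpy as np
from zenml.integrations.mlflow.model_deployers.mlflow_model_deployer import (
    MLFlowModelDeployer,
)

# Get the model deployer from the active stack
model_deployer = MLFlowModelDeployer.get_active_model_deployer()

# Look up the prediction service started by a previous pipeline run
# (pipeline and step names below are placeholders)
services = model_deployer.find_model_server(
    pipeline_name="deployment_pipeline",
    pipeline_step_name="mlflow_model_deployer_step",
    running=True,
)

if services:
    service = services[0]
    # Send a request to the local MLflow prediction server
    prediction = service.predict(np.array([[5.1, 3.5, 1.4, 0.2]]))
    print(prediction)
```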
For more information and a full list of configurable attributes of the MLflow Model Deployer, check out the SDK Docs.