Find answers to the most frequently asked questions about ZenML
Is ZenML just another orchestrator like Airflow, Kubeflow, Flyte, etc?
Not really! In MLOps, an orchestrator is the system component responsible for executing and managing the runs of an ML pipeline. ZenML is a framework that lets you run your pipelines on whichever orchestrator you like, while coordinating with all the other parts of a production ML system. ZenML supports several standard orchestrators out of the box, but you are encouraged to write your own orchestrator to gain more control over exactly how your pipelines are executed!
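To make the orchestrator concept concrete, here is a toy sketch (plain Python, not ZenML's implementation or API) of what an orchestrator fundamentally does: run a pipeline's steps in dependency order and pass outputs downstream. The step names and lambdas are illustrative stand-ins.

```python
# Toy orchestrator sketch: runs pipeline steps in topological (dependency)
# order and feeds each step the outputs of its upstream steps.
# Illustration of the concept only -- not how ZenML is implemented.
from graphlib import TopologicalSorter


def run_pipeline(steps, deps):
    """steps: name -> callable; deps: name -> list of upstream step names."""
    order = TopologicalSorter(deps).static_order()
    results = {}
    for name in order:
        upstream = [results[d] for d in deps.get(name, [])]
        results[name] = steps[name](*upstream)
    return results


# Hypothetical three-step ML pipeline
steps = {
    "load": lambda: [1, 2, 3],
    "train": lambda data: sum(data),        # stand-in for model training
    "evaluate": lambda model: model > 0,    # stand-in for evaluation
}
deps = {"load": [], "train": ["load"], "evaluate": ["train"]}

print(run_pipeline(steps, deps))
```

A real orchestrator (Airflow, Kubeflow, etc.) adds scheduling, retries, and distributed execution on top of this core idea; ZenML sits above that layer and translates your pipeline definition into whatever the chosen orchestrator understands.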
Can I use tool X? How does tool Y integrate with ZenML?
Most importantly, ZenML is extensible and we encourage you to use it with whatever other tools you require as part of your ML process and system(s). Check out our documentation on how to get started with extending ZenML to learn more!
How do I install ZenML on an M1 Mac?
If you have an M1 Mac and you encounter an error while trying to install ZenML, please try to set up brew and pyenv with Rosetta 2 and then install ZenML. The issue arises because some of the dependencies aren't fully compatible with the vanilla ARM64 architecture. The following links may be helpful.
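As a rough sketch, a Rosetta 2 based setup typically looks something like the following. The Homebrew prefix (`/usr/local`) and the Python version are assumptions; adjust them for your machine.

```shell
# Sketch of a Rosetta 2 (x86_64) setup on an M1 Mac -- paths and the Python
# version below are assumptions; adjust for your machine.
softwareupdate --install-rosetta --agree-to-license

# Install an x86_64 Homebrew (installs under /usr/local by convention)
arch -x86_64 /bin/bash -c \
  "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

# Install pyenv via the x86_64 brew and build an x86_64 Python
arch -x86_64 /usr/local/bin/brew install pyenv
arch -x86_64 pyenv install 3.8.13
pyenv global 3.8.13

# Finally, install ZenML into that interpreter
pip install zenml
```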
How can I make ZenML work with my custom tool? How can I extend or build on ZenML?
Why did you build ZenML?
We built ZenML because we scratched our own itch while deploying multiple machine learning models in production over the past three years. Our team struggled to find a simple yet production-ready solution while developing large-scale ML pipelines, so we built one that we are now proud to share with all of you! Read more about this backstory on our blog.
How can I contribute?
How can I learn more about MLOps?
Why should I use ZenML?
ZenML pipelines are designed to be written early in the development lifecycle. Data scientists can explore their pipelines as they develop towards production, switching stacks from local to cloud deployments with ease. You can read more about why we started building ZenML on our blog. By using ZenML in the early stages of your project, you get the following benefits:
- Extensible so you can build out the framework to suit your specific needs
- Reproducibility of training and inference workflows
- A simple and clear way to represent the steps of your pipeline in code
- Batteries-included integrations: bring all your favorite tools together
- Easy switch between local and cloud stacks
- Painless deployment and configuration of infrastructure
How can I be sure you'll stick around as a tool?
ZenML is and always will be an open-source effort, which lowers the risk of it simply disappearing any time soon.
How can I speak with the community?
Which license does ZenML use?
ZenML is distributed under the terms of the Apache License Version 2.0. A complete version of the license is available in the LICENSE.md in this repository. Any contribution made to this project will be licensed under the Apache License Version 2.0.