Find answers to the most frequently asked questions about ZenML
Not really! An orchestrator in MLOps is the system component responsible for executing and managing the execution of an ML pipeline. ZenML is a framework that lets you run your pipelines on whatever orchestrator you like, while coordinating with all the other parts of an ML system in production. ZenML supports several standard orchestrators out of the box, but you are also encouraged to write your own orchestrator to gain more control over exactly how your pipelines are executed!
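To make the idea concrete, here is a toy sketch of what an orchestrator does at its core: resolve step dependencies and run the steps in a valid order. This is plain Python for illustration only, not the ZenML orchestrator API; all names in it are hypothetical.

```python
# Toy orchestrator sketch: resolves step dependencies and runs them
# in topological order. Illustrative only -- NOT the ZenML API.
from graphlib import TopologicalSorter

def orchestrate(steps, deps):
    """Run each named step after all of its dependencies.

    steps: dict mapping step name -> zero-argument callable
    deps:  dict mapping step name -> set of prerequisite step names
    """
    order = list(TopologicalSorter(deps).static_order())
    results = {}
    for name in order:
        results[name] = steps[name]()
    return results

# Example: a three-step chain load -> train -> evaluate.
executed = []
steps = {
    "load": lambda: executed.append("load"),
    "train": lambda: executed.append("train"),
    "evaluate": lambda: executed.append("evaluate"),
}
deps = {"load": set(), "train": {"load"}, "evaluate": {"train"}}
orchestrate(steps, deps)
# executed is now ["load", "train", "evaluate"]
```

A real orchestrator adds much more on top of this (scheduling, retries, remote execution, logging), which is exactly the part ZenML abstracts away so you can swap orchestrators without rewriting your pipeline.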
Take a look at our examples directory, which showcases detailed examples for each integration that ZenML supports out-of-the-box.
The ZenML team and community are constantly working to add more tools and integrations to the list of supported integrations (check out the roadmap for more details). You can upvote features you'd like and add your ideas to the roadmap.
Most importantly, ZenML is extensible, and we encourage you to use it with whatever other tools you require as part of your ML process and system(s). Check out our documentation on how to get started with extending ZenML to learn more!
We built it because we scratched our own itch while deploying multiple machine learning models in production over the past three years. Our team struggled to find a simple yet production-ready solution whilst developing large-scale ML pipelines, so we built one ourselves, and we are now proud to share it with all of you! Read more about this backstory on our blog here.
We would love to develop ZenML together with our community! The best way to get started is to select any issue with the good-first-issue label. If you would like to contribute, please review our Contributing Guide for all relevant details.
Check out our ZenBytes repository and course, where you can learn MLOps concepts in a practical manner with the ZenML framework. Other great resources are:
ZenML pipelines are designed to be written early in the development lifecycle. Data scientists can explore their pipelines as they develop towards production, switching stacks from local to cloud deployments with ease. You can read more about why we started building ZenML on our blog. By using ZenML in the early stages of your project, you get the following benefits:
- Extensibility, so you can build out the framework to suit your specific needs
- Reproducibility of training and inference workflows
- A simple and clear way to represent the steps of your pipeline in code
- Batteries-included integrations: bring all your favorite tools together
- Easy switch between local and cloud stacks
- Painless deployment and configuration of infrastructure
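The "simple and clear way to represent the steps of your pipeline in code" can be sketched in plain Python. This toy example only illustrates the step/pipeline pattern ZenML builds on; it is not the ZenML API, and the function names are hypothetical.

```python
# Toy sketch of the step/pipeline pattern -- NOT the ZenML API.
# Each step is an ordinary function; the pipeline chains them so
# each step receives the previous step's output.

def step(fn):
    """Mark a function as a pipeline step."""
    fn.is_step = True
    return fn

def pipeline(*steps):
    """Compose steps into a single callable pipeline."""
    def run(initial_input):
        data = initial_input
        for s in steps:
            data = s(data)
        return data
    return run

@step
def load_data(source):
    # Pretend preprocessing: scale the raw values.
    return [x * 2 for x in source]

@step
def train_model(data):
    # Pretend training: return a single summary metric.
    return sum(data) / len(data)

training_pipeline = pipeline(load_data, train_model)
result = training_pipeline([1, 2, 3])  # -> 4.0
```

In ZenML itself, the same shape is expressed with the framework's own decorators, which is what makes the pipeline definition portable across local and cloud stacks.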
The team behind ZenML has a shared vision of making MLOps simple and accessible to accelerate problem-solving in the world. We recently raised our seed round to fulfill this vision, and you can be sure we're here to stay!
Plus, ZenML is and always will be an open-source effort, which lowers the risk of it disappearing any time soon.
The first port of call should be our Slack group. Ask your questions about bugs or specific use cases there, and someone from the core team will respond.
ZenML is distributed under the terms of the Apache License Version 2.0. A complete version of the license is available in the LICENSE.md file in this repository. Any contribution made to this project will be licensed under the Apache License Version 2.0.