FAQ

Find answers to the most frequently asked questions about ZenML.

Why did you build ZenML?

We built ZenML because we scratched our own itch while deploying multiple machine-learning models to production over the past three years. Our team struggled to find a simple yet production-ready solution for building large-scale ML pipelines, so we built one ourselves and are now proud to share it with all of you! Read more about this backstory on our blog here.

Is ZenML just another orchestrator like Airflow, Kubeflow, Flyte, etc?

Not really! An orchestrator in MLOps is the system component responsible for executing and managing ML pipeline runs. ZenML is a framework that lets you run your pipelines on whatever orchestrator you like, while coordinating with all the other parts of a production ML system. ZenML supports several standard orchestrators out of the box, but you are also encouraged to write your own orchestrator to gain more control over exactly how your pipelines are executed!
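To make this concrete, here is a minimal sketch of a pipeline written against ZenML's decorator-based Python API (the step and pipeline names are made up for illustration). The same pipeline definition runs unchanged whether your active stack uses the local orchestrator, Kubeflow, Airflow, or a custom orchestrator of your own:

```python
from typing import List

from zenml import pipeline, step


@step
def load_data() -> List[float]:
    """Produce some toy data."""
    return [0.1, 0.2, 0.3]


@step
def train_model(data: List[float]) -> float:
    """Pretend to 'train' a model and return a metric."""
    return sum(data) / len(data)


@pipeline
def training_pipeline():
    """Wire the steps together; ZenML resolves the execution order."""
    data = load_data()
    train_model(data)


if __name__ == "__main__":
    # Which orchestrator actually executes the steps (local, Kubeflow,
    # Airflow, ...) is decided by the active ZenML stack, not by this code.
    training_pipeline()
```

Switching orchestrators is then a matter of changing the active stack, not rewriting the pipeline code.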

Can I use the tool X? How does the tool Y integrate with ZenML?

Take a look at our documentation (in particular the component guide), which contains instructions and sample code for every integration that ZenML supports out of the box. You can also check out our integration test code to see many of our integrations in action.

The ZenML team and community are constantly working to add more tools and integrations to the list above (check out the roadmap for more details). You can upvote features you'd like and add your own ideas to the roadmap.

Most importantly, ZenML is extensible, and we encourage you to use it with whatever other tools you require as part of your ML process and system(s). Check out our documentation on how to get started with extending ZenML to learn more!

Do you support Windows?

ZenML officially supports Windows if you're using WSL. Much of ZenML will also work on Windows outside a WSL environment, but we don't officially support it and some features don't work (notably anything that requires spinning up a server process).

Do you support Macs running on Apple Silicon?

Yes, ZenML does support Macs running on Apple Silicon. You just need to make sure that you set the following environment variable:

```bash
export OBJC_DISABLE_INITIALIZE_FORK_SAFETY=YES
```

Setting this variable works around a known issue with how process forking behaves on Macs running on Apple Silicon, and it enables you to use ZenML together with a local server. You only need it if you are running a local server on your Mac; if you're just using ZenML as a client / CLI and connecting to a deployed server, you don't need to set it.
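If you prefer to set the variable from Python rather than in your shell profile, the standard-library equivalent below is a minimal sketch; it has to run before any local server process is spawned:

```python
import os

# Equivalent to the shell export above: work around macOS fork-safety checks
# on Apple Silicon. Only relevant when running a local ZenML server.
os.environ["OBJC_DISABLE_INITIALIZE_FORK_SAFETY"] = "YES"
```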

How can I make ZenML work with my custom tool? How can I extend or build on ZenML?

This depends on the tool and its respective MLOps category. We have a full guide on this over here!
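As one concrete illustration, a common extension point is a custom materializer that teaches ZenML how to save and load artifacts produced by your tool. The sketch below is hedged: MyToolResult is a hypothetical type invented for this example, and you should check the BaseMaterializer method signatures against your installed ZenML version:

```python
import json
import os
from typing import Type

from zenml.enums import ArtifactType
from zenml.io import fileio
from zenml.materializers.base_materializer import BaseMaterializer


class MyToolResult:
    """Hypothetical output type produced by your custom tool."""

    def __init__(self, score: float) -> None:
        self.score = score


class MyToolMaterializer(BaseMaterializer):
    """Tells ZenML how to persist MyToolResult objects as artifacts."""

    ASSOCIATED_TYPES = (MyToolResult,)
    ASSOCIATED_ARTIFACT_TYPE = ArtifactType.DATA

    def load(self, data_type: Type[MyToolResult]) -> MyToolResult:
        # Read the artifact back from the artifact store.
        with fileio.open(os.path.join(self.uri, "result.json"), "r") as f:
            return MyToolResult(**json.load(f))

    def save(self, data: MyToolResult) -> None:
        # Write the artifact into the artifact store.
        with fileio.open(os.path.join(self.uri, "result.json"), "w") as f:
            json.dump({"score": data.score}, f)
```

Once a materializer like this is importable, steps that return MyToolResult objects can be tracked and versioned like any other artifact; see the guide for how materializers are registered or assigned to step outputs in your ZenML version.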

How can I contribute?

We develop ZenML together with our community! The best way to get started is to pick any issue with the good-first-issue label. If you would like to contribute, please review our Contributing Guide for all relevant details.

How can I speak with the community?

The first point of call should be our Slack group. Ask your questions about bugs or specific use cases, and someone from the core team will respond.

Which license does ZenML use?

ZenML is distributed under the terms of the Apache License Version 2.0. A complete version of the license is available in the LICENSE.md file in this repository. Any contribution made to this project will be licensed under the Apache License Version 2.0.
