Batch Inference

BatchInference pipelines are designed to run batch jobs that push large amounts of data through a trained ML model.
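As a conceptual illustration (this is generic Python, not the ZenML API), batch inference boils down to splitting a dataset into fixed-size chunks and feeding each chunk through a trained model's prediction function:

```python
# Generic sketch of batch inference, not tied to any framework:
# chunk the data, run each chunk through the model, collect results.

def batch_predict(predict, data, batch_size=2):
    """Run `predict` over `data` in chunks of `batch_size`."""
    results = []
    for start in range(0, len(data), batch_size):
        batch = data[start:start + batch_size]
        results.extend(predict(batch))
    return results

# Stand-in "trained model": doubles every input value.
double_model = lambda batch: [x * 2 for x in batch]

print(batch_predict(double_model, [1, 2, 3, 4, 5]))  # [2, 4, 6, 8, 10]
```

In a real pipeline the `predict` callable would be a trained model loaded from an artifact store, and batching keeps memory usage bounded for large datasets.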

The current version of ZenML does not yet fully support BatchInference.

If you need this functionality sooner, ping us on our Slack or create an issue on GitHub so that we know about it!