Load artifacts from Model
One of the most common use cases for a Model is passing artifacts between pipelines (a pattern we have seen before). However, it is equally important to understand when and how these artifacts are loaded.
As an example, let's have a look at a two-pipeline project, where the first pipeline is running training logic and the second runs batch inference leveraging trained model artifact(s):
In the example above, we used the `get_pipeline_context().model` property to acquire the model context in which the pipeline is running. This context is not evaluated during pipeline compilation, because `Production` is not a stable version name: another model version may be promoted to `Production` before the step actually executes. The same applies to calls like `model.get_model_artifact("trained_model")`; the reference is stored in the step configuration for delayed materialization, which only happens during the step run itself.
It is also possible to achieve the same result using bare `Client` methods, reworking the pipeline code as follows:
In this case, the evaluation of the artifact happens only once the step is actually running.