Kaniko Image Builder
How to build container images with Kaniko
The Kaniko image builder is an image builder flavor provided with the ZenML kaniko integration that uses Kaniko to build container images.
You should use the Kaniko image builder if:
- you're familiar with/already using Kubernetes.
In order to use the Kaniko image builder, you need a deployed Kubernetes cluster.
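If you already have such a cluster, you can list your local kubectl contexts to find the name to pass as <KUBERNETES_CONTEXT> in the commands below:
kubectl config get-contexts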
To use the Kaniko image builder, we need:
- The ZenML kaniko integration installed. If you haven't done so, run
zenml integration install kaniko
- By default, the Kaniko image builder transfers the build context using the Kubernetes API. If you instead want to transfer the build context by storing it in the artifact store, you need to register the image builder with the store_context_in_artifact_store attribute set to True (see the example after the registration commands below). In this case, you also need a remote artifact store as part of your stack.
We can then register the image builder and use it in our active stack:
zenml image-builder register <NAME> \
--flavor=kaniko \
--kubernetes_context=<KUBERNETES_CONTEXT>
# Register and activate a stack with the new image builder
zenml stack register <STACK_NAME> -i <NAME> ... --set
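If you opted to store the build context in the artifact store as mentioned above, the registration would additionally set that attribute. A minimal sketch, assuming the attribute is passed as a flag like the other attributes in this guide and a remote artifact store is already part of your stack:
zenml image-builder register <NAME> \
--flavor=kaniko \
--kubernetes_context=<KUBERNETES_CONTEXT> \
--store_context_in_artifact_store=True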
For more information and a full list of configurable attributes of the Kaniko image builder, check out the API Docs.
The Kaniko image builder will create a Kubernetes pod that runs the build. This build pod needs to be able to pull from/push to certain container registries and, depending on the stack component configuration, may also need to read from the artifact store:
- The pod needs to be authenticated to push to the container registry in your active stack.
- In case the parent image you use in your DockerSettings is stored in a private registry, the pod needs to be authenticated to pull from this registry.
- If you configured your image builder to store the build context in the artifact store, the pod needs to be authenticated to read files from the artifact store's storage.
ZenML is not yet able to handle setting all of the credentials for the various combinations of container registries and artifact stores on the Kaniko build pod, which is why you're required to set this up yourself for now. The following section outlines how to handle the most straightforward (and probably most common) scenario: the Kubernetes cluster you're using for the Kaniko build is hosted on the same cloud provider as your container registry (and potentially the artifact store). For all other cases, check out the official Kaniko repository for more information.
AWS
- Add permissions to push to ECR by attaching the EC2InstanceProfileForImageBuilderECRContainerBuilds policy to your EKS node IAM role (an AWS CLI sketch for this step follows the commands below).
- Configure the image builder to set some required environment variables on the Kaniko build pod:
# register a new image builder with the environment variables
zenml image-builder register <NAME> \
--flavor=kaniko \
--kubernetes_context=<KUBERNETES_CONTEXT> \
--env='[{"name": "AWS_SDK_LOAD_CONFIG", "value": "true"}, {"name": "AWS_EC2_METADATA_DISABLED", "value": "true"}]'
# or update an existing one
zenml image-builder update <NAME> \
--env='[{"name": "AWS_SDK_LOAD_CONFIG", "value": "true"}, {"name": "AWS_EC2_METADATA_DISABLED", "value": "true"}]'
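For the first step above, attaching the managed policy to your node role might look like this with the AWS CLI (a sketch; <EKS_NODE_ROLE_NAME> is a placeholder for your EKS node IAM role):
# attach the ECR push policy to the EKS node IAM role
aws iam attach-role-policy \
--role-name <EKS_NODE_ROLE_NAME> \
--policy-arn arn:aws:iam::aws:policy/EC2InstanceProfileForImageBuilderECRContainerBuilds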
GCP
- Follow the steps described here to create a Google service account, a Kubernetes service account, as well as an IAM policy binding between them.
- Grant the Google service account permissions to push to your GCR registry and read from your GCP bucket (a command-line sketch for these first two steps follows the registration commands below).
- Configure the image builder to run in the correct namespace and use the correct service account:
# register a new image builder with namespace and service account
zenml image-builder register <NAME> \
--flavor=kaniko \
--kubernetes_context=<KUBERNETES_CONTEXT> \
--kubernetes_namespace=<KUBERNETES_NAMESPACE> \
--service_account_name=<KUBERNETES_SERVICE_ACCOUNT_NAME>
# or update an existing one
zenml image-builder update <NAME> \
--kubernetes_namespace=<KUBERNETES_NAMESPACE> \
--service_account_name=<KUBERNETES_SERVICE_ACCOUNT_NAME>
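A rough sketch of the first two steps using gcloud and kubectl could look like the following. All values in angle brackets (<GSA_NAME>, <PROJECT_ID>, <ARTIFACT_STORE_BUCKET>) are placeholders, and the storage roles shown are only examples that you should scope to your own registry and bucket setup:
# create the Google service account and the Kubernetes service account
gcloud iam service-accounts create <GSA_NAME>
kubectl create serviceaccount <KUBERNETES_SERVICE_ACCOUNT_NAME> --namespace <KUBERNETES_NAMESPACE>
# allow the Kubernetes service account to impersonate the Google service account (Workload Identity)
gcloud iam service-accounts add-iam-policy-binding <GSA_NAME>@<PROJECT_ID>.iam.gserviceaccount.com \
--role roles/iam.workloadIdentityUser \
--member "serviceAccount:<PROJECT_ID>.svc.id.goog[<KUBERNETES_NAMESPACE>/<KUBERNETES_SERVICE_ACCOUNT_NAME>]"
kubectl annotate serviceaccount <KUBERNETES_SERVICE_ACCOUNT_NAME> --namespace <KUBERNETES_NAMESPACE> \
iam.gke.io/gcp-service-account=<GSA_NAME>@<PROJECT_ID>.iam.gserviceaccount.com
# grant push access to GCR (project-level storage admin is broad -- consider scoping it to the GCR bucket)
gcloud projects add-iam-policy-binding <PROJECT_ID> \
--member "serviceAccount:<GSA_NAME>@<PROJECT_ID>.iam.gserviceaccount.com" \
--role roles/storage.admin
# grant read access to the artifact store bucket
gsutil iam ch serviceAccount:<GSA_NAME>@<PROJECT_ID>.iam.gserviceaccount.com:roles/storage.objectViewer gs://<ARTIFACT_STORE_BUCKET>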
Azure
- Create a Kubernetes configmap for a Docker config that uses the Azure credentials helper:
kubectl create configmap docker-config --from-literal='config.json={ "credHelpers": { "mycr.azurecr.io": "acr-env" } }'
- Configure the image builder to mount the configmap in the Kaniko build pod:
# register a new image builder with the mounted configmap
zenml image-builder register <NAME> \
--flavor=kaniko \
--kubernetes_context=<KUBERNETES_CONTEXT> \
--volume_mounts='[{"name": "docker-config", "mountPath": "/kaniko/.docker/"}]' \
--volumes='[{"name": "docker-config", "configMap": {"name": "docker-config"}}]'
# or update an existing one
zenml image-builder update <NAME> \
--volume_mounts='[{"name": "docker-config", "mountPath": "/kaniko/.docker/"}]' \
--volumes='[{"name": "docker-config", "configMap": {"name": "docker-config"}}]'
If you want to pass additional flags to the Kaniko build, pass them as a JSON string when registering your image builder in the stack:
zenml image-builder register <NAME> \
--flavor=kaniko \
--kubernetes_context=<KUBERNETES_CONTEXT> \
--executor_args='["--label", "key=value"]' # Adds a label to the final image
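Any other Kaniko executor flag can be passed the same way. For example, a sketch that enables Kaniko's layer cache (--cache and --cache-ttl are standard Kaniko executor flags; whether caching is useful depends on your registry setup):
zenml image-builder update <NAME> \
--executor_args='["--cache=true", "--cache-ttl=24h"]'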