Kaniko Image Builder

Building container images with Kaniko.

Last updated 1 month ago

The Kaniko image builder is an image builder flavor provided by the ZenML kaniko integration that uses Kaniko to build container images.

When to use it

You should use the Kaniko image builder if:

  • you're unable to install or use Docker on your client machine.

  • you're familiar with/already using Kubernetes.

How to deploy it

In order to use the Kaniko image builder, you need a deployed Kubernetes cluster.

How to use it

To use the Kaniko image builder, we need:

  • The ZenML kaniko integration installed. If you haven't done so, run

    zenml integration install kaniko
  • kubectl installed.

  • A remote container registry as part of your stack.

  • By default, the Kaniko image builder transfers the build context using the Kubernetes API. If you instead want to transfer the build context by storing it in the artifact store, you need to register it with the store_context_in_artifact_store attribute set to True. In this case, you also need a remote artifact store as part of your stack.

  • Optionally, you can adjust the timeout (in seconds) that the orchestrator waits for the Kaniko pod to start using the pod_running_timeout attribute.

We can then register the image builder and use it in our active stack:

zenml image-builder register <NAME> \
    --flavor=kaniko \
    --kubernetes_context=<KUBERNETES_CONTEXT>
    [ --pod_running_timeout=<POD_RUNNING_TIMEOUT_IN_SECONDS> ]

# Register and activate a stack with the new image builder
zenml stack register <STACK_NAME> -i <NAME> ... --set
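If you prefer to transfer the build context through the artifact store instead of the Kubernetes API, the registration might look like the following sketch (the placeholder values are yours to fill in, and a remote artifact store must already be part of the stack):

```shell
# Store the build context in the artifact store instead of
# sending it through the Kubernetes API, and wait up to
# 10 minutes for the build pod to start
zenml image-builder register <NAME> \
    --flavor=kaniko \
    --kubernetes_context=<KUBERNETES_CONTEXT> \
    --store_context_in_artifact_store=true \
    --pod_running_timeout=600
```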

Authentication for the container registry and artifact store

The Kaniko image builder creates a Kubernetes pod that runs the build. This build pod needs to be able to pull from/push to certain container registries and, depending on the stack component configuration, may also need to read from the artifact store:

  • The pod needs to be authenticated to push to the container registry in your active stack.

  • If you configured your image builder to store the build context in the artifact store, the pod needs to be authenticated to read files from the artifact store storage.

  • On AWS, configure the image builder to set the required environment variables on the Kaniko build pod:

# register a new image builder with the environment variables
zenml image-builder register <NAME> \
    --flavor=kaniko \
    --kubernetes_context=<KUBERNETES_CONTEXT> \
    --env='[{"name": "AWS_SDK_LOAD_CONFIG", "value": "true"}, {"name": "AWS_EC2_METADATA_DISABLED", "value": "true"}]'

# or update an existing one
zenml image-builder update <NAME> \
    --env='[{"name": "AWS_SDK_LOAD_CONFIG", "value": "true"}, {"name": "AWS_EC2_METADATA_DISABLED", "value": "true"}]'
  • On GCP, grant the Google service account permissions to push to your GCR registry and read from your GCS bucket.

  • Configure the image builder to run in the correct namespace and use the correct service account:

# register a new image builder with namespace and service account
zenml image-builder register <NAME> \
    --flavor=kaniko \
    --kubernetes_context=<KUBERNETES_CONTEXT> \
    --kubernetes_namespace=<KUBERNETES_NAMESPACE> \
    --service_account_name=<KUBERNETES_SERVICE_ACCOUNT_NAME>
    # --executor_args='["--compressed-caching=false", "--use-new-run=true"]'

# or update an existing one
zenml image-builder update <NAME> \
    --kubernetes_namespace=<KUBERNETES_NAMESPACE> \
    --service_account_name=<KUBERNETES_SERVICE_ACCOUNT_NAME>
  • On Azure, create a Kubernetes configmap for a Docker config that uses the Azure credentials helper:

kubectl create configmap docker-config --from-literal='config.json={ "credHelpers": { "mycr.azurecr.io": "acr-env" } }'
  • Configure the image builder to mount the configmap in the Kaniko build pod:

# register a new image builder with the mounted configmap
zenml image-builder register <NAME> \
    --flavor=kaniko \
    --kubernetes_context=<KUBERNETES_CONTEXT> \
    --volume_mounts='[{"name": "docker-config", "mountPath": "/kaniko/.docker/"}]' \
    --volumes='[{"name": "docker-config", "configMap": {"name": "docker-config"}}]'
    # --executor_args='["--compressed-caching=false", "--use-new-run=true"]'

# or update an existing one
zenml image-builder update <NAME> \
    --volume_mounts='[{"name": "docker-config", "mountPath": "/kaniko/.docker/"}]' \
    --volumes='[{"name": "docker-config", "configMap": {"name": "docker-config"}}]'
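Before pointing the build pod at the configmap, you can confirm that it exists and holds the expected Docker config (a standard kubectl check, assuming the configmap was created as shown above):

```shell
# Print the config.json stored in the configmap; note that the
# dot in the key name is escaped in the jsonpath expression
kubectl get configmap docker-config -o jsonpath='{.data.config\.json}'
```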

Passing additional parameters to the Kaniko build

You can pass additional parameters to the Kaniko build by setting the executor_args attribute of the image builder.

zenml image-builder register <NAME> \
    --flavor=kaniko \
    --kubernetes_context=<KUBERNETES_CONTEXT> \
    --executor_args='["--label", "key=value"]' # Adds a label to the final image

List of some possible additional flags:

  • --cache: Set to false to disable caching. Defaults to true.

  • --cache-dir: Set the directory where to store cached layers. Defaults to /cache.

  • --cache-repo: Set the repository where to store cached layers.

  • --cache-ttl: Set the cache expiration time. Defaults to 24h.

  • --cleanup: Set to false to disable cleanup of the working directory. Defaults to true.

  • --compressed-caching: Set to false to disable compressed caching. Defaults to true.
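For instance, to keep caching enabled but store layers in a dedicated repository with a shorter TTL, several of these flags can be combined in a single executor_args list (the repository name is a placeholder):

```shell
# Cache layers in a dedicated repo, expire them after 6 hours,
# and skip compressed caching to reduce build memory usage
zenml image-builder update <NAME> \
    --executor_args='["--cache-repo=<CACHE_REPO>", "--cache-ttl=6h", "--compressed-caching=false"]'
```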

For more information and a full list of configurable attributes of the Kaniko image builder, check out the SDK Docs.

In case the parent image you use in your DockerSettings is stored in a private registry, the pod needs to be authenticated to pull from this registry.

ZenML is not yet able to handle setting all of the credentials of the various combinations of container registries and artifact stores on the Kaniko build pod, which is why you're required to set this up yourself for now. The following section outlines how to handle it in the most straightforward (and probably also most common) scenario, when the Kubernetes cluster you're using for the Kaniko build is hosted on the same cloud provider as your container registry (and potentially the artifact store). For all other cases, check out the official Kaniko repository for more information.

On AWS, add permissions to push to ECR by attaching the EC2InstanceProfileForImageBuilderECRContainerBuilds policy to your EKS node IAM role.

Check out the Kaniko docs for more information.

On GCP, enable workload identity for your cluster.

Follow the steps described here to create a Google service account, a Kubernetes service account, and an IAM policy binding between them.

Check out the Kaniko docs for more information.

On Azure, follow these steps to configure your cluster to use a managed identity.

Check out the Kaniko docs for more information.

For a full list of possible flags, check out the Kaniko additional flags documentation.
