Build container images in Kubernetes with kaniko

Building container images on Kubernetes is a desirable capability. It opens new avenues for continuous integration (CI). We are no longer tied to something like Jenkins to schedule our CI build jobs; we can use the scheduling prowess of Kubernetes instead.

There are multiple ways of building OCI container images today. The "old-fashioned" method of using Docker is still popular and improving every day, and it brings the exciting concept of Docker-in-Docker (DinD) to the table as well. There's img from Genuine Tools and Buildah from Red Hat. They are all great options for getting the job done. Unfortunately, they're all quite limited when it comes to building images in Kubernetes. Enter kaniko from Google. As its tagline says, "Build Container Images In Kubernetes".

The kaniko README on GitHub is a pretty good introduction. But as is always the case, the devil is in the detail. That is the reason I'm writing this post.

kaniko can build a container image without needing a Docker daemon. It can run in a Docker container (say on Docker for Mac) as well as on Kubernetes. The aforementioned README does warn us,

kaniko is meant to be run as an image, gcr.io/kaniko-project/executor. We do not recommend running the kaniko executor binary in another image, as it might not work.

kaniko, as of this writing, supports pushing the newly built image to Google GCR and Amazon ECR. It supports sourcing the build context -- the Dockerfile and other files needed to build the image -- either from a local directory in the container running kaniko or from a Google GCS or Amazon S3 bucket.
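For reference, the build context forms documented in the README at the time look roughly like the following. Treat this as a sketch and check the current docs; the bucket names are placeholders, and a GCS or S3 context is expected to be a compressed tar of the build context.

# local directory inside the container running kaniko
--context dir:///workspace/
# gzipped tarball of the build context in a bucket
--context gs://some-bucket/context.tar.gz
--context s3://some-bucket/context.tar.gz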

With these parameters in view, I set about running kaniko in Docker on my localhost and in Kubernetes in Amazon EKS. My goal was to build a container image to run kubectl.

ECR Repository and IAM

The first thing was to create an ECR repository (to push the image to) and the associated IAM configuration to allow kaniko to work with it. I used Terraform for this task.

# ecr.tf
provider "aws" {
  profile                 = "${var.profile}"
  shared_credentials_file = "${var.creds_file}"
  region                  = "${var.region}"
}

resource "aws_ecr_repository" "kubectl" {
  name = "kubectl"
}

resource "aws_iam_group" "ecr-power-user" {
  name = "ecr-power-user"
  path = "/"
}

resource "aws_iam_user" "kaniko" {
  name          = "kaniko"
  path          = "/"
  force_destroy = false
}

resource "aws_iam_group_membership" "ecr-power-user" {
  name = "ecr-power-user"

  users = [
    "${aws_iam_user.kaniko.name}",
  ]

  group = "${aws_iam_group.ecr-power-user.name}"
}

resource "aws_iam_group_policy_attachment" "kaniko_AmazonEC2ContainerRegistryPowerUser" {
  group      = "${aws_iam_group.ecr-power-user.name}"
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryPowerUser"
}

resource "aws_iam_access_key" "kaniko" {
  user    = "${aws_iam_user.kaniko.name}"
  pgp_key = "${var.gpg_public_key}"
}

output "key-id" {
  value = "${aws_iam_access_key.kaniko.id}"
}

output "secret-key" {
  value = "${aws_iam_access_key.kaniko.encrypted_secret}"
}
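Applying it is the standard Terraform workflow. A sketch, assuming the four variables (profile, creds_file, region, gpg_public_key) are declared in a variables file; the values below are placeholders,

$ terraform init
$ terraform apply \
    -var "profile=default" \
    -var "creds_file=$HOME/.aws/credentials" \
    -var "region=us-east-1" \
    -var "gpg_public_key=keybase:some_user"

Note that pgp_key also accepts a base64-encoded PGP public key instead of a keybase reference.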

AWS Credentials

The two output values in the above Terraform config allowed me to construct an AWS credentials file to be used by kaniko. For example,

# credentials
[default]
aws_access_key_id=RandomIdString
aws_secret_access_key=ChangeMeOrGoHome
region=us-east-1

Replace the values in this file according to your requirements.
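Because pgp_key is set on the access key resource, the secret-key output comes back PGP-encrypted and base64-encoded. A sketch of recovering the plaintext, assuming the matching private key is in your local GPG keyring,

$ terraform output key-id
$ terraform output secret-key | base64 --decode | gpg --decrypt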

Dockerfile

The third thing needed was a Dockerfile to build the image with kubectl installed,

# Dockerfile
FROM alpine

RUN apk add --update bash curl && \
    adduser -D -s /bin/bash me && \
    curl -sL -o /usr/local/bin/kubectl https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl && \
    chmod +x /usr/local/bin/kubectl && \
    curl -sL -o /usr/local/bin/heptio-authenticator-aws https://github.com/kubernetes-sigs/aws-iam-authenticator/releases/download/v0.3.0/heptio-authenticator-aws_0.3.0_linux_amd64 && \
    chmod +x /usr/local/bin/heptio-authenticator-aws && \
    mkdir -p /home/me/.kube

USER me


Docker config.json

kaniko's container image comes with a default config.json, mounted in /kaniko/.docker/, which is geared towards GCR. Since I was using ECR, I had to create a different config, like so,

# config.json
{
    "credHelpers": {
        "123123123.dkr.ecr.us-east-1.amazonaws.com": "ecr-login"
    }
}

In the file above, replace 123123123 with the account ID of your own AWS account and replace us-east-1 with the region that suits you.
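If you don't have the account ID handy, the AWS CLI can report it for the credentials in use,

$ aws sts get-caller-identity --query Account --output text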

Run kaniko in Docker

I first ran kaniko in Docker to learn more about how it should be used. I could just as easily have run it in Kubernetes first.

Pull the kaniko executor image,

$ docker pull gcr.io/kaniko-project/executor:latest

Run kaniko in Docker,

$ docker run \
    --rm \
    -v $(pwd)/Dockerfile:/workspace/Dockerfile \
    -v $(pwd)/credentials:/root/.aws/credentials:ro \
    -v $(pwd)/config.json:/kaniko/.docker/config.json:ro \
    gcr.io/kaniko-project/executor:latest \
        --context dir:///workspace/ \
        --dockerfile Dockerfile \
        --destination 123123123.dkr.ecr.us-east-1.amazonaws.com/kubectl:latest

In the command above, replace 123123123 with the account ID of your own AWS account and replace us-east-1 with the region that suits you.
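Once the push succeeds, one quick way to confirm the image landed in the repository is to ask ECR directly, assuming the AWS CLI on your machine is configured with credentials that can read the repository,

$ aws ecr describe-images --repository-name kubectl --region us-east-1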

If the ~/.aws/credentials file you created above does not have a default profile, you will need to set the AWS_PROFILE environment variable to the profile you want kaniko to use. When I had no default profile and didn't set the variable, I got this error: "error pushing image: failed to push to destination 123123123.dkr.ecr.us-east-1.amazonaws.com/kubectl:latest: unsupported status code 401; body;". I believe it's related to /kaniko/docker-credential-ecr-login.
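If you do go the AWS_PROFILE route, pass it to the container as an environment variable. A sketch, assuming the mounted credentials file defines a profile named kaniko rather than default,

$ docker run \
    --rm \
    -e AWS_PROFILE=kaniko \
    -v $(pwd)/Dockerfile:/workspace/Dockerfile \
    -v $(pwd)/credentials:/root/.aws/credentials:ro \
    -v $(pwd)/config.json:/kaniko/.docker/config.json:ro \
    gcr.io/kaniko-project/executor:latest \
        --context dir:///workspace/ \
        --dockerfile Dockerfile \
        --destination 123123123.dkr.ecr.us-east-1.amazonaws.com/kubectl:latest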

I also made the mistake of mounting config.json to /root/.docker/config.json instead of /kaniko/.docker/config.json. That caused the same 401 error as above. It wasn't easy to spot the mistake because the expected and actual paths so closely resemble each other.

The kaniko project also provides a debug image in case you need to figure out what's going wrong. This image was immensely helpful to me. Use the debug image like so,

$ docker run \
    -it \
    --entrypoint /busybox/sh \
    --rm \
    -v $(pwd)/Dockerfile:/workspace/Dockerfile \
    -v $(pwd)/credentials:/root/.aws/credentials:ro \
    -v $(pwd)/config.json:/kaniko/.docker/config.json:ro \
    gcr.io/kaniko-project/executor:debug

Notice how with the debug image I start an interactive shell session (busybox sh) instead of running the kaniko executor binary, with all the other configuration intact.
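From that shell, the busybox tools are enough to sanity-check the mounts before kicking off a build; for example,

# inside the debug container
/busybox/ls /workspace /root/.aws /kaniko/.docker
/busybox/cat /kaniko/.docker/config.json /root/.aws/credentials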

Run kaniko in Kubernetes

Once I got a container image built with kaniko in Docker and pushed it to ECR, I was ready to run kaniko in Kubernetes. These were the main steps,

  • Create a separate namespace to isolate things
  • Create a secret to store AWS credentials
  • Create a configmap to store config.json
  • Create a configmap to store Dockerfile
  • Create a pod manifest to run kaniko

Create Kubernetes namespace manifest,

# namespace.yml
kind: Namespace
apiVersion: v1
metadata:
    name: ns-kaniko
    labels:
        name: ns-kaniko

Create Kubernetes namespace,

$ kubectl apply -f namespace.yml

Create secret,

$ kubectl --namespace ns-kaniko create secret generic aws-creds --from-file=credentials

Keep in mind that the name of the file must match the name expected in the pod when the secret is mounted as a volume. I ran into issues when the file I created the secret from had a different name than the one expected in the pod. For example, if --from-file=credentials were instead --from-file=aws-credentials, the file mounted in the pod would be /root/.aws/aws-credentials instead of /root/.aws/credentials as expected.
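If you would rather keep a different file name on disk, kubectl lets you set the key (and therefore the mounted file name) explicitly; here aws-credentials is a hypothetical local file name,

$ kubectl --namespace ns-kaniko create secret generic aws-creds --from-file=credentials=./aws-credentials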

Create configmaps,

$ kubectl --namespace ns-kaniko create configmap config-json --from-file=config.json
$ kubectl --namespace ns-kaniko create configmap build-context --from-file=Dockerfile

As with the secret above, the names of the files must match the names expected in the pod when the configmaps are mounted as volumes.

Create pod manifest, similar to the docker run command above. We want to keep things as similar as possible so we get the same results.

# kaniko.yml
kind: Pod
apiVersion: v1
metadata:
    name: kaniko
spec:
    containers:
        - name: kaniko
          image: gcr.io/kaniko-project/executor:latest
          args:
              - "--context=dir:///workspace"
              - "--dockerfile=Dockerfile"
              - "--destination=123123123.dkr.ecr.us-east-1.amazonaws.com/kubectl:latest"
          volumeMounts:
              - name: aws-creds
                mountPath: /root/.aws/
                readOnly: true
              - name: config-json
                mountPath: /kaniko/.docker/
                readOnly: true
              - name: build-context
                mountPath: /workspace/
                readOnly: true
    restartPolicy: Never
    volumes:
        - name: aws-creds
          secret:
              secretName: aws-creds
        - name: config-json
          configMap:
              name: config-json
        - name: build-context
          configMap:
              name: build-context

Create kaniko pod,

$ kubectl --namespace ns-kaniko apply -f kaniko.yml

Track the logs of this pod,

$ kubectl --namespace ns-kaniko logs kaniko
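The -f flag follows the logs as the build runs, and checking the pod afterwards shows whether it finished,

$ kubectl --namespace ns-kaniko logs -f kaniko
$ kubectl --namespace ns-kaniko get pod kaniko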

Similar to running the kaniko debug container in Docker above, we can run a debug container in Kubernetes. Create a separate pod manifest,

# kaniko-debug.yml
kind: Pod
apiVersion: v1
metadata:
    name: kaniko-debug
spec:
    containers:
        - name: kaniko-debug
          image: gcr.io/kaniko-project/executor:debug
          command:
              - "/busybox/cat"
          args:
              - "/workspace/Dockerfile"
              - "/root/.aws/credentials"
              - "/kaniko/.docker/config.json"
          volumeMounts:
              - name: aws-creds
                mountPath: /root/.aws/
                readOnly: true
              - name: config-json
                mountPath: /kaniko/.docker/
                readOnly: true
              - name: build-context
                mountPath: /workspace/
                readOnly: true
    restartPolicy: Never
    volumes:
        - name: aws-creds
          secret:
              secretName: aws-creds
        - name: config-json
          configMap:
              name: config-json
        - name: build-context
          configMap:
              name: build-context

Unlike the debug run in Docker, here we don't start an interactive shell to troubleshoot. Instead, we just run the commands we need and read their output from the pod logs.

$ kubectl --namespace ns-kaniko apply -f kaniko-debug.yml

Later, we can look at the pod logs,

$ kubectl --namespace ns-kaniko logs kaniko-debug

Conclusion

Building container images with kaniko involves a few more steps than I had imagined before I began. I stumbled and struggled with some things, especially when trying to debug what went wrong. But once I got it all working, it was straightforward to keep it working. I recommend trying this out to see if kaniko can bring advantages to your work. It certainly has to mine.