Using Helmfile with Terraform

It’s common to use Terraform to provision Kubernetes clusters in the cloud, which means that many of the variables that are useful to pass into Helm charts are conveniently available in Terraform.

In many cases the control plane is a managed service, so configuration must be run locally and pointed at a remote endpoint.

People gravitate towards using the Kubernetes and Helm Terraform providers and then face issues related to statefulness.

In this tutorial we’ll demonstrate how to use Terraform in conjunction with the docker_container resource to execute Helmfile against a remote Kubernetes cluster. Running all of the configuration in a Docker container keeps all of the dependency management nice and neat.

Set up K3s

First we need to clone the terraform-helmfile repo and then execute docker-compose up -d.

This starts K3s, a lightweight Kubernetes cluster. Bridge networking is enabled so we can connect other Docker containers to it later.
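The setup steps look like this (the clone URL below is a placeholder for wherever your copy of the repo lives):

```sh
# Clone the repo (URL is a placeholder) and bring up the local K3s cluster.
git clone https://github.com/<your-account>/terraform-helmfile.git
cd terraform-helmfile
docker-compose up -d
```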

Run Terraform

Now we can run terraform init followed by terraform apply to install a sample Kubernetes dashboard chart using Helmfile.
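In other words:

```sh
terraform init    # download the Docker provider and other dependencies
terraform apply   # render the config and run the Helmfile container
```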

When we execute terraform apply, Terraform starts a Docker container containing the Helmfile binary. Terraform also renders some configuration files inside the container which specify how to connect to the K3s cluster and what charts to install.

Finally, a local-exec provisioner is used to tail the container’s docker logs and exit with a status code reflecting success or failure.

Run it again

Helmfile is idempotent and will only make changes when the desired state differs from what is running in the cluster. You can test this by executing terraform apply again.

The second time we run Terraform, the bottom of the output says “No affected releases” and the run exits successfully.

A look at the code

Let’s walk through the code in the Git repo and discuss how you can use this in your projects.

Terraform

We can start by looking at the Terraform code that starts the container in main.tf.

Here we’re creating a Docker container called terraform-helmfile from the quay.io/roboll/helmfile:v0.80.2 image.

For our demo I’ve added a link to the k3s-server container and rendered the kubeconfig.yaml into the container. This gives us the connection to the Kubernetes cluster. When using this for real we can remove the link and render a kubeconfig.yaml from variables pointing at a real cluster.

The upload blocks show that we’re also copying in an entrypoint.sh, which executes Helmfile, and the helmfile.yaml, which specifies which Helm charts to install.

The depends_on ensures that our Docker container is always deleted before each new Terraform run.
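Putting those pieces together, a minimal sketch of the resource might look like this. The resource name, file paths and template data source names are illustrative rather than the repo’s exact code.

```hcl
resource "docker_container" "helmfile" {
  name  = "terraform-helmfile"
  image = "quay.io/roboll/helmfile:v0.80.2"

  # Demo only: link to the local k3s-server container. Remove this when
  # pointing at a real remote cluster.
  links = ["k3s-server"]

  # Render the cluster connection details and desired state into the container.
  # (Paths and data source names are illustrative.)
  upload {
    content = data.template_file.kubeconfig.rendered
    file    = "/root/.kube/config"
  }

  upload {
    content = data.template_file.helmfile.rendered
    file    = "/apps/helmfile.yaml"
  }

  upload {
    content = data.template_file.entrypoint.rendered
    file    = "/apps/entrypoint.sh"
  }

  entrypoint = ["sh", "/apps/entrypoint.sh"]
}
```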

Now let’s look at the data_sources.tf.

It’s kept fairly basic in our example code. In a real-world example we’d pass Terraform variables into each template_file resource.
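As a sketch, one of those template_file data sources might look like the following; the variable names and template path are assumptions for illustration.

```hcl
data "template_file" "helmfile" {
  template = file("${path.module}/templates/helmfile.yaml.tpl")

  # Illustrative variables; in a real project these would come from
  # terraform.tfvars or module inputs.
  vars = {
    dashboard_version   = var.dashboard_version
    dashboard_installed = var.dashboard_installed
  }
}
```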

Then we can use these variables in our helmfile.yaml.
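A sketch of the corresponding release entry, with Terraform interpolating the template variables (the chart and release names are illustrative):

```yaml
releases:
  - name: kubernetes-dashboard       # illustrative release name
    namespace: kube-system
    chart: stable/kubernetes-dashboard
    version: ${dashboard_version}    # filled in by the template_file data source
    installed: ${dashboard_installed}
```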

Here you can see that version and installed are set to match the variables from above.

Helmfile

A lot of the magic that happens with upgrades is defined by the options passed into helm upgrade. I’ve set these as defaults in our config.

For our options we’ve chosen to go tillerless and set atomic, force and recreatePods so that our chart upgrades work nicely. I had failures downloading charts with verify enabled, so that’s set to false.
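In helmfile.yaml those defaults look roughly like this (a sketch of the options just described):

```yaml
helmDefaults:
  tillerless: true     # run without a cluster-side Tiller
  atomic: true         # roll back automatically if an upgrade fails
  force: true
  recreatePods: true
  verify: false        # chart downloads failed with verify enabled
```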

Let’s also look at the entrypoint.sh that the container executes when it is run.

The sed line is a bit of a hack to point our kubeconfig at the k3s-server linked container. This wouldn’t be required when connecting to a remote cluster with a properly rendered kubeconfig.

Most of the entrypoint is related to setting up Helm. We initialise the repos and then install the helm-tiller plugin. This is required as we set tillerless: true in our helmfile.yaml.

The actual execution of helmfile apply is fairly boring. We just execute it and redirect all output to stdout so everything is visible using docker logs.
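Taken together, a sketch of such an entrypoint might look like this; the helm-tiller plugin URL and file paths are assumptions on my part.

```sh
#!/bin/sh
set -e

# Demo-only hack: point the kubeconfig at the linked k3s-server container
# instead of localhost. Not needed with a properly rendered kubeconfig.
sed -i 's/127.0.0.1/k3s-server/g' /root/.kube/config

# Set up the Helm client and install the tiller plugin (we run tillerless).
helm init --client-only
helm plugin install https://github.com/rimusz/helm-tiller

# Apply the desired state; all output goes to stdout for `docker logs`.
cd /apps
helmfile apply
```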

Tailing the logs

We have a local-exec provisioner that starts a small Python script.

This depends on the docker_container resource being created. We pass the container ID to logtail.py so the script knows which container to tail.
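As a rough sketch (whether the provisioner hangs off a null_resource as shown here, and the exact command, are assumptions):

```hcl
resource "null_resource" "logtail" {
  # Only start tailing once the Helmfile container exists.
  depends_on = [docker_container.helmfile]

  provisioner "local-exec" {
    command = "python logtail.py ${docker_container.helmfile.id}"
  }
}
```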

Our logtail.py script then runs docker logs and prints the output so that it shows up in the Terraform output.

This initially runs docker logs -f <container id> and yields the log lines. It stops when the container disappears.

At the end we inspect the container’s exit code using docker inspect. The exit code gets converted to an int and we exit with whatever the container reported, so that terraform apply will fail if Helmfile fails.
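A minimal sketch of such a script (not the repo’s exact code) could be:

```python
#!/usr/bin/env python
"""Tail a container's logs and exit with the container's exit code."""
import subprocess
import sys

container_id = sys.argv[1]

# Stream the logs until the container stops and the log stream closes.
subprocess.call(["docker", "logs", "-f", container_id])

# Read the exit code the container reported and propagate it, so that
# `terraform apply` fails whenever Helmfile fails.
exit_code = subprocess.check_output(
    ["docker", "inspect", "-f", "{{.State.ExitCode}}", container_id]
)
sys.exit(int(exit_code.strip()))
```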

Summary

We used Terraform to run a local Docker container containing Helmfile, rendered the necessary configuration into that container, and executed it against a remote K3s cluster.

All log output is printed during the Terraform run, and Terraform will fail if Helmfile fails.

Finally, no state is stored in the container or in Terraform state files. Helmfile works out what to change based on its desired-state file and what’s currently running on the cluster. We specified several options to help reduce the chance that deployments get blocked by chart state issues.

Hopefully this has been useful. If anyone notices any bugs, please feel free to submit a pull request to the terraform-helmfile GitHub repo.
