Terraform
November 13, 2024

Kubernetes Terraform Provider

By Brendan Thompson

When it comes to deploying and managing the internals of a Kubernetes cluster in your environment there are many management options: Helm charts, raw manifests, Kustomize, and more. However, in my experience and with best practices in mind, if your organization is already using Terraform (or OpenTofu) then using Terraform for full lifecycle management of Kubernetes resources is the best option. Using the Terraform Kubernetes provider allows for a streamlined experience not only for your current team members but also for those coming into the organization. The HashiCorp Configuration Language (HCL) helps to provide that common language used to describe and build any service, resource, or anything-you-can-dream-of in your organization.

In this post we will dive into configuring the official Kubernetes provider to build out Kubernetes services across your environment irrespective of the underlying cloud provider, as that portability is really one of the main selling points of Kubernetes.

Provider Configuration

The Kubernetes provider page on the Terraform Registry is where you can get the most up-to-date information about the provider. A unique aspect of the Kubernetes provider is its use of versioned resources and data sources; these versions are aligned to the API versions within Kubernetes itself, which allows for strong backwards compatibility. An example of this is the Kubernetes secret resource, as per below:

resource "kubernetes_secret_v1" "versioned" {}
resource "kubernetes_secret" "unversioned" {}

Additional information about the Kubernetes API and its versions can be found here.

The provider allows numerous ways to configure it, and these can be grouped into two categories: explicit and implicit. The former is done by supplying configuration arguments directly in the provider block, while the latter uses environment variables. When configuring the provider it is essential to consider how sensitive information such as keys and tokens is handled in order to prevent a security issue; I would strongly encourage passing the sensitive details in via the CLI or environment variables. With those two categories in mind, let's walk through some examples of configuring the provider for authentication with the cluster (a minimal sketch of the implicit style follows the list):

  • File configuration
  • Credential configuration
  • In-cluster configuration
  • Exec plugins
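
As a quick illustration of the implicit style, the provider block can be left empty and the details supplied through the environment (a minimal sketch; the environment variable names are from the provider's documentation):

provider "kubernetes" {
  # Implicit configuration: no arguments here, so the provider reads
  # environment variables such as KUBE_CONFIG_PATH and KUBE_CTX from
  # the shell that runs terraform plan/apply.
}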

File configuration

File configuration is a great way to configure the provider when working locally with Terraform, as it simply takes a path to the kubeconfig, which contains all the relevant details. You can additionally constrain it to a given context; otherwise the default context will be used.

provider "kubernetes" {  
	config_path    = "~/.kube/config"  
	config_context = "azure-prod-cluster"
}

This would grab the relevant details from the kubeconfig to authenticate the provider with that particular cluster. Alternatively, the environment variables KUBE_CONFIG_PATH and KUBE_CTX can be used.

Credential configuration

This method of configuration allows for directly passing in the file contents for the relevant authentication arguments. This approach can be very powerful when your Terraform orchestrator dynamically provides those files, either for a single environment or for different environments, allowing you to have the same configuration sprayed across multiple clusters. Let's dive into those arguments:

provider "kubernetes" {
  host = "https://kubernetes.dev.example.com:26443"

  client_certificate     = file("~/.kube/client-cert.pem")
  client_key             = file("~/.kube/client-key.pem")
  cluster_ca_certificate = file("~/.kube/cluster-ca-cert.pem")
}

These arguments are self-explanatory, in that they do what they say. We are passing the paths to the file builtin function, which ingests each file's contents as a UTF-8 encoded string. The same arguments can also be populated from data sources within a given cloud platform, allowing for truly dynamic configuration of the provider.
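
For instance, here is a minimal sketch using the azurerm provider's AKS data source (the cluster name and resource group below are placeholders, and the azurerm provider is assumed to already be configured):

data "azurerm_kubernetes_cluster" "this" {
  name                = "example-aks-cluster"
  resource_group_name = "example-rg"
}

provider "kubernetes" {
  # The kube_config block exposes the certificates base64-encoded,
  # so they are decoded before being handed to the provider.
  host                   = data.azurerm_kubernetes_cluster.this.kube_config[0].host
  client_certificate     = base64decode(data.azurerm_kubernetes_cluster.this.kube_config[0].client_certificate)
  client_key             = base64decode(data.azurerm_kubernetes_cluster.this.kube_config[0].client_key)
  cluster_ca_certificate = base64decode(data.azurerm_kubernetes_cluster.this.kube_config[0].cluster_ca_certificate)
}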

In-cluster configuration

If you're running your Terraform commands within a Kubernetes cluster, and it is that cluster you wish to execute against, then the in-cluster configuration is perfect. This configuration is done via the KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT environment variables, allowing Terraform to configure resources within the cluster it runs in.
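
In that case the provider block itself can stay empty (a minimal sketch):

provider "kubernetes" {
  # No arguments: when Terraform runs inside a pod, the provider falls back
  # to KUBERNETES_SERVICE_HOST / KUBERNETES_SERVICE_PORT and authenticates
  # with the pod's mounted service account token.
}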

Exec plugins

When it comes to Kubernetes, cloud providers all give their consumers the ability to retrieve credentials for managed clusters using CLI utilities. The exec plugin type of configuration allows for exactly this scenario, where you want to execute an external command to retrieve the correct details at runtime. The below example uses AWS' EKS service.

provider "kubernetes" {
  host                   = var.cluster_endpoint
  cluster_ca_certificate = base64decode(var.cluster_ca_cert)
  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    args        = ["eks", "get-token", "--cluster-name", var.cluster_name]
    command     = "aws"
  }
}

The contents of the exec block could easily be subbed out for the Azure variant. Note that az aks get-credentials only writes a kubeconfig file rather than emitting a token, so AKS uses the kubelogin credential plugin instead (the server ID variable here is a placeholder for the Azure AD server application ID):

exec {
  api_version = "client.authentication.k8s.io/v1beta1"
  args        = ["get-token", "--login", "azurecli", "--server-id", var.aad_server_app_id]
  command     = "kubelogin"
}

This, in my opinion, is another unique feature of the Kubernetes Terraform provider, and one I find extremely valuable, as it allows you to call external mechanisms to authenticate from within the provider declaration itself.

Examples

In the below example we are going to look at deploying a very simple service to its own namespace using Terraform.

locals {
  labels = {
    app = "example"
    env = "dev"
  }
}

provider "kubernetes" {
  config_path    = "~/.kube/config"
  config_context = "orbstack"
}

resource "kubernetes_namespace_v1" "this" {
  metadata {
    name = "example"
  }
}

resource "kubernetes_deployment_v1" "this" {
  metadata {
    name      = "example-deploy"
    namespace = kubernetes_namespace_v1.this.metadata[0].name
    labels    = local.labels
  }

  spec {
    replicas = 3

    selector {
      match_labels = local.labels
    }

    template {
      metadata {
        # Pod names and the namespace are managed by the Deployment,
        # so the template only needs the labels the selector matches on.
        labels = local.labels
      }

      spec {
        container {
          name  = "example"
          image = "karthequian/helloworld:latest"
          port {
            container_port = 8080
          }
        }
      }
    }
  }
}

resource "kubernetes_service_v1" "this" {
  metadata {
    name      = "example-service"
    namespace = kubernetes_namespace_v1.this.metadata[0].name
    labels    = local.labels
  }

  spec {
    type = "LoadBalancer"
    port {
      port        = 80
      target_port = 8080
    }
    selector = local.labels
  }
}
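
To surface the address the Service receives, an output can reference the Service's status (a sketch following the provider's documented status attribute; the IP only populates once a load balancer has been provisioned):

output "service_ip" {
  description = "External IP assigned to the example LoadBalancer Service"
  value       = kubernetes_service_v1.this.status[0].load_balancer[0].ingress[0].ip
}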

The above creates us a Service that distributes traffic across the three Pods created by the Deployment.

The advantage of using OpenTofu/Terraform for provisioning our Kubernetes services is that it is a common language teams can understand irrespective of the service or infrastructure they're trying to deploy at a given moment. It also means we have access to the extremely powerful features of OpenTofu/Terraform; examples of this above are referencing the namespace resource rather than typing its name out, and the locals used for our labels.

Closing Out

In this post we walked through the Kubernetes provider and how to set it up and configure it. There were four main ways for us to do this: file configuration, credential configuration, in-cluster configuration, and exec plugins. These give engineers the flexibility to configure Kubernetes in whatever way is required for a given situation. Finally, we closed out by looking at an example of how we can use Terraform to deploy a simple service to Kubernetes.

Note: While this blog references Terraform, everything mentioned here also applies to OpenTofu. New to OpenTofu? It is a fork of Terraform 1.5.7, created as a result of HashiCorp's license change from MPL to BUSL. OpenTofu is an open-source alternative to Terraform that is governed by the Linux Foundation. All features available in Terraform 1.5.7 or earlier are also available in OpenTofu. Find out the history of OpenTofu here.

Don't take our word for it, try it for yourself.
