Deploy Kubernetes Resources using Terraform

Streamline Kubernetes with Terraform! Learn when and why to use it, benefits, and step-by-step setup for efficient, consistent deployments.

Saksham Awasthi
17 min read

As the configuration becomes more complicated, it can be difficult for Infrastructure or DevOps Engineers to manually deploy and manage Kubernetes using the cloud console or terminal. This frequently results in inconsistent setups and sluggish development, which leads to issues and downtime.

This blog will explain how to use infrastructure-as-code tools, such as Terraform, to deploy resources on Kubernetes. Although this approach is not always the right choice, in certain scenarios it is both beneficial and the best option.

To understand this, let's first understand the different ways in which we can deploy resources on Kubernetes:

  1. Apply manifest files directly to your Kubernetes cluster.
  2. Bundle manifest files together, deploy, and version them using Helm.
  3. Use something like ArgoCD to sync manifests & Helm charts from Git to the cluster.
  4. Use an IaC like Terraform to deploy manifests on the cluster.

There are mainly two reasons to use Terraform to deploy on Kubernetes:

  1. Sometimes, a “new cluster” won’t be just infrastructure: it needs components on top of it before it can be considered ready for the next step. One example is using Terraform to deploy ArgoCD + some storageClasses + an ingress-controller on Kubernetes, and letting ArgoCD deploy the rest of what’s needed on the cluster from there.
  2. Sometimes, there are dependencies between things on top of Kubernetes and infrastructure outside Kubernetes - Terraform is excellent for solving this.

Terraform is helpful for the above two use cases for the following reasons:

  1. It enables managing Kubernetes resources AND external infrastructure in one workflow.
  2. It ensures predictable and consistent updates with state tracking.
  3. It supports multi-cloud or multi-region Kubernetes clusters.

Prerequisites

To follow this guide, you'll need:

  1. Familiarity with Kubernetes concepts like Pods, Deployments, Services, and ConfigMaps.
  2. Terraform CLI installed on your machine.
  3. Access to a Kubernetes cluster such as Minikube or AWS EKS Cluster.
  4. The Kubernetes CLI tool (kubectl), configured to interact with your cluster.

When to use Terraform for Kubernetes?

Before we dive into the use cases of when you should genuinely use Terraform for Kubernetes, let's quickly go over what Terraform is.

What is Terraform

Terraform is an infrastructure-as-code (IaC) tool that lets you define and provision infrastructure in configuration files (.tf). This is particularly useful when deploying resources to a cloud platform such as AWS, Azure, or GCP, or to an on-premises server.

Terraform connects to cloud platforms and services via “Providers” and their APIs. To deploy an EC2 instance, you only need to write a Terraform configuration with the instance details, such as instance type, AMI, and storage. The AWS provider then handles the API calls needed to create and configure the networking, storage, and compute resources.
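
For example, a minimal sketch of such an EC2 configuration could look like this (the AMI ID and names below are hypothetical placeholders):


provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "example" {
  ami           = "ami-0123456789abcdef0" # hypothetical AMI ID
  instance_type = "t3.micro"

  root_block_device {
    volume_size = 20 # root volume size in GiB
  }

  tags = {
    Name = "terraform-example"
  }
}
    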

Use Cases

Let's dive into each of the use cases for deploying on Kubernetes with Terraform:

  1. Bootstrap a New Cluster with Essential Components
  • Challenge: The infrastructure alone is insufficient when provisioning a new Kubernetes cluster. Additional components such as ingress controllers, storage classes, and deployment tools are needed before the cluster can host workloads. Keeping this setup consistent and free of human error becomes challenging, especially for clusters that rely on GitOps tools such as ArgoCD to automate the deployment of Kubernetes manifest files from Git repositories to the cluster.
  • Solution: You can define these essential configurations alongside the cluster creation process with Terraform. For example, you can provision a cluster and install critical tools such as ArgoCD (via Helm or manifests) to deploy resources faster or set up ingress controllers and storage classes for your application workload. This will make sure that every cluster begins with the same base setup.
  • Benefit: Terraform defines your infrastructure in declarative configuration files, ensuring that every cluster is set up consistently and according to best practices. This approach eliminates the need for manual setup, reducing the risk of errors and saving time. Additionally, Terraform's ability to version control your infrastructure configurations allows for easy replication and scaling of environments and straightforward rollbacks if needed. By automating the setup process, Terraform enables teams to focus on deploying and managing applications rather than dealing with the complexities of infrastructure provisioning.
  2. Manage Dependencies Between Kubernetes and External Infrastructure
  • Challenge: Applications running in Kubernetes typically require resources that live outside the cluster, such as databases and DNS records. Managing these external dependencies manually is error-prone and may result in configuration mismatches, slow or failed deployments, and downtime.
  • Solution: Integrating Kubernetes resource provisioning with external infrastructure setup makes Terraform great for cross-infrastructure dependency management. For example, you can deploy a component on GKE while its DNS records live in AWS Route 53: Terraform creates the DNS records and ensures they are correctly linked to the Kubernetes-based services. This integration ensures that all dependencies, whether within or outside the cluster, are provisioned together and wired correctly (see the sketch after this list).
  • Benefit: Terraform wraps complex dependency management, ensuring the consistent, correct deployment of Kubernetes resources and their external dependencies. This reduces deployment errors, speeds up the deployment cycle, and facilitates integration across environments.
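
As a rough sketch of the second use case (assuming an existing Route 53 hosted zone; the zone ID and hostnames below are placeholders), a LoadBalancer Service and the DNS record that points at it can live in the same Terraform configuration, letting Terraform resolve the dependency between them:


resource "kubernetes_service" "web" {
  metadata {
    name = "web"
  }
  spec {
    selector = {
      app = "web"
    }
    type = "LoadBalancer"
    port {
      port        = 80
      target_port = 80
    }
  }
}

resource "aws_route53_record" "web" {
  zone_id = "Z0123456789EXAMPLE" # hypothetical hosted zone ID
  name    = "web.example.com"
  type    = "CNAME"
  ttl     = 300

  # Terraform waits for the Service's load balancer hostname before creating the record
  records = [kubernetes_service.web.status[0].load_balancer[0].ingress[0].hostname]
}
    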

Example Scenario

  1. Using Terraform to provision an EKS Cluster within AWS Infrastructure.

The graphic above illustrates how Terraform uses an AWS provider to provision the Kubernetes cluster.

  • The developer prepares the Terraform configuration files to provision a Kubernetes cluster, such as an EKS cluster. These setups may include networking, security settings, or node count.
  • After receiving these configuration files, Terraform creates an infrastructure change plan. Terraform stores your infrastructure's current state in a state file to determine what needs to be changed for upcoming updates or deployments.
  • After completing the plan, Terraform uses the AWS API to deploy the modifications to the cloud provider's infrastructure, such as AWS. Terraform can send queries to AWS via this API to construct or modify cloud resources according to the manifest files.
  • Last but not least, AWS provisions an Elastic Kubernetes Service (EKS) cluster, a fully managed Kubernetes service that helps deploy and manage containerized applications.
  2. Using Terraform to deploy resources within a Kubernetes Cluster.

The above image displays how Kubernetes objects are provisioned using the Kubernetes provider API in Terraform.

  • Here, the DevOps engineer creates the manifest files for the Kubernetes resources, such as pods or secrets, which gives the user complete control over the cluster configuration.
  • The Kubernetes provider is defined in Terraform config files to interact with the Kubernetes API, which helps deploy the Kubernetes resources.
  • After the plan is run, these deployments are tracked and managed in the Terraform state file, which helps manage drifts and audit infrastructure changes.
  • When apply is initiated, Terraform communicates with the Kubernetes API to deploy or update the resources in the Kubernetes cluster.

Developers often get confused by the question: can Kubernetes manifests replace Terraform?

The answer is no (well, unless you use something like Crossplane), as Kubernetes manifest files define and manage resources within a Kubernetes cluster, like pods, services, and volumes. They only care about the cluster itself. Terraform is a broader tool that can manage a wide range of infrastructure beyond Kubernetes. It can provision cloud resources like virtual machines, networks, storage and even entire Kubernetes clusters (like EKS on AWS). So, while Kubernetes manifests are essential for managing what’s inside a cluster, Terraform handles the bigger picture, the creation of the cluster itself, and other resources.

While Kubernetes and Terraform serve different purposes, they are often used together to manage infrastructure and applications. Many teams use Terraform to manage infrastructure and Helm to manage applications within a Kubernetes cluster.
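
Terraform can even drive Helm itself through the official helm provider, which is handy for the bootstrap use case described earlier. A minimal sketch (the ingress-nginx chart is just an example) might look like this:


provider "helm" {
  kubernetes {
    config_path = "~/.kube/config"
  }
}

resource "helm_release" "ingress_nginx" {
  name             = "ingress-nginx"
  repository       = "https://kubernetes.github.io/ingress-nginx"
  chart            = "ingress-nginx"
  namespace        = "ingress-nginx"
  create_namespace = true
}
    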

Comparison of Terraform and Helm

Helm and Terraform are popular tools for automating and managing resources within a Kubernetes cluster. However, the way they handle the process is very different: Helm focuses on packaging, templating, and versioning application releases inside the cluster, while Terraform manages the full infrastructure lifecycle, including the cluster itself, through its state and plan/apply workflow.

Prerequisites

Before we dive into the hands-on tutorial on provisioning your Kubernetes cluster locally and over a public cloud provider, such as AWS, it’s crucial to set up a couple of things on your local machine.

First, you’ll need to install Terraform, which will handle all the automation we need for provisioning infrastructure. Since we’ll be working with AWS for this example, you’ll also need to install and configure the AWS CLI. The AWS CLI is necessary because it allows Terraform to interact with your AWS account locally to deploy the EKS cluster.

Installing Terraform

  1. Go to the official Terraform download page.
  2. On the download page, select your operating system (Windows, macOS, or Linux).
  3. Click on the appropriate download link for your platform. The Terraform binary will be downloaded as a zip file.
  4. Once the download is complete, extract the contents of the zip file.
  5. Move the extracted 'terraform' binary to a directory in your system's PATH for easier access.
  6. Once done, verify the installation: open a new terminal or command prompt and run the terraform version command:

$ terraform version
Terraform v1.9.2
on windows_386

Your version of Terraform is out of date! The latest version
is 1.9.7. You can update by downloading from https://www.terraform.io/downloads.html
    

Setting up AWS CLI

Next, install and configure the AWS CLI on the machine.

Installing AWS CLI

  1. Download the AWS CLI tool and complete the installation.
  2. Once installed, verify the AWS CLI version.

$ aws --version
aws-cli/2.17.65 Python/3.12.6 Windows/11 exe/AMD64
    

Configuring the AWS CLI

Now, configure the CLI with credentials so that it can access your AWS account.

  1. Go to the AWS Console page and navigate to the Security Credential page.
  2. From there, generate your access keys and copy them. 
  3. Configure your AWS CLI with the access keys, region, and output format to connect to the console.

$ aws configure
AWS Access Key ID: xxxxx
AWS Secret Access Key: xxxxx
Default region name [None]: 
Default output format [None]:
    
  4. Verify your credentials and configuration setup.

$ cd ~/.aws && cat credentials
[default]
aws_access_key_id = xxxxx
aws_secret_access_key = xxxxx
    

Finally, clone this GitHub repository containing the Terraform code used in the hands-on demonstration.

Setting Up Terraform Provider for Kubernetes

This section will teach you how to configure the Kubernetes provider in the Terraform configuration file. 

Adding Configuration Path and Configuration Context

When using a local kubernetes cluster, one of the simplest ways to ensure that Terraform can connect with your cluster is by providing your cluster’s config file as an argument to the provider block. 

Let’s define a Terraform “kubernetes” provider, which is used to interact with the Kubernetes API and manage the resources in your Kubernetes cluster. Add the config path of the ~/.kube/config file in the provider configuration; this is the default location where kubectl stores the configuration for your local cluster. This configuration includes the API server address, authentication credentials, and cluster certificates.


provider "kubernetes" {
  config_path = "~/.kube/config"
}
    

Additionally, if you have more than one local cluster, all of their contexts will be stored in the same Kubernetes config file, which can cause conflicts and throw errors in Terraform. To avoid this, specify which cluster you want to use in the Kubernetes provider configuration with the config_context argument.


provider "kubernetes" {
  config_path    = "~/.kube/config"
  config_context = "cluster-context"
}
    

Accessing cluster through Connection strings

There are other ways to set up authentication between your Kubernetes cluster and Terraform, such as explicitly providing your keys and cluster API endpoint instead of your entire config file. This approach is recommended when working with cloud Kubernetes clusters such as AWS EKS or GCP GKE. In this scenario, configure your Kubernetes provider with the host, client key, client certificate, and cluster CA certificate.


provider "kubernetes" {
  host = "https://cluster_endpoint:port"

  client_certificate     = file("~/.kube/client-cert.pem")
  client_key             = file("~/.kube/client-key.pem")
  cluster_ca_certificate = file("~/.kube/cluster-ca-cert.pem")
}
    

Here, the host argument specifies the URL of the Kubernetes API server. It is the endpoint that Terraform will use to connect to your Kubernetes cluster. The client_certificate and client_key arguments point to the certificate and key Terraform uses to authenticate to the cluster, while cluster_ca_certificate lets Terraform verify the identity of the Kubernetes API server, ensuring that Terraform is communicating with the correct and trusted cluster.

Instead of hardcoding the paths, you could pass the configuration through Terraform variables. First, define a variables.tfvars file with the host and the associated key pairs as variables. Keep in mind that the variables.tfvars file holds your cluster secrets, so it's important not to expose it publicly, for example by pushing it in plain text to GitHub, just as you would protect environment variables. These files should be stored in a safe and secure place.


host                   = "https://127.x.x.x:32xxx"
client_certificate     = "LS0tLS1CRUdJTiB..."
client_key             = "LS0tLS1CRUdJTiB..."
cluster_ca_certificate = "LS0tLS1CRUdJTiB..."
    

Now, use these values in your Kubernetes provider configuration.


terraform {
 required_providers {
   kubernetes = {
     source = "hashicorp/kubernetes"
   }
 }
}

variable "host" {
 type = string
}

variable "client_certificate" {
 type = string
}

variable "client_key" {
 type = string
}

variable "cluster_ca_certificate" {
 type = string
}

provider "kubernetes" {
 host = var.host
 client_certificate     = base64decode(var.client_certificate)
 client_key             = base64decode(var.client_key)
 cluster_ca_certificate = base64decode(var.cluster_ca_certificate)
}
    
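
Note that Terraform automatically loads only files named terraform.tfvars or ending in .auto.tfvars; a custom-named file such as variables.tfvars must be passed explicitly with the -var-file flag:


$ terraform plan -var-file="variables.tfvars"
$ terraform apply -var-file="variables.tfvars"
    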

Creating Your Terraform Configuration for Local Cluster

Now that we understand how to set up Terraform’s Kubernetes provider, this section focuses on creating Kubernetes deployments and resources with Terraform on our local cluster.

How to deploy a YAML file in Kubernetes with Terraform?

There are two ways to deploy resources to a Kubernetes cluster with Terraform. 

The first one is where we have a separate manifest file, which, in this case, is an nginx-deployment.yaml manifest file:


apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80
    

You can refer to this manifest file using the kubernetes_manifest resource type in the provider.tf file:


provider "kubernetes" {
  . . .
}

resource "kubernetes_manifest" "nginx_deployment" {
  manifest = yamldecode(file("${path.module}/nginx-deployment.yaml"))
}
    

The second method uses the kubernetes_deployment resource type in an nginx.tf file to deploy NGINX. This is generally the preferred way to handle Kubernetes deployments that do not require extensive customization, and it helps you manage everything in one place: your infrastructure, such as Kubernetes clusters, and the objects or applications running on them, all in Terraform configuration, which prevents errors caused by misconfigured YAML files.


provider "kubernetes" {
  . . .
}

resource "kubernetes_deployment" "nginx" {
  // Configure your kubernetes deployment here
  metadata {
    name = "scalable-nginx-app"
    labels = {
      App = "ScalableNginxApp"
    }
  }

  spec {
    replicas = 2
    selector {
      match_labels = {
        App = "ScalableNginxApp"
      }
    }
  . . .
}
    

For more details, you can check the HashiCorp tutorial.

Terraform Configuration for NGINX deployment on Local Cluster

To begin your deployment on the cluster, you first need to define what you want to set up in the Terraform config.

Here, you will deploy a simple NGINX application on the local or on-premise kubernetes cluster using the Terraform code.

As discussed earlier, configure the config path in the Kubernetes provider block. In this example, we will use the preferred approach with the kubernetes_deployment resource to define the deployment with metadata, such as the app name (scalable-nginx-app) and labels for identification.

The deployment is configured to create two replicas of an NGINX container using the nginx:latest image. Each container listens on port 80 and is assigned resource requests and limits. The containers request a minimum of 250m CPU and 50Mi memory while limiting usage to 500m CPU and 512Mi memory, ensuring both performance and resource efficiency.


provider "kubernetes" {
    config_path = "~/.kube/config"
}

resource "kubernetes_deployment" "nginx" {
  metadata {
    name = "scalable-nginx-app"
    labels = {
      App = "ScalableNginxApp"
    }
  }

  spec {
    replicas = 2
    selector {
      match_labels = {
        App = "ScalableNginxApp"
      }
    }
    template {
      metadata {
        labels = {
          App = "ScalableNginxApp"
        }
      }
      spec {
        container {
          image = "nginx:latest"
          name  = "nginx-app"

          port {
            container_port = 80
          }

          resources {
            limits = {
              cpu    = "500m"
              memory = "512Mi"
            }
            requests = {
              cpu    = "250m"
              memory = "50Mi"
            }
          }
        }
      }
    }
  }
}
    

Deploying NGINX on the Local Kubernetes Cluster

Now, using the above code, initialize the Terraform configuration. This installs the necessary provider plugins, in this case the hashicorp/kubernetes provider:


$ terraform init
Initializing the backend...
Initializing provider plugins...
- Finding latest version of hashicorp/kubernetes...
- Installing hashicorp/kubernetes v2.33.0...
- Installed hashicorp/kubernetes v2.33.0 (signed by HashiCorp)

Terraform has been successfully initialized!
    

Use the terraform apply command to deploy your infrastructure. Running this command will prompt you to review, approve, or deny the infrastructure changes for the desired setup.


$ terraform apply
Plan: 1 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

kubernetes_deployment.nginx: Creating...
kubernetes_deployment.nginx: Still creating... [10s elapsed]
kubernetes_deployment.nginx: Creation complete after 16s [id=default/scalable-nginx-app]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
    

After the Deployment is complete, you can check your NGINX deployment in the cluster.


$ kubectl get deploy
NAME                   READY   UP-TO-DATE   AVAILABLE   AGE
scalable-nginx-app      2/2     2            2           4m55s
    

You can then access the NGINX Welcome page on your browser by port-forwarding the deployed pods:


$ kubectl port-forward deployment/scalable-nginx-app 8080:80
    

To verify that the application is up and running, open your browser and go to localhost:8080. 
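
As an optional alternative to port-forwarding, you could also let Terraform expose the deployment through a Service. Below is a minimal sketch (the Service name is arbitrary, and the NodePort type suits local clusters such as Minikube):


resource "kubernetes_service" "nginx" {
  metadata {
    name = "scalable-nginx-svc"
  }

  spec {
    selector = {
      App = kubernetes_deployment.nginx.metadata[0].labels.App
    }
    type = "NodePort"
    port {
      port        = 80
      target_port = 80
    }
  }
}
    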

Creating Your Terraform Configuration for EKS Cluster

In the example above, we deployed a simple NGINX Deployment on a Kubernetes cluster using Terraform. We will now walk through how to provision a Kubernetes cluster itself on a public cloud provider such as AWS, GCP, or Azure. This example uses this Terraform code to provision an AWS Elastic Kubernetes Service (EKS) cluster; for other Kubernetes providers, refer to the Terraform provider Documentation.

Terraform Manifest to provision EKS Cluster

Let’s examine the Terraform code to deploy an AWS EKS cluster. First, define and configure the AWS provider block in the provider.tf file. Next, define the local values used in the resource configuration, such as the EKS cluster name, region, and VPC CIDR ranges. This helps simplify and standardize resource definitions later in the configuration.


locals {
  region = "us-east-1"
  name   = "meteorops-cluster"
  vpc_cidr = "10.123.0.0/16"
  azs      = ["us-east-1a", "us-east-1b"]
  public_subnets  = ["10.123.1.0/24", "10.123.2.0/24"]
  private_subnets = ["10.123.3.0/24", "10.123.4.0/24"]
  intra_subnets   = ["10.123.5.0/24", "10.123.6.0/24"]
  tags = {
    Example = local.name
  }
}

# Provider Block
provider "aws" {
  region = "us-east-1"
}
    

Now, we will define the VPC block in the vpc.tf file using the terraform-aws-modules/vpc/aws module. Let’s look at the module arguments:

  • It divides the VPC's CIDR block into public, private, and intra-subnets. The module enables a NAT gateway for traffic routing from private subnets and tags subnets with specific Kubernetes roles. 
  • Each subnet is tagged based on its role in Kubernetes. Public subnets are designated for external load balancers (ELBs) that handle incoming traffic from outside the VPC, and private subnets are tagged for internal load balancers (ILBs) that manage traffic between services within the cluster. 
  • This tagging ensures proper traffic routing for Kubernetes services.

# VPC Block
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 4.0"

  name = local.name
  cidr = local.vpc_cidr

  azs             = local.azs
  private_subnets = local.private_subnets
  public_subnets  = local.public_subnets
  intra_subnets   = local.intra_subnets

  enable_nat_gateway = true

  public_subnet_tags = {
    "kubernetes.io/role/elb" = 1
  }

  private_subnet_tags = {
    "kubernetes.io/role/internal-elb" = 1
  }
}
    

Next, create an EKS cluster config in the eks.tf file using a pre-built module, terraform-aws-modules/eks/aws. Modules in Terraform are reusable pieces of code that simplify setting up resources, such as clusters or networks, with minimal configuration. 


# EKS Module Declaration Block
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "20.24.3"

  cluster_name                   = local.name
  cluster_endpoint_public_access = true

  cluster_addons = {
    coredns = {
      most_recent = true
    }
    kube-proxy = {
      most_recent = true
    }
    vpc-cni = {
      most_recent = true
    }
  }

  vpc_id                   = module.vpc.vpc_id
  subnet_ids               = module.vpc.private_subnets
  control_plane_subnet_ids = module.vpc.intra_subnets

  # EKS Managed Node Group(s)
  eks_managed_node_group_defaults = {
    ami_type       = "AL2_x86_64"
    instance_types = ["m5.large"]

    attach_cluster_primary_security_group = true
  }

  eks_managed_node_groups = {
    amc-cluster-wg = {
      min_size     = 1
      max_size     = 2
      desired_size = 1

      instance_types = ["t3.large"]
      capacity_type  = "SPOT"

      tags = {
        ExtraTag = "owned by MeteorOps"
      }
    }
  }

  tags = local.tags
}
    

Here, the eks module is configured with a custom cluster name (meteorops-cluster), and the cluster_endpoint_public_access argument enables public access to the cluster's endpoint. It also includes the latest versions of essential EKS add-ons like coredns, kube-proxy, and vpc-cni, which handle networking and DNS within the cluster. The vpc_id argument links the cluster to the VPC, subnet_ids places the worker nodes in the private subnets, and control_plane_subnet_ids places the control plane network interfaces in the intra subnets.

For the worker nodes, the eks_managed_node_group_defaults argument sets the default configuration for the EKS-managed node groups. Additionally, the attach_cluster_primary_security_group is set to true, which ensures that the primary security group is attached to the nodes, providing basic network access controls. Remember that these configs are just defaults, and you can change them for a specific node group if needed.

In the eks_managed_node_groups section, the amc-cluster-wg node group is initially set to have one worker node. It can scale up or down, with a minimum of one node and a maximum of two nodes allowed. It uses t3.large instance types, and to reduce costs, the capacity_type is set to SPOT. This means the nodes will use AWS Spot Instances, which are cheaper than On-Demand instances but can be reclaimed by AWS when it needs the capacity back.

Deploying the EKS Cluster with Terraform

By now, we have seen how you can deploy Kubernetes manifests using Terraform and deploy resources with kubernetes_deployment on a local Kubernetes cluster. In this section, we will provision an EKS cluster with Terraform. We will use the terraform init command to initialize the EKS cluster folder; it downloads the required modules (eks, vpc, kms) and installs provider plugins such as aws, tls, time, cloudinit, and null:


$ terraform init
Initializing the backend...
Initializing modules...
Downloading registry.terraform.io/terraform-aws-modules/eks/aws 20.24.3 for eks...
- eks in .terraform/modules/eks
- eks.eks_managed_node_group in .terraform/modules/eks/modules/eks-managed-node-group
- eks.eks_managed_node_group.user_data in .terraform/modules/eks/modules/_user_data
- eks.fargate_profile in .terraform/modules/eks/modules/fargate-profile
Downloading registry.terraform.io/terraform-aws-modules/kms/aws 2.1.0 for eks.kms...
- eks.kms in .terraform/modules/eks.kms
- eks.self_managed_node_group in .terraform/modules/eks/modules/self-managed-node-group
- eks.self_managed_node_group.user_data in .terraform/modules/eks/modules/_user_data
Downloading registry.terraform.io/terraform-aws-modules/vpc/aws 4.0.2 for vpc...
- vpc in .terraform/modules/vpc
Initializing provider plugins...
- Finding hashicorp/tls versions matching ">= 3.0.0"...
- Finding hashicorp/time versions matching ">= 0.9.0"...
- Finding hashicorp/cloudinit versions matching ">= 2.0.0"...
- Finding hashicorp/null versions matching ">= 3.0.0"...
- Finding hashicorp/aws versions matching ">= 4.33.0, >= 4.35.0, >= 5.61.0"...
- Installing hashicorp/tls v4.0.6...
- Installed hashicorp/tls v4.0.6 (signed by HashiCorp)
- Installing hashicorp/time v0.12.1...
- Installed hashicorp/time v0.12.1 (signed by HashiCorp)
- Installing hashicorp/cloudinit v2.3.5...
- Installed hashicorp/cloudinit v2.3.5 (signed by HashiCorp)
- Installing hashicorp/null v3.2.3...
- Installed hashicorp/null v3.2.3 (signed by HashiCorp)
- Installing hashicorp/aws v5.71.0...
- Installed hashicorp/aws v5.71.0 (signed by HashiCorp)
. . .

Terraform has been successfully initialized!
    

Now, we will use the terraform apply command to deploy the EKS infrastructure and wait for the cluster to finish provisioning.


$ terraform apply
Plan: 63 to add, 0 to change, 0 to destroy.
╷
│ Warning: Argument is deprecated
│
│   with module.vpc.aws_eip.nat,
│   on .terraform\modules\vpc\main.tf line 1044, in resource "aws_eip" "nat":
│ 1044:   vpc = true
│
│ use domain attribute instead
│
│ (and 2 more similar warnings elsewhere)
╵

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

module.eks.aws_cloudwatch_log_group.this[0]: Creating...
module.vpc.aws_eip.nat[1]: Creating...
module.eks.module.eks_managed_node_group["amc-cluster-wg"].aws_iam_role.this[0]: Creating...
module.vpc.aws_vpc.this[0]: Creating...
module.eks.aws_iam_role.this[0]: Creating...
module.vpc.aws_eip.nat[0]: Creating...
. . .
Apply complete! Resources: 63 added, 0 changed, 0 destroyed.
    

Now, you can verify that your EKS cluster is up and running via the AWS Console.
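
You can also confirm this from the terminal with the AWS CLI, assuming the cluster name and region defined in the locals block earlier. describe-cluster should report an ACTIVE status once provisioning finishes, and update-kubeconfig configures kubectl access to the new cluster:


$ aws eks describe-cluster --name meteorops-cluster --region us-east-1 --query "cluster.status"
$ aws eks update-kubeconfig --name meteorops-cluster --region us-east-1
    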

Managing the Resource State on your Kubernetes Cluster

You can use the terraform state list command to display all the resources Terraform manages in the current state file. When executed, it provides a concise list of resources that Terraform tracks, including their names and types.

Let’s run it for the local kubernetes cluster. It will show something like this.


$ terraform state list
kubernetes_deployment.nginx
    

If you run the state list command for your EKS cluster, it will display the resources like so:


$ terraform state list
module.eks.data.aws_caller_identity.current
module.eks.data.aws_eks_addon_version.this["coredns"]
module.eks.data.aws_eks_addon_version.this["kube-proxy"]
module.eks.data.aws_eks_addon_version.this["vpc-cni"]
module.eks.data.aws_iam_policy_document.assume_role_policy[0]
module.eks.data.aws_iam_session_context.current
module.eks.data.aws_partition.current
module.eks.data.tls_certificate.this[0]
module.eks.aws_cloudwatch_log_group.this[0]
module.eks.aws_ec2_tag.cluster_primary_security_group["Example"]
module.eks.aws_eks_addon.this["coredns"]
module.eks.aws_eks_addon.this["kube-proxy"]
module.eks.aws_eks_addon.this["vpc-cni"]
module.eks.aws_eks_cluster.this[0]
module.eks.aws_iam_openid_connect_provider.oidc_provider[0]
module.eks.aws_iam_policy.cluster_encryption[0]
module.eks.aws_iam_role.this[0]
module.eks.aws_iam_role_policy_attachment.cluster_encryption[0]
module.eks.aws_iam_role_policy_attachment.this["AmazonEKSClusterPolicy"]
module.eks.aws_iam_role_policy_attachment.this["AmazonEKSVPCResourceController"]
    
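
To inspect all the attributes Terraform has recorded for any single tracked resource, you can use the terraform state show command, for example against the local NGINX deployment created earlier:


$ terraform state show kubernetes_deployment.nginx
    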

To Recap

Terraform is helpful to deploy Kubernetes resources when:

  • You want a structured way of setting up, maintaining, and scaling infrastructure.
  • You want to automate deployments, reduce manual errors, and ensure configuration consistency.
  • You must manage both local Kubernetes clusters and cloud-based clusters such as AWS EKS.

The way to use it involves:

  • Determining the desired state of the infrastructure and writing configuration files that express how that state is achieved.
  • Using Terraform’s state features to make changes and rollbacks easily.
  • Writing reusable modules to package configuration for sharing and applying them to other projects.
  • Parameterizing those modules with variables and combining them with cloud resources through Terraform’s built-in providers.

The benefits of using it are:

  • Automation that reduces manual errors.
  • Configurations that stay as consistent as possible across environments.
  • State management that simplifies updates and rollbacks.
  • Reusable modules that keep things standardized and save time.

The risks of using it in the wrong place are:

  • Terraform may not be suited for dynamic workloads requiring frequent change.
  • If you depend solely on Terraform for application-layer resource management, you will end up handling deployments that are better suited to tools like Helm.

Terraform is a good fit for managing infrastructure-level components in Kubernetes, specifically cluster provisioning, networking, and cloud provider integrations. It is, however, not ideal for running dynamic application-layer resources in Kubernetes; Kubernetes-native tools like kubectl, Helm, or GitOps manage these better. So, try to avoid using Terraform for application-specific deployments within Kubernetes.
Terraform is a good fit if we are talking about managing Infrastructure-level components in Kubernetes, specifically Cluster provisioning, networking, and Cloud provider integrations. It is, however, not ideal for running dynamic application layer resources in Kubernetes. It’s the application of Kubernetes-native tools like kubectl, Helm, or GitOps to manage these better. So, try to avoid using Terraform for application-specific deployments within Kubernetes.