Intermediate · Terraform · 45 minutes
Deploy an EKS Cluster with Terraform
Step-by-step tutorial to provision a production-ready EKS cluster on AWS using Terraform modules with networking and node groups.
Prerequisites
- Basic Terraform knowledge
- AWS account with admin access
- kubectl installed
Tools Used
terraform · kubectl · aws-cli
This tutorial walks you through deploying a production-ready Amazon EKS cluster using Terraform.
Step 1: Set Up the Project Structure
Create a new Terraform project:
mkdir eks-cluster && cd eks-cluster
touch main.tf variables.tf outputs.tf
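For anything beyond a throwaway experiment, it is worth storing Terraform state remotely so it survives your laptop and can be shared. A minimal sketch, assuming you already have an S3 bucket (the bucket name below is a placeholder, not from this tutorial):

```hcl
# backend.tf — optional remote state (bucket name is illustrative; replace with your own)
terraform {
  backend "s3" {
    bucket = "my-terraform-state-bucket"
    key    = "eks-cluster/terraform.tfstate"
    region = "us-west-2"
  }
}
```

If you add this after the first `terraform init`, run `terraform init -migrate-state` to move the existing local state into the bucket.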
Step 2: Configure the AWS Provider
terraform {
  required_version = ">= 1.5"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = var.region
}

variable "region" {
  description = "AWS region to deploy the cluster into"
  type        = string
  default     = "us-west-2"
}

variable "cluster_name" {
  description = "Name of the EKS cluster"
  type        = string
  default     = "my-eks-cluster"
}
Step 3: Create the VPC
EKS needs a VPC with both public subnets (for internet-facing load balancers) and private subnets (where the worker nodes run):
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
version = "~> 5.0"
name = "${var.cluster_name}-vpc"
cidr = "10.0.0.0/16"
azs = ["${var.region}a", "${var.region}b", "${var.region}c"]
private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
public_subnets = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]
enable_nat_gateway = true
single_nat_gateway = true
enable_dns_hostnames = true
public_subnet_tags = {
"kubernetes.io/role/elb" = 1
}
private_subnet_tags = {
"kubernetes.io/role/internal-elb" = 1
}
}
Step 4: Deploy the EKS Cluster
module "eks" {
source = "terraform-aws-modules/eks/aws"
version = "~> 20.0"
cluster_name = var.cluster_name
cluster_version = "1.30"
vpc_id = module.vpc.vpc_id
subnet_ids = module.vpc.private_subnets
eks_managed_node_groups = {
default = {
instance_types = ["t3.medium"]
min_size = 2
max_size = 4
desired_size = 2
}
}
}
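It is also convenient to expose a few values in outputs.tf for later use; this sketch assumes the standard outputs of the terraform-aws-modules/eks module:

```hcl
# outputs.tf — handy values for connecting to the cluster
output "cluster_name" {
  value = module.eks.cluster_name
}

output "cluster_endpoint" {
  value = module.eks.cluster_endpoint
}

output "region" {
  value = var.region
}
```

After an apply, `terraform output cluster_endpoint` shows the API server URL without digging through the AWS console.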
Step 5: Apply and Connect
terraform init
terraform plan
terraform apply
Provisioning typically takes 10–15 minutes while AWS creates the control plane and node group. Once the apply finishes, update your kubeconfig:
aws eks update-kubeconfig --name my-eks-cluster --region us-west-2
kubectl get nodes
You should see both worker nodes in the Ready state.
Cleanup
To avoid ongoing costs:
terraform destroy
If you created any Kubernetes Services of type LoadBalancer, delete them first; otherwise the load balancers they provisioned outside of Terraform can block deletion of the VPC.
Troubleshooting
Nodes not joining the cluster? Check that the node group subnets have NAT gateway access and the correct IAM roles are attached.
kubectl connection refused? Run aws eks update-kubeconfig again and verify your AWS credentials.