Using Terraform to Deploy to EKS

Author Lukonde Mwila

Last updated 2 May, 2023

14 mins read


Historically, companies were restricted to manual processes for maintaining IT infrastructure, but Infrastructure as Code (IaC) offers a different approach.

With the rise of cloud and automation technology, as well as DevOps practices, provisioning complex infrastructure has become simpler, more efficient, and more reliable. Cloud providers like Amazon Web Services (AWS), Google Cloud Platform, and Microsoft Azure expose APIs that facilitate the automation of resources in their environments. Your application’s underlying infrastructure and configuration can now be automated and versioned using code.

IaC enables writing and executing code to define, deploy, update, and destroy infrastructure. This model treats all aspects of operations as software, even those that represent hardware. By using IaC to define application environments, companies are better equipped to eliminate the risk of configuration drift and achieve more reliable outcomes in their architectures.

An increasingly popular IaC tool is Terraform. Terraform is an open-source, cloud-agnostic provisioning tool used to build, change, and version infrastructure safely and efficiently. It’s especially useful for provisioning complex platforms like Kubernetes clusters, which have been central to the increased adoption of cloud-native solutions.

Cloud providers like AWS have created managed services, like Amazon EKS (Amazon Elastic Kubernetes Service), to reduce the complexity of cluster management. AWS is responsible for provisioning, running, managing, and auto-scaling the control plane and etcd nodes across multiple AWS Availability Zones (AZs), enabling high availability. Users are responsible for adding and managing the EC2 worker nodes, unless they opt for the AWS Fargate serverless compute engine.

Amazon EKS clusters run within Amazon Virtual Private Clouds (VPCs). To communicate with a cluster, you need to configure it for public endpoint access, private endpoint access, or both.

In this tutorial, you’ll learn how to deploy a Kubernetes cluster to EKS using Terraform. We’ll break down the benefits and disadvantages of using Terraform for this purpose, as well as how it differs from native Kubernetes cluster deployment. In addition to that, this post will briefly touch on kOps and how it can be used to generate Terraform source for Kubernetes cluster provisioning.

Lastly, you will be introduced to CloudForecast’s Barometer, which is used to manage, monitor, and optimize the costs of running an EKS cluster.


The Advantages of Using Terraform

Using Terraform to provision infrastructure has a number of benefits for EKS clusters—as well as any other resources managed through IaC.

Automated Deployments

With your infrastructure defined as code, you can automate the provisioning of your infrastructure using CI/CD practices (the same way you would with application source code).

CI/CD pipelines eliminate the need for manual deployments by specific personnel. Instead, more people can be empowered to trigger pipelines—without the need for specialist knowledge around manual steps.

In addition to this, your IaC can go through familiar quality checks like pull requests for changes to the source code, as well as automated testing in the CI stage of the pipeline.

Efficiency and Reliability

Infrastructure changes are much faster with an automated deployment process. Terraform issues the underlying API requests in the correct sequence to reach the desired state, and it lets you specify prerequisite dependencies between resources.
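
As a minimal sketch of how this works (the resource names and CIDR ranges here are illustrative placeholders, not part of this tutorial's setup), Terraform infers ordering from attribute references and also accepts explicit depends_on hints:

# Implicit dependency: referencing aws_vpc.example.id tells Terraform
# to create the VPC before the subnet.
resource "aws_vpc" "example" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "example" {
  vpc_id     = aws_vpc.example.id
  cidr_block = "10.0.0.0/24"
}

resource "aws_internet_gateway" "example" {
  vpc_id = aws_vpc.example.id
}

# Explicit dependency: depends_on forces ordering when there is no
# attribute reference for Terraform to infer it from.
resource "aws_route_table" "example" {
  vpc_id     = aws_vpc.example.id
  depends_on = [aws_internet_gateway.example]
}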

Repeatability

Terraform’s declarative model for defining infrastructure resources makes the entire provisioning lifecycle repeatable for any software team with access to the IaC.

Versioning

Infrastructure in the form of code enables software teams to practice discrete versioning as they typically would, in line with software development best practices. This allows teams to maintain snapshots of versions as the infrastructure architecture changes, as well as perform rollbacks if instability occurs in the provisioning process.

Documentation

Another benefit of IaC, and Terraform in particular, is its relatively simple, readable style for declaring resources. As a result, the code becomes a form of documentation that enables various stakeholders in teams to understand the entire infrastructure landscape.

The Disadvantage of Using Terraform

Despite its strengths as a provisioning tool, Terraform falls short when it comes to configuration management. In contrast, configuration management tools like Chef, Puppet, Ansible, and SaltStack are all designed to install and manage software on existing servers. Terraform can be used in conjunction with tools like these to create a better provisioning and configuration lifecycle experience.
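
As a hypothetical sketch of that combination, you could have Terraform provision a server and then hand it off to a configuration management tool. The AMI ID and playbook.yml below are illustrative placeholders, and Ansible is just one example of such a tool:

# Terraform provisions the server...
resource "aws_instance" "app" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI ID
  instance_type = "t3.micro"
}

# ...and a configuration management tool installs and manages the software.
resource "null_resource" "configure_app" {
  # Re-run the playbook whenever the instance is replaced
  triggers = {
    instance_id = aws_instance.app.id
  }

  provisioner "local-exec" {
    # playbook.yml is a hypothetical Ansible playbook in your repository
    command = "ansible-playbook -i '${aws_instance.app.public_ip},' playbook.yml"
  }
}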

Provisioning an EKS Cluster Using Terraform

In this section, you will provision an EKS cluster using Terraform. The steps below will outline all the resources that need to be created (including variables). To get the most out of this tutorial, clone the repository with all of the IaC from here.

To proceed, you’ll need the following prerequisites:

  • An AWS account and an IAM identity with permissions to create the resources in this tutorial
  • The AWS CLI installed and configured with your credentials
  • The Terraform CLI installed
  • kubectl installed to interact with the cluster once it’s running

Networking Infrastructure with VPC

Your EKS cluster will be deployed into an isolated network in your AWS account, known as a VPC. VPCs typically consist of multiple network components that fulfil various roles. In this setup, the following network resources will be created and configured:

  • VPC: This lets you provision a logically isolated section of the AWS cloud to launch your resources in a virtual network that you define.
  • CIDR (Classless Inter-Domain Routing): This notation defines the IP address ranges used for VPC addressing.
  • Subnets: These are subdivisions of the VPC address range to enable different resources like EC2 instances, RDS databases, or load balancers to be deployed. This lets you control whether instances are reachable over the internet (whether they’re public or private). In this setup, you’ll create both public and private subnets for the EKS node groups.
  • Route Tables: This component makes the routing decisions that determine how traffic flows within the VPC, to the internet, and to other networks. VPCs come with a default main Route Table, which controls routing for subnets without an explicit association. In this solution, you’ll create a custom Route Table that sends public subnet traffic through the Internet Gateway, while the default Route Table routes the private subnets’ outbound traffic through the NAT Gateway in the public subnet.
  • Security Groups: These are virtual firewalls operating at an instance level. One Security Group can be attached to multiple instances. By default, they allow all outbound traffic and no inbound traffic.
  • NAT Gateway: This network component lets resources deployed in a private subnet initiate outbound (egress) connections to the internet while blocking unsolicited inbound traffic from reaching them.

VPC

# VPC Network Setup
resource "aws_vpc" "custom_vpc" {
  # Your VPC must have DNS hostname and DNS resolution support.
  # Otherwise, your worker nodes cannot register with your cluster.

  cidr_block           = var.vpc_cidr_block
  enable_dns_support   = true
  enable_dns_hostnames = true

  tags = {
    Name                                            = var.vpc_tag_name
    "kubernetes.io/cluster/${var.eks_cluster_name}" = "shared"
  }
}

Subnets, Internet Gateway, NAT Gateway, and Custom Route Table

# Create the private subnets
resource "aws_subnet" "private_subnet" {
  count             = length(var.availability_zones)
  vpc_id            = aws_vpc.custom_vpc.id
  cidr_block        = element(var.private_subnet_cidr_blocks, count.index)
  availability_zone = element(var.availability_zones, count.index)

  tags = {
    Name                                            = var.private_subnet_tag_name
    "kubernetes.io/cluster/${var.eks_cluster_name}" = "shared"
    "kubernetes.io/role/internal-elb"               = 1
  }
}

# Create the public subnets
resource "aws_subnet" "public_subnet" {
  count                   = length(var.availability_zones)
  vpc_id                  = aws_vpc.custom_vpc.id
  cidr_block              = element(var.public_subnet_cidr_blocks, count.index)
  availability_zone       = element(var.availability_zones, count.index)
  map_public_ip_on_launch = true

  tags = {
    Name                                            = var.public_subnet_tag_name
    "kubernetes.io/cluster/${var.eks_cluster_name}" = "shared"
    "kubernetes.io/role/elb"                        = 1
  }
}

# Create IGW for the public subnets
resource "aws_internet_gateway" "igw" {
  vpc_id = aws_vpc.custom_vpc.id

  tags = {
    Name = var.vpc_tag_name
  }
}

# Route the public subnet traffic through the IGW
resource "aws_route_table" "main" {
  vpc_id = aws_vpc.custom_vpc.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.igw.id
  }

  tags = {
    Name = var.route_table_tag_name
  }
}

# Route table and subnet associations
resource "aws_route_table_association" "internet_access" {
  count          = length(var.availability_zones)
  subnet_id      = aws_subnet.public_subnet[count.index].id
  route_table_id = aws_route_table.main.id
}

# Create Elastic IP for the NAT Gateway
resource "aws_eip" "main" {
  vpc = true
}

# Create NAT Gateway in the first public subnet
resource "aws_nat_gateway" "main" {
  allocation_id = aws_eip.main.id
  subnet_id     = aws_subnet.public_subnet[0].id

  tags = {
    Name = "NAT Gateway for Custom Kubernetes Cluster"
  }
}

# Add the NAT route to the VPC's default route table (used by the private subnets)
resource "aws_route" "main" {
  route_table_id         = aws_vpc.custom_vpc.default_route_table_id
  destination_cidr_block = "0.0.0.0/0"
  nat_gateway_id         = aws_nat_gateway.main.id
}

Security Groups

# Security group for public subnet resources
resource "aws_security_group" "public_sg" {
  name   = "public-sg"
  vpc_id = aws_vpc.custom_vpc.id

  tags = {
    Name = "public-sg"
  }
}

# Security group traffic rules
## Ingress rules
resource "aws_security_group_rule" "sg_ingress_public_443" {
  security_group_id = aws_security_group.public_sg.id
  type              = "ingress"
  from_port         = 443
  to_port           = 443
  protocol          = "tcp"
  cidr_blocks       = ["0.0.0.0/0"]
}

resource "aws_security_group_rule" "sg_ingress_public_80" {
  security_group_id = aws_security_group.public_sg.id
  type              = "ingress"
  from_port         = 80
  to_port           = 80
  protocol          = "tcp"
  cidr_blocks       = ["0.0.0.0/0"]
}

## Egress rule
resource "aws_security_group_rule" "sg_egress_public" {
  security_group_id = aws_security_group.public_sg.id
  type              = "egress"
  from_port         = 0
  to_port           = 0
  protocol          = "-1"
  cidr_blocks       = ["0.0.0.0/0"]
}

# Security group for the data plane
resource "aws_security_group" "data_plane_sg" {
  name   = "k8s-data-plane-sg"
  vpc_id = aws_vpc.custom_vpc.id

  tags = {
    Name = "k8s-data-plane-sg"
  }
}

# Security group traffic rules
## Ingress rules
resource "aws_security_group_rule" "nodes" {
  description       = "Allow nodes to communicate with each other"
  security_group_id = aws_security_group.data_plane_sg.id
  type              = "ingress"
  from_port         = 0
  to_port           = 65535
  protocol          = "-1"
  cidr_blocks       = flatten([var.private_subnet_cidr_blocks, var.public_subnet_cidr_blocks])
}

resource "aws_security_group_rule" "nodes_inbound" {
  description       = "Allow worker Kubelets and pods to receive communication from the cluster control plane"
  security_group_id = aws_security_group.data_plane_sg.id
  type              = "ingress"
  from_port         = 1025
  to_port           = 65535
  protocol          = "tcp"
  cidr_blocks       = flatten([var.private_subnet_cidr_blocks])
}

## Egress rule
resource "aws_security_group_rule" "node_outbound" {
  security_group_id = aws_security_group.data_plane_sg.id
  type              = "egress"
  from_port         = 0
  to_port           = 0
  protocol          = "-1"
  cidr_blocks       = ["0.0.0.0/0"]
}

# Security group for the control plane
resource "aws_security_group" "control_plane_sg" {
  name   = "k8s-control-plane-sg"
  vpc_id = aws_vpc.custom_vpc.id

  tags = {
    Name = "k8s-control-plane-sg"
  }
}

# Security group traffic rules
## Ingress rule
resource "aws_security_group_rule" "control_plane_inbound" {
  security_group_id = aws_security_group.control_plane_sg.id
  type              = "ingress"
  from_port         = 0
  to_port           = 65535
  protocol          = "tcp"
  cidr_blocks       = flatten([var.private_subnet_cidr_blocks, var.public_subnet_cidr_blocks])
}

## Egress rule
resource "aws_security_group_rule" "control_plane_outbound" {
  security_group_id = aws_security_group.control_plane_sg.id
  type              = "egress"
  from_port         = 0
  to_port           = 65535
  protocol          = "-1"
  cidr_blocks       = ["0.0.0.0/0"]
}

Variables for Network Resources

variable "eks_cluster_name" {
  description = "The name of the EKS cluster"
  type = string
}

variable "vpc_tag_name" {
  type        = string
  description = "Name tag for the VPC"
}

variable "route_table_tag_name" {
  type        = string
  default     = "main"
  description = "Route table description"
}

variable "vpc_cidr_block" {
  type        = string
  default     = "10.0.0.0/16"
  description = "CIDR block range for vpc"
}

variable "private_subnet_cidr_blocks" {
  type        = list(string)
  default     = ["10.0.0.0/24", "10.0.1.0/24"]
  description = "CIDR block range for the private subnet"
}

variable "public_subnet_cidr_blocks" {
  type = list(string)
  default     = ["10.0.2.0/24", "10.0.3.0/24"]
  description = "CIDR block range for the public subnet"
}

variable "private_subnet_tag_name" {
  type        = string
  default = "Custom Kubernetes cluster private subnet"
  description = "Name tag for the private subnet"
}

variable "public_subnet_tag_name" {
  type        = string
  default = "Custom Kubernetes cluster public subnet"
  description = "Name tag for the public subnet"
}

variable "availability_zones" {
  type  = list(string)
  default = ["eu-west-1a", "eu-west-1b"]
  description = "List of availability zones for the selected region"
}

variable "region" {
  description = "aws region to deploy to"
  type        = string
}

EKS Cluster with Managed Node Groups

The following resource declarations are specific to provisioning the EKS cluster and managed node groups. In addition to this, you’ll have to create IAM roles with relevant policy permissions to allow the different interacting services to carry out the correct actions.

EKS Cluster

resource "aws_eks_cluster" "main" {
  name     = var.eks_cluster_name
  role_arn = aws_iam_role.eks_cluster.arn

  vpc_config {
    security_group_ids      = [aws_security_group.eks_cluster.id, aws_security_group.eks_nodes.id]
    endpoint_private_access = var.endpoint_private_access
    endpoint_public_access  = var.endpoint_public_access
    subnet_ids              = var.eks_cluster_subnet_ids
  }

  # Ensure that IAM Role permissions are created before and deleted after EKS Cluster handling.
  # Otherwise, EKS will not be able to properly delete EKS managed EC2 infrastructure such as Security Groups.
  depends_on = [
    aws_iam_role_policy_attachment.aws_eks_cluster_policy
  ]
}

EKS Cluster IAM Role

#https://docs.aws.amazon.com/eks/latest/userguide/service_IAM_role.html

resource "aws_iam_role" "eks_cluster" {
  name = "${var.eks_cluster_name}-cluster"

  assume_role_policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "eks.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
POLICY
}

resource "aws_iam_role_policy_attachment" "aws_eks_cluster_policy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
  role       = aws_iam_role.eks_cluster.name
}

EKS Cluster Security Group

resource "aws_security_group" "eks_cluster" {
  name        = var.cluster_sg_name
  description = "Cluster communication with worker nodes"
  vpc_id      = var.vpc_id

  tags = {
    Name = var.cluster_sg_name
  }
}

resource "aws_security_group_rule" "cluster_inbound" {
  description              = "Allow worker nodes to communicate with the cluster API Server"
  from_port                = 443
  protocol                 = "tcp"
  security_group_id        = aws_security_group.eks_cluster.id
  source_security_group_id = aws_security_group.eks_nodes.id
  to_port                  = 443
  type                     = "ingress"
}

resource "aws_security_group_rule" "cluster_outbound" {
  description              = "Allow cluster API Server to communicate with the worker nodes"
  from_port                = 1024
  protocol                 = "tcp"
  security_group_id        = aws_security_group.eks_cluster.id
  source_security_group_id = aws_security_group.eks_nodes.id
  to_port                  = 65535
  type                     = "egress"
}

EKS Node Groups

# Nodes in private subnets
resource "aws_eks_node_group" "main" {
  cluster_name    = aws_eks_cluster.main.name
  node_group_name = var.node_group_name
  node_role_arn   = aws_iam_role.eks_nodes.arn
  subnet_ids      = var.private_subnet_ids

  ami_type       = var.ami_type
  disk_size      = var.disk_size
  instance_types = var.instance_types

  scaling_config {
    desired_size = var.pvt_desired_size
    max_size     = var.pvt_max_size
    min_size     = var.pvt_min_size
  }

  tags = {
    Name = var.node_group_name
  }

  # Ensure that IAM Role permissions are created before and deleted after EKS Node Group handling.
  # Otherwise, EKS will not be able to properly delete EC2 Instances and Elastic Network Interfaces.
  depends_on = [
    aws_iam_role_policy_attachment.aws_eks_worker_node_policy,
    aws_iam_role_policy_attachment.aws_eks_cni_policy,
    aws_iam_role_policy_attachment.ec2_read_only,
  ]
}

# Nodes in public subnet
resource "aws_eks_node_group" "public" {
  cluster_name    = aws_eks_cluster.main.name
  node_group_name = "${var.node_group_name}-public"
  node_role_arn   = aws_iam_role.eks_nodes.arn
  subnet_ids      = var.public_subnet_ids

  ami_type       = var.ami_type
  disk_size      = var.disk_size
  instance_types = var.instance_types

  scaling_config {
    desired_size = var.pblc_desired_size
    max_size     = var.pblc_max_size
    min_size     = var.pblc_min_size
  }

  tags = {
    Name = "${var.node_group_name}-public"
  }

  # Ensure that IAM Role permissions are created before and deleted after EKS Node Group handling.
  # Otherwise, EKS will not be able to properly delete EC2 Instances and Elastic Network Interfaces.
  depends_on = [
    aws_iam_role_policy_attachment.aws_eks_worker_node_policy,
    aws_iam_role_policy_attachment.aws_eks_cni_policy,
    aws_iam_role_policy_attachment.ec2_read_only,
  ]
}

EKS Node IAM Role

#https://docs.aws.amazon.com/eks/latest/userguide/worker_node_IAM_role.html

resource "aws_iam_role" "eks_nodes" {
  name                 = "${var.eks_cluster_name}-worker"

  assume_role_policy = data.aws_iam_policy_document.assume_workers.json
}

data "aws_iam_policy_document" "assume_workers" {
  statement {
    effect = "Allow"

    actions = ["sts:AssumeRole"]

    principals {
      type        = "Service"
      identifiers = ["ec2.amazonaws.com"]
    }
  }
}

resource "aws_iam_role_policy_attachment" "aws_eks_worker_node_policy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
  role       = aws_iam_role.eks_nodes.name
}

resource "aws_iam_role_policy_attachment" "aws_eks_cni_policy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
  role       = aws_iam_role.eks_nodes.name
}

resource "aws_iam_role_policy_attachment" "ec2_read_only" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
  role       = aws_iam_role.eks_nodes.name
}

EKS Node Security Group

resource "aws_security_group" "eks_nodes" {
  name        = var.nodes_sg_name
  description = "Security group for all nodes in the cluster"
  vpc_id      = var.vpc_id

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name                                        = var.nodes_sg_name
    "kubernetes.io/cluster/${var.eks_cluster_name}" = "owned"
  }
}

resource "aws_security_group_rule" "nodes" {
  description              = "Allow nodes to communicate with each other"
  from_port                = 0
  protocol                 = "-1"
  security_group_id        = aws_security_group.eks_nodes.id
  source_security_group_id = aws_security_group.eks_nodes.id
  to_port                  = 65535
  type                     = "ingress"
}

resource "aws_security_group_rule" "nodes_inbound" {
  description              = "Allow worker Kubelets and pods to receive communication from the cluster control plane"
  from_port                = 1025
  protocol                 = "tcp"
  security_group_id        = aws_security_group.eks_nodes.id
  source_security_group_id = aws_security_group.eks_cluster.id
  to_port                  = 65535
  type                     = "ingress"
}

Variables for EKS Resources

variable "eks_cluster_name" {
  description = "The name of the EKS cluster"
  type = string
}

variable "node_group_name" {
  description = "Name of the Node Group"
  type = string
}


variable "endpoint_private_access" {
  type = bool
  default = true
  description = "Indicates whether or not the Amazon EKS private API server endpoint is enabled."
}

variable "endpoint_public_access" {
  type = bool
  default = true
  description = "Indicates whether or not the Amazon EKS public API server endpoint is enabled."
}

variable "eks_cluster_subnet_ids" {
  type = list(string)
  description = "List of subnet IDs. Must be in at least two different availability zones. Amazon EKS creates cross-account elastic network interfaces in these subnets to allow communication between your worker nodes and the Kubernetes control plane."
}

variable "private_subnet_ids" {
  type = list(string)
  description = "List of private subnet IDs."
}

variable "public_subnet_ids" {
  type = list(string)
  description = "List of public subnet IDs."
}

variable "ami_type" {
  description = "Type of Amazon Machine Image (AMI) associated with the EKS Node Group. Defaults to AL2_x86_64. Valid values: AL2_x86_64, AL2_x86_64_GPU."
  type = string 
  default = "AL2_x86_64"
}

variable "disk_size" {
  description = "Disk size in GiB for worker nodes. Defaults to 20."
  type = number
  default = 20
}

variable "instance_types" {
  type = list(string)
  default = ["t3.medium"]
  description = "Set of instance types associated with the EKS Node Group."
}

variable "pvt_desired_size" {
  description = "Desired number of worker nodes in private subnet"
  default = 1
  type = number
}

variable "pvt_max_size" {
  description = "Maximum number of worker nodes in private subnet."
  default = 1
  type = number
}

variable "pvt_min_size" {
  description = "Minimum number of worker nodes in private subnet."
  default = 1
  type = number
}

variable "pblc_desired_size" {
  description = "Desired number of worker nodes in public subnet"
  default = 1
  type = number
}

variable "pblc_max_size" {
  description = "Maximum number of worker nodes in public subnet."
  default = 1
  type = number
}

variable "pblc_min_size" {
  description = "Minimum number of worker nodes in public subnet."
  default = 1
  type = number
}

variable cluster_sg_name {
  description = "Name of the EKS cluster Security Group"
  type        = string
}

variable nodes_sg_name {
  description = "Name of the EKS node group Security Group"
  type        = string
}

variable vpc_id {
  description = "VPC ID from which belongs the subnets"
  type        = string
}

Provision Infrastructure

Review the main.tf in the root directory of the repository and update the node size configurations (desired, maximum, and minimum) based on your requirements; a hypothetical sketch of such a root main.tf follows the command list below. When you’re ready, run the following commands:

  1. terraform init: Initialize the project, set up state persistence (local or remote), and download the provider plugins.
  2. terraform plan: Print an execution plan for reaching the desired state, without changing any state.
  3. terraform apply: Print the execution plan and, after confirmation, execute it to provision the changes.
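
For reference, a root main.tf wiring these modules together might look roughly like the sketch below. The module paths, output names, and values are assumptions for illustration; the cloned repository is the source of truth:

provider "aws" {
  region = var.region
}

# Hypothetical module layout; adjust paths and names to match the repository
module "vpc" {
  source           = "./modules/vpc"
  eks_cluster_name = var.eks_cluster_name
  vpc_tag_name     = "custom-k8s-vpc"
}

module "eks" {
  source                 = "./modules/eks"
  eks_cluster_name       = var.eks_cluster_name
  node_group_name        = "custom-k8s-node-group"
  vpc_id                 = module.vpc.vpc_id
  eks_cluster_subnet_ids = module.vpc.public_subnet_ids
  private_subnet_ids     = module.vpc.private_subnet_ids
  public_subnet_ids      = module.vpc.public_subnet_ids
  cluster_sg_name        = "k8s-cluster-sg"
  nodes_sg_name          = "k8s-nodes-sg"

  # Node group sizing: tune these to your requirements
  pvt_desired_size  = 1
  pvt_max_size      = 2
  pvt_min_size      = 1
  pblc_desired_size = 1
  pblc_max_size     = 2
  pblc_min_size     = 1
}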

[Screenshots: terraform apply output and the provisioned EKS cluster]

Connecting to the Cluster

Using the same AWS account profile that provisioned the infrastructure, you can connect to your cluster by updating your local kubeconfig:

aws eks --region <region> update-kubeconfig --name <cluster-name>
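
Assuming your credentials are set up correctly, you can confirm connectivity by running kubectl get nodes, which should list the worker nodes once they have joined the cluster.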

Mapping IAM Users and Roles to the EKS Cluster

If you want to map additional IAM users or roles to your Kubernetes cluster, you’ll have to update the aws-auth ConfigMap by adding the respective ARN and a Kubernetes username value to the mapRoles or mapUsers property as an array item (replace the <account-id> and role placeholders below with your own values):

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::<account-id>:role/<node-instance-role>
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
    - rolearn: arn:aws:iam::<account-id>:role/ops-role
      username: ops-role
  mapUsers: |
    - userarn: arn:aws:iam::<account-id>:user/developer-user
      username: developer-user

When you’re done with modifications to the aws-auth ConfigMap, you can run kubectl apply -f aws-auth.yaml. An example of this manifest file exists in the raw-manifests directory.

For a more detailed explanation, check out “Secure an Amazon EKS Cluster with IAM & RBAC” on Medium.

Deploy Application

To deploy a simple application to your cluster, change into the raw-manifests directory and apply the pod.yaml and service.yaml manifest files to create a Pod, then expose the application with a LoadBalancer Service:

  1. kubectl apply -f service.yaml
  2. kubectl apply -f pod.yaml

[Screenshots: creating the Kubernetes resources and the application running]

Bonus: From kOps to Infrastructure as Code

Apart from Terraform, kOps is a tool that makes it easy to provision production-grade Kubernetes clusters, and it comes with the added benefit of cloud infrastructure provisioning capabilities.

If using Terraform is a core part of your development lifecycle, you can make use of kOps to generate Terraform source code for provisioning a Kubernetes cluster in AWS.

kops create cluster \
  --name=kubernetes.mydomain.com \
  --state=s3://mycompany.kubernetes \
  --dns-zone=kubernetes.mydomain.com \
  [... your other options ...] \
  --out=. \
  --target=terraform

The above command will generate a kOps state file in S3 and output a representation of your configuration into Terraform files, which you can then use to create the resources through the usual terraform init and terraform apply workflow.

Conclusion

A managed Kubernetes service like EKS eliminates much of the complexity and overhead of provisioning and optimizing a Kubernetes control plane for software teams. Automation technology like IaC enhances the infrastructure management lifecycle with a host of benefits. However, Kubernetes cluster management in AWS can have significant cost implications if there are no tools in place to provide relevant insight, monitoring, and reporting for optimal usage.

CloudForecast’s Barometer helps companies and software teams by providing daily monitoring reports, anomaly cost detection, granular cost allocations, and visibility into cluster CPU and memory usage in relation to your AWS cost.

Author Lukonde Mwila
Lukonde is a Principal Technical Evangelist at SUSE and is an AWS Container Hero. He specializes in cloud and DevOps engineering and cloud-native technologies. He is passionate about sharing knowledge through various mediums and engaging with the developer community at large.
