
This repository contains a Terraform module for deploying an Amazon EKS cluster on AWS as part of the GlueOps platform. It facilitates setting up VPCs, subnets, EKS clusters, node pools, and the necessary AWS resources for Kubernetes cluster deployment. It includes configurations for addons like CoreDNS and kube-proxy, and supports VPC peering.


terraform-module-cloud-aws-kubernetes-cluster

This Terraform module helps you quickly deploy an EKS cluster on Amazon Web Services (AWS). It is part of the opinionated GlueOps Platform. If you came here directly, you should probably visit https://github.com/glueops/admiral first, as that is the starting point.

Prerequisites to use this Terraform module

  • A dedicated AWS sub-account
  • A service account with the required environment variables set
  • Sufficient service quotas (depending on cluster size)

For more details see: https://github.com/GlueOps/terraform-module-cloud-aws-kubernetes-cluster/wiki/
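The service-account prerequisite above can be sanity-checked before running Terraform. This is a minimal sketch that assumes the standard AWS SDK environment variables (`AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `AWS_DEFAULT_REGION`); your service account may expect a different set, so adjust the list accordingly:

```python
import os

# Standard AWS SDK credential variables -- an assumption; edit to match
# whatever your service-account setup actually requires.
REQUIRED_VARS = ["AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY", "AWS_DEFAULT_REGION"]

def missing_aws_env(env=os.environ):
    """Return the required variables that are unset or empty."""
    return [name for name in REQUIRED_VARS if not env.get(name)]

# Report rather than fail, so this can run anywhere.
print("missing:", missing_aws_env())
```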

Example usage of module

module "captain" {
  iam_role_to_assume = "arn:aws:iam::1234567890:role/glueops-captain-role"
  source             = "git::https://github.com/GlueOps/terraform-module-cloud-aws-kubernetes-cluster.git"
  eks_version        = "1.30"
  csi_driver_version = "v1.37.0-eksbuild.1"
  coredns_version    = "v1.11.3-eksbuild.2"
  kube_proxy_version = "v1.30.6-eksbuild.3"
  vpc_cidr_block     = "10.65.0.0/26"
  region             = "us-west-2"
  availability_zones = ["us-west-2a", "us-west-2b"]
  node_pools = [
#    {
#      "kubernetes_version" : "1.30",
#      "ami_release_version" : "1.30.6-20241121",
#      "ami_type" : "AL2_x86_64",
#      "instance_type" : "t3a.large",
#      "name" : "glueops-platform-node-pool-1",
#      "node_count" : 4,
#      "spot" : false,
#      "disk_size_gb" : 20,
#      "max_pods" : 110,
#      "ssh_key_pair_names" : [],
#      "kubernetes_labels" : {
#        "glueops.dev/role" : "glueops-platform"
#      },
#      "kubernetes_taints" : [
#        {
#          key    = "glueops.dev/role"
#          value  = "glueops-platform"
#          effect = "NO_SCHEDULE"
#        }
#      ]
#    },
#    {
#      "kubernetes_version" : "1.30",
#      "ami_release_version" : "1.30.6-20241121",
#      "ami_type" : "AL2_x86_64",
#      "instance_type" : "t3a.small",
#      "name" : "glueops-platform-node-pool-argocd-app-controller-1",
#      "node_count" : 2,
#      "spot" : false,
#      "disk_size_gb" : 20,
#      "max_pods" : 110,
#      "ssh_key_pair_names" : [],
#      "kubernetes_labels" : {
#        "glueops.dev/role" : "glueops-platform-argocd-app-controller"
#      },
#      "kubernetes_taints" : [
#        {
#          key    = "glueops.dev/role"
#          value  = "glueops-platform-argocd-app-controller"
#          effect = "NO_SCHEDULE"
#        }
#      ]
#    },
#    {
#      "kubernetes_version" : "1.30",
#      "ami_release_version" : "1.30.6-20241121",
#      "ami_type" : "AL2_x86_64",
#      "instance_type" : "t3a.medium",
#      "name" : "clusterwide-node-pool-1",
#      "node_count" : 2,
#      "spot" : false,
#      "disk_size_gb" : 20,
#      "max_pods" : 110,
#      "ssh_key_pair_names" : [],
#      "kubernetes_labels" : {},
#      "kubernetes_taints" : []
#    }
  ]
  peering_configs = [
#    {
#    vpc_peering_connection_id = "pcx-0df92b5241651ba92"
#    destination_cidr_block = "10.69.0.0/26"
#    }
  ]
}
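Node pool names must be unique (see the `node_pools` input description below), so a quick local check before `terraform apply` can catch copy-paste mistakes when cloning pool blocks. A hedged sketch; the pool entries are illustrative only:

```python
from collections import Counter

def duplicate_pool_names(node_pools):
    """Return node pool names that appear more than once."""
    counts = Counter(pool["name"] for pool in node_pools)
    return sorted(name for name, n in counts.items() if n > 1)

# Illustrative pool data modeled on the example above.
pools = [
    {"name": "glueops-platform-node-pool-1", "node_count": 4},
    {"name": "clusterwide-node-pool-1", "node_count": 2},
]
print(duplicate_pool_names(pools))  # -> []
```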

VPC Peering

This Terraform module expects to act only as the accepter VPC, which means the VPC peering request must come from the requesting account. As the accepter, you must provide the requester with your VPC ID, your AWS account ID (the sub-account used for the cluster deployment), and the VPC CIDR you configured for the cluster deployment.

When providing them with the above, please ask them to enable DNS resolution of hosts within the requester VPC.
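AWS will not establish a peering connection between VPCs with overlapping CIDR blocks, so it is worth comparing the requester's CIDR against your `vpc_cidr_block` before exchanging details. A small sketch using Python's `ipaddress` module; the CIDR values mirror the examples in this README:

```python
import ipaddress

def cidrs_overlap(a: str, b: str) -> bool:
    """True if the two CIDR blocks share any addresses."""
    return ipaddress.ip_network(a).overlaps(ipaddress.ip_network(b))

# CIDRs from the example module configuration above.
print(cidrs_overlap("10.65.0.0/26", "10.69.0.0/26"))   # False: safe to peer
print(cidrs_overlap("10.65.0.0/26", "10.65.0.32/27"))  # True: peering would be rejected
```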

EFS/NFS Example Manifest

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv-test
spec:
  storageClassName: efs-fun-test
  capacity:
    storage: 1000Gi # Adjust based on your needs
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  mountOptions:
    - timeo=600
    - retrans=2
    - nfsvers=4.1
    - rsize=1048576
    - wsize=1048576
    - noresvport
    - hard
  nfs:
    path: /
    server: nfs.nonprod.antoniostacos.onglueops.com
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-fun-test
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
    - name: my-container
      image: nginx
      volumeMounts:
        - name: my-volume
          mountPath: /mnt/data  # Mount path within the container
          subPath: pod1-fun
  volumes:
    - name: my-volume
      persistentVolumeClaim:
        claimName: my-pvc  # Name of the PVC to be mounted
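The PVC above binds to the PV only when the `storageClassName` matches and the PV offers every access mode the PVC requests (capacity and other selectors also factor into binding, which this sketch ignores). A minimal local check, with dicts mirroring the manifest rather than the real Kubernetes binder:

```python
def can_bind(pv: dict, pvc: dict) -> bool:
    """True if storage classes match and the PV covers all requested access modes."""
    same_class = pv["storageClassName"] == pvc["storageClassName"]
    modes_ok = set(pvc["accessModes"]) <= set(pv["accessModes"])
    return same_class and modes_ok

# Values copied from the example manifest above.
pv = {"storageClassName": "efs-fun-test", "accessModes": ["ReadWriteMany"]}
pvc = {"storageClassName": "efs-fun-test", "accessModes": ["ReadWriteMany"]}
print(can_bind(pv, pvc))  # True
```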

Requirements

No requirements.

Providers

| Name | Version |
|------|---------|
| aws | n/a |

Modules

| Name | Source | Version |
|------|--------|---------|
| kubernetes | cloudposse/eks-cluster/aws | 3.0.0 |
| node_pool | cloudposse/eks-node-group/aws | 3.1.1 |
| subnets | cloudposse/dynamic-subnets/aws | 2.4.2 |
| vpc | cloudposse/vpc/aws | 2.2.0 |
| vpc_peering_accepter_with_routes | ./modules/vpc_peering_accepter_with_routes | n/a |

Resources

| Name | Type |
|------|------|
| aws_eks_addon.coredns | resource |
| aws_eks_addon.ebs_csi | resource |
| aws_eks_addon.kube_proxy | resource |
| aws_iam_role.eks_addon_ebs_csi_role | resource |
| aws_iam_role_policy_attachment.ebs_csi | resource |
| aws_security_group.captain | resource |
| aws_security_group_rule.allow_all_within_group | resource |
| aws_security_group_rule.captain_egress_all_ipv4 | resource |
| aws_iam_openid_connect_provider.provider | data source |
| aws_iam_policy_document.eks_assume_addon_role | data source |

Inputs

availability_zones (list(string)) — The availability zones to deploy into. Default: ["us-west-2a", "us-west-2b", "us-west-2c"]. Required: no.

coredns_version (string) — You should grab the appropriate version number from: https://docs.aws.amazon.com/eks/latest/userguide/managing-coredns.html. Default: "v1.11.3-eksbuild.2". Required: no.

csi_driver_version (string) — You should grab the appropriate version number from: https://github.com/kubernetes-sigs/aws-ebs-csi-driver/blob/master/CHANGELOG.md. Default: "v1.37.0-eksbuild.1". Required: no.

eks_version (string) — The version of EKS to deploy. Default: "1.30". Required: no.

iam_role_to_assume (string) — The full ARN of the IAM role to assume. Required: yes.

kube_proxy_version (string) — You should grab the appropriate version number from: https://docs.aws.amazon.com/eks/latest/userguide/managing-kube-proxy.html. Default: "v1.30.6-eksbuild.3". Required: no.

node_pools — Node pool configurations. Required: no. Each entry supports:
  - name (string): Name of the node pool. MUST BE UNIQUE! Recommended to use YYYYMMDD in the name.
  - node_count (number): Number of nodes to create in the node pool.
  - instance_type (string): Instance type to use for the nodes. ref: https://instances.vantage.sh/
  - kubernetes_version (string): Generally the same version as the EKS cluster, but during a node pool upgrade this may be a different version.
  - ami_release_version (string): AMI release version to use for EKS worker nodes. ref: https://github.com/awslabs/amazon-eks-ami/releases
  - ami_type (string): e.g. AL2_x86_64 (AMD64) or AL2_ARM_64 (ARM).
  - spot (bool): Enable spot instances for the nodes. DO NOT ENABLE IN PROD!
  - disk_size_gb (number): Disk size in GB for the nodes.
  - max_pods (number): Max pods that can be scheduled per node.
  - ssh_key_pair_names (list(string)): List of SSH key pair names to associate with the nodes. ref: https://us-west-2.console.aws.amazon.com/ec2/home?region=us-west-2#KeyPairs:
  - kubernetes_labels (map(string)): Map of labels to apply to the nodes. ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
  - kubernetes_taints (list(object)): List of taints to apply to the nodes. ref: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/

  Type:

    list(object({
      name                = string
      node_count          = number
      instance_type       = string
      kubernetes_version  = string
      ami_release_version = string
      ami_type            = string
      spot                = bool
      disk_size_gb        = number
      max_pods            = number
      ssh_key_pair_names  = list(string)
      kubernetes_labels   = map(string)
      kubernetes_taints = list(object({
        key    = string
        value  = string
        effect = string
      }))
    }))

  Default:

    [
      {
        "ami_release_version": "1.30.6-20241121",
        "ami_type": "AL2_x86_64",
        "disk_size_gb": 20,
        "instance_type": "t3a.large",
        "kubernetes_labels": {},
        "kubernetes_taints": [],
        "kubernetes_version": "1.30",
        "max_pods": 110,
        "name": "default-pool",
        "node_count": 1,
        "spot": false,
        "ssh_key_pair_names": []
      }
    ]

peering_configs (list(object({ vpc_peering_connection_id = string, destination_cidr_block = string }))) — A list of maps containing VPC peering configuration details. Default: []. Required: no.

region (string) — The AWS region to deploy into. Required: yes.

vpc_cidr_block (string) — The CIDR block for the VPC. Default: "10.65.0.0/26". Required: no.

Outputs

No outputs.
