- EKS cluster with EBS, EFS, and ALB
This runs from a container image, so it assumes you have a container runtime on your machine, such as Docker or Podman.
I used the image icr.io/continuous-delivery/pipeline/pipeline-base-ubi:3.15,
but any other Linux-based distribution should work.
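For example, a throwaway shell in that image can be started like this (a minimal sketch; podman accepts the same arguments, and the volume mount is only a convenience for keeping files on the host):
docker run -it --rm \
-v "${PWD}:/work" -w /work \
icr.io/continuous-delivery/pipeline/pipeline-base-ubi:3.15 \
/bin/bash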
export AWS_ACCESS_KEY_ID=...
export AWS_SECRET_ACCESS_KEY=...
export AWS_REGION=...
cluster_name=#...
cluster_region=${AWS_REGION}
hosted_zone_domain=#...
dns_domain=#...${hosted_zone_domain:?}
The installation needs the following CLIs:
- aws
- eksctl
- curl
- git
- kubectl
- helm
- unzip
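A quick sanity check that all of them are available on the PATH:
for cli in aws eksctl curl git kubectl helm unzip; do
command -v "${cli}" > /dev/null || echo "missing: ${cli}"
done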
Extracted from https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
./aws/install
https://docs.aws.amazon.com/eks/latest/userguide/eksctl.html
curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
mv /tmp/eksctl /usr/local/bin
eksctl version
An alternative, unexplored possibility: https://github.com/aws-samples/amazon-eks-refarch-cloudformation
Right now, both the VPC and the EKS cluster are created with eksctl:
eksctl create cluster \
--region "${cluster_region:?}" \
--name "${cluster_name:?}" \
--with-oidc \
--full-ecr-access \
--nodegroup-name=workers \
--node-ami-family AmazonLinux2 \
--node-type m5.2xlarge \
--nodes 3
# For autoscaling
# --nodes-min 2 \
# --nodes-max 4
aws eks update-kubeconfig \
--region "${cluster_region:?}" \
--name "${cluster_name:?}" \
&& kubectl get namespaces
Example output
NAME STATUS AGE
default Active 20m
kube-node-lease Active 20m
kube-public Active 20m
kube-system Active 20m
This process generates a new kubeconfig file to be handed to another user.
That other user will then need to export the KUBECONFIG variable to point at this file.
To generate the new configuration file:
user=cluster-admin
export KUBECONFIG="${HOME}/.kube/config"
aws eks update-kubeconfig \
--region "${cluster_region:?}" \
--name "${cluster_name:?}" \
&& kubectl get namespaces
cluster_arn=$(aws eks describe-cluster \
--region ${cluster_region:?} \
--name ${cluster_name:?} \
--query 'cluster.arn' \
--output text)
cluster_endpoint=$(aws eks describe-cluster \
--region ${cluster_region:?} \
--name ${cluster_name:?} \
--query 'cluster.endpoint' \
--output text)
kubectl get serviceaccount "${user:?}" 2> /dev/null \
|| kubectl create serviceaccount "${user}"
kubectl get clusterrolebinding "${user}-binding" 2> /dev/null \
|| kubectl create clusterrolebinding "${user}-binding" \
--clusterrole "${user}" \
--serviceaccount default:cluster-admin
kube_token=$(kubectl create token "${user}" --duration=100h)
export KUBECONFIG="${HOME}/kubeconfig-${cluster_name:?}"
rm -rf "${KUBECONFIG:?}"
kubectl config set-cluster "${cluster_arn}" \
--server "${cluster_endpoint}" \
--insecure-skip-tls-verify=true
kubectl config set-context "${cluster_arn}" \
--cluster "${cluster_arn}" \
--user cluster-admin
kubectl config use-context "${cluster_arn}"
kubectl config set-credentials cluster-admin --token="${kube_token}"
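Before sharing the file, it is worth a quick check that the generated kubeconfig and token actually work:
kubectl get namespaces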
Now share the file in $KUBECONFIG with the user. Keep in mind that this file contains an administrative bearer token for the cluster, so handle it with the same care as any other credential or password.
The user, after making a local copy of the file, should be able to export their local KUBECONFIG
environment variable to match the full pathname of the file and use the kubectl
CLI to interact with the cluster.
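For example (the pathname below is a placeholder for wherever the user saved their copy):
export KUBECONFIG=/path/to/kubeconfig-<cluster-name>
kubectl get namespaces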
References:
- https://docs.aws.amazon.com/eks/latest/userguide/ebs-csi.html
- https://docs.aws.amazon.com/eks/latest/userguide/storage-classes.html
- https://docs.aws.amazon.com/cli/latest/reference/eks/create-addon.html
Extracted from: https://docs.aws.amazon.com/eks/latest/userguide/enable-iam-roles-for-service-accounts.html
The cluster needs an OIDC provider. Note that when the cluster is created with the --with-oidc flag, the block below is redundant.
oidc_id=$(aws eks describe-cluster \
--region "${cluster_region}" \
--name "${cluster_name}" \
--query "cluster.identity.oidc.issuer" \
--output text | cut -d '/' -f 5)
oidc_provider=$(aws iam list-open-id-connect-providers \
| grep $oidc_id \
| cut -d "/" -f4)
if [ -z "${oidc_provider}" ]; then
eksctl utils associate-iam-oidc-provider \
--region "${cluster_region}" \
--cluster "${cluster_name}" \
--approve
oidc_provider=$(aws iam list-open-id-connect-providers \
--region "${cluster_region}" \
| grep $oidc_id | cut -d "/" -f4)
echo "OIDC Provider ${oidc_provider}"
fi
Extracted from https://docs.aws.amazon.com/eks/latest/userguide/csi-iam-role.html.
role_name=AmazonEKS_EBS_CSI_DriverRole
eksctl create iamserviceaccount \
--region "${cluster_region}" \
--cluster "${cluster_name}" \
--name ebs-csi-controller-sa \
--namespace kube-system \
--attach-policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy \
--approve \
--role-only \
--role-name "${role_name}" \
--override-existing-serviceaccounts
role_arn=$(aws iam get-role \
--region "${cluster_region}" \
--role-name "${role_name}" \
--query 'Role.Arn' \
--output text)
aws eks create-addon \
--region "${cluster_region}" \
--cluster-name "${cluster_name}" \
--addon-name aws-ebs-csi-driver \
--addon-version v1.16.1-eksbuild.1 \
--service-account-role-arn "${role_arn}" \
&& aws eks wait addon-active \
--region "${cluster_region}" \
--cluster-name "${cluster_name}" \
--addon-name aws-ebs-csi-driver \
&& aws eks describe-addon \
--region "${cluster_region}" \
--cluster-name "${cluster_name}" \
--addon-name aws-ebs-csi-driver
Deploy a sample application and verify that the CSI driver is working
cd /tmp
rm -rf aws-ebs-csi-driver
git clone https://github.com/kubernetes-sigs/aws-ebs-csi-driver.git
cd aws-ebs-csi-driver/examples/kubernetes/dynamic-provisioning/
kubectl apply -f manifests/
kubectl describe storageclass ebs-sc
kubectl get pv
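If provisioning worked, the claim is bound and the sample pod (named app in that example, appending timestamps to /data/out.txt) is writing to the new EBS volume; assuming that layout:
kubectl get pvc
kubectl exec app -- cat /data/out.txt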
Extracted from https://docs.aws.amazon.com/eks/latest/userguide/efs-csi.html
Extracted from https://docs.aws.amazon.com/eks/latest/userguide/enable-iam-roles-for-service-accounts.html.
The cluster needs an OIDC provider. Note that when the cluster is created with the --with-oidc flag, the block below is redundant.
oidc_id=$(aws eks describe-cluster \
--region "${cluster_region}" \
--name "${cluster_name}" \
--query "cluster.identity.oidc.issuer" \
--output text | cut -d '/' -f 5)
oidc_provider=$(aws iam list-open-id-connect-providers \
--region "${cluster_region}" \
| grep $oidc_id | cut -d "/" -f4)
if [ -z "${oidc_provider}" ]; then
eksctl utils associate-iam-oidc-provider \
--region "${cluster_region}" \
--cluster "${cluster_name}" \
--approve
oidc_provider=$(aws iam list-open-id-connect-providers \
--region "${cluster_region}" \
| grep $oidc_id | cut -d "/" -f4)
echo "OIDC Provider ${oidc_provider}"
fi
cd /tmp
curl -O https://raw.githubusercontent.com/kubernetes-sigs/aws-efs-csi-driver/master/docs/iam-policy-example.json
efs_policy_name=AmazonEKS_EFS_CSI_Driver_Policy
policy_arn=$(aws iam list-policies \
--scope Local \
--query "Policies[?PolicyName=='${efs_policy_name}'].Arn" \
--output text)
if [ -z "${efs_policy_name}" ]; then
aws iam create-policy \
--policy-name "${efs_policy_name:?}" \
--policy-document file://iam-policy-example.json
policy_arn=$(aws iam list-policies \
--scope Local \
--query "Policies[?PolicyName=='${efs_policy_name}'].Arn" \
--output text)
fi
eksctl create iamserviceaccount \
--region "${cluster_region}" \
--cluster "${cluster_name}" \
--namespace kube-system \
--name efs-csi-controller-sa \
--attach-policy-arn "${policy_arn:?}" \
--approve
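The service account should now carry an eks.amazonaws.com/role-arn annotation pointing at the role created by eksctl; a quick way to confirm:
kubectl describe serviceaccount efs-csi-controller-sa -n kube-system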
The container image repository account ID is region-specific; look it up for your region at https://docs.aws.amazon.com/eks/latest/userguide/add-ons-images.html.
# this is the id for the us-east-2 region
image_repository_id=602401143452
helm repo add aws-efs-csi-driver https://kubernetes-sigs.github.io/aws-efs-csi-driver/
helm repo update
helm upgrade -i aws-efs-csi-driver aws-efs-csi-driver/aws-efs-csi-driver \
--namespace kube-system \
--set image.repository=${image_repository_id:?}.dkr.ecr.${cluster_region:?}.amazonaws.com/eks/aws-efs-csi-driver \
--set controller.serviceAccount.create=false \
--set controller.serviceAccount.name=efs-csi-controller-sa
kubectl get pod -n kube-system -l "app.kubernetes.io/name=aws-efs-csi-driver,app.kubernetes.io/instance=aws-efs-csi-driver"
vpc_id=$(aws eks describe-cluster \
--region "${cluster_region:?}" \
--name "${cluster_name:?}" \
--query "cluster.resourcesVpcConfig.vpcId" \
--output text)
cidr_range=$(aws ec2 describe-vpcs \
--region "${cluster_region}" \
--vpc-ids "${vpc_id:?}" \
--query "Vpcs[].CidrBlock" \
--output text)
security_group_id=$(aws ec2 create-security-group \
--region "${cluster_region}" \
--vpc-id "${vpc_id:?}" \
--group-name MyEfsSecurityGroup \
--description "My EFS security group" \
--output text)
aws ec2 authorize-security-group-ingress \
--region "${cluster_region}" \
--group-id "${security_group_id:?}" \
--protocol tcp \
--port 2049 \
--cidr "${cidr_range:?}" \
local filesystem_name="aws-fs-${cluster_name}" \
file_system_id=$(aws efs create-file-system \
--region "${cluster_region}" \
--creation-token "${filesystem_name}" \
--tags Key=Name,Value="${filesystem_name}"
--performance-mode generalPurpose \
--query 'FileSystemId' \
--output text)
aws ec2 describe-subnets \
--region "${cluster_region}" \
--filters "Name=vpc-id,Values=${vpc_id}" \
--query 'Subnets[*].SubnetId' \
--output text | tr -s "\\t" "\\n" \
| xargs -I {} aws efs create-mount-target \
--region "${cluster_region}" \
--file-system-id "${file_system_id:?}" \
--subnet-id {} \
--security-groups "${security_group_id}"
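The mount targets take a short while to become available; their state can be checked with the same describe-mount-targets call used later for teardown:
aws efs describe-mount-targets \
--region "${cluster_region}" \
--file-system-id "${file_system_id:?}" \
--query 'MountTargets[*].[MountTargetId,LifeCycleState]' \
--output table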
curl -sL https://raw.githubusercontent.com/kubernetes-sigs/aws-efs-csi-driver/master/examples/kubernetes/dynamic_provisioning/specs/storageclass.yaml \
| sed "s/fileSystemId:.*/fileSystemId: ${file_system_id:?}/" \
| kubectl apply -f -
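To exercise dynamic provisioning against the new storage class (named efs-sc in that upstream example), here is a minimal sketch of a claim plus a pod; the efs-test-claim and efs-test-app names are arbitrary:
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-test-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: efs-test-app
spec:
  containers:
    - name: app
      image: busybox
      command: ["/bin/sh", "-c", "while true; do date >> /data/out; sleep 5; done"]
      volumeMounts:
        - name: efs-volume
          mountPath: /data
  volumes:
    - name: efs-volume
      persistentVolumeClaim:
        claimName: efs-test-claim
EOF
kubectl get pvc efs-test-claim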
Extracted from https://docs.aws.amazon.com/eks/latest/userguide/aws-load-balancer-controller.html
alb_policy_name=AWSLoadBalancerControllerIAMPolicy
alb_policy_arn=$(aws iam list-policies \
--scope Local \
--query "Policies[?PolicyName=='${alb_policy_name}'].Arn" \
--output text)
if [ -z "${alb_policy_arn}" ]; then
curl -O https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.4.7/docs/install/iam_policy.json
aws iam create-policy \
--policy-name "${alb_policy_name}" \
--policy-document file://iam_policy.json
alb_policy_arn=$(aws iam list-policies \
--scope Local \
--query "Policies[?PolicyName=='${alb_policy_name:?}'].Arn" \
--output text)
fi
eksctl create iamserviceaccount \
--region "${cluster_region:?}" \
--cluster "${cluster_name:?}" \
--namespace=kube-system \
--name=aws-load-balancer-controller \
--role-name AmazonEKSLoadBalancerControllerRole \
--attach-policy-arn="${alb_policy_arn:?}" \
--approve
helm repo add eks https://aws.github.io/eks-charts
helm repo update
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
-n kube-system \
--set clusterName="${cluster_name:?}" \
--set serviceAccount.create=false \
--set serviceAccount.name=aws-load-balancer-controller
kubectl get deployment -n kube-system aws-load-balancer-controller
Extracted from https://docs.aws.amazon.com/eks/latest/userguide/alb-ingress.html.
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.4.7/docs/examples/2048/2048_full.yaml
sleep 60
kubectl get ingress/ingress-2048 -n game-2048
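The ALB can take a few minutes to become reachable; once the ingress reports a hostname, a basic connectivity check looks like this:
lb_hostname=$(kubectl get ingress/ingress-2048 -n game-2048 \
-o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
curl -s -o /dev/null -w "%{http_code}\n" "http://${lb_hostname:?}"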
The overall procedure is to create a hosted zone dedicated to the cluster, then fill in the records needed for the ADS installation.
zone_id=$(aws route53 list-hosted-zones \
--query "HostedZones[?Name==\`${hosted_zone_domain:?}.\`].Id" \
--output text \
| cut -d "/" -f 3)
if [ -z "${zone_id}" ]; then
caller_reference="$(date +%s)"
aws route53 create-hosted-zone \
--name "${dns_domain:?}" \
--caller-reference "${caller_reference}" \
--hosted-zone-config Comment="Hosted zone for ADS installation"
zone_id=$(aws route53 list-hosted-zones \
--query "HostedZones[?Name==\`${dns_domain}.\`].Id" \
--output text \
| cut -d "/" -f 3)
fi
k8s_hostname=game-2048
k8s_ingress_stack="game-2048/ingress-2048"
k8s_alb_arn=$(aws resourcegroupstaggingapi get-resources \
--region "${cluster_region:?}" \
--resource-type-filters elasticloadbalancing:loadbalancer \
--tag-filters "Key=elbv2.k8s.aws/cluster,Values=${cluster_name}" "Key=ingress.k8s.aws/stack,Values=${k8s_ingress_stack:?}" \
--query 'ResourceTagMappingList[].ResourceARN' \
--output text)
alb_dns_zone=$(aws elbv2 describe-load-balancers \
--region "${cluster_region:?}" \
--load-balancer-arns "${k8s_alb_arn:?}" \
--query "LoadBalancers[*].[DNSName,CanonicalHostedZoneId]" \
--output text \
| tr -s "\\t" " ")
# alb_dns_zone holds "<dns-name> <hosted-zone-id>"; the expansions below split it on the space
alb_dns=${alb_dns_zone// */}
alb_zone=${alb_dns_zone//* /}
batch_payload=/tmp/batch_payload.json
cat <<EOF > "${batch_payload}"
{
"Comment": "Creating Alias resource record sets in Route 53",
"Changes": [
{
"Action": "CREATE",
"ResourceRecordSet": {
"Name": "${k8s_hostname}.${dns_domain}",
"Type": "A",
"AliasTarget": {
"HostedZoneId": "${alb_zone}",
"DNSName": "dualstack.${alb_dns}",
"EvaluateTargetHealth": true
}
}
}
]
}
EOF
aws route53 change-resource-record-sets \
--hosted-zone-id "${zone_id:?}" \
--change-batch "file://${batch_payload:?}"
Under construction: the teardown still needs to remove the Route 53, EFS, and EBS resources from the AWS account; the list of commands below is not final.
aws route53 change-resource-record-sets \
--hosted-zone-id "${zone_id:?}" \
--change-batch '{"Changes":[{"Action":"DELETE","ResourceRecordSet":{"Name":"ads-eks.somedomain.com.","Type":"A"}}]}'
aws efs describe-mount-targets \
--region "${cluster_region}" \
--query 'MountTargets[*].MountTargetId' \
--file-system-id ${file_system_id} \
--output text | tr -s "\\t" "\\n" \
| xargs -I {} aws efs delete-mount-target \
--region "${cluster_region}" \
--mount-target-id {}
aws efs delete-file-system \
--region "${cluster_region}" \
--file-system-id ${file_system_id}
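The EFS security group created during setup is not removed by eksctl and would block deletion of the VPC; assuming ${security_group_id} is still set from the setup steps (otherwise look up MyEfsSecurityGroup by name first):
aws ec2 delete-security-group \
--region "${cluster_region}" \
--group-id "${security_group_id:?}"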
eksctl delete cluster \
--region "${cluster_region:?}" \
--name "${cluster_name:?}"
aws resourcegroupstaggingapi get-resources \
--region "${cluster_region}" \
--resource-type-filters elasticloadbalancing:loadbalancer \
--tag-filters "Key=elbv2.k8s.aws/cluster,Values=${cluster_name}" \
--query 'ResourceTagMappingList[].ResourceARN' \
--output text | tr -s "\\t" "\\n" \
| xargs -I {} aws elbv2 delete-load-balancer \
--region "${cluster_region}" \
--load-balancer-arn {}