Note: EKS SSP for Terraform is in active development and should be considered a pre-production framework. Backwards-incompatible Terraform changes are possible in future releases, and support is provided on a best-effort basis by the EKS SSP community.
Welcome to the Amazon EKS Shared Services Platform (SSP) for Terraform.
This repository contains the source code for a Terraform framework that aims to accelerate the delivery of a batteries-included, multi-tenant container platform on top of Amazon EKS. This framework can be used by AWS customers, partners, and internal AWS teams to implement the foundational structure of an SSP according to AWS best practices and recommendations.
This project leverages the community terraform-aws-eks modules to deploy EKS clusters.
The easiest way to get started with this framework is to follow our Getting Started guide.
For complete project documentation, please visit our documentation directory.
To view examples for how you can leverage this framework, see the examples directory.
The example below demonstrates how you can use this framework to deploy an EKS cluster, a managed node group, and various Kubernetes add-ons.
module "eks-ssp" {
source = "github.com/aws-samples/aws-eks-accelerator-for-terraform"
# EKS CLUSTER
kubernetes_version = "1.21"
vpc_id = "<vpcid>" # Enter VPC ID
private_subnet_ids = ["<subnet-a>", "<subnet-b>", "<subnet-c>"] # Enter Private Subnet IDs
# EKS MANAGED NODE GROUPS
managed_node_groups = {
mg_m4l = {
node_group_name = "managed-ondemand"
instance_types = ["m4.large"]
subnet_ids = ["<subnet-a>", "<subnet-b>", "<subnet-c>"]
}
}
}
# Deploy Kubernetes Add-ons with sub module
module "eks-ssp-kubernetes-addons" {
source = "github.com/aws-samples/aws-eks-accelerator-for-terraform//modules/kubernetes-addons"
eks_cluster_id = module.eks-ssp.eks_cluster_id
eks_oidc_issuer_url = module.eks-ssp.eks_oidc_issuer_url
eks_oidc_provider_arn = module.eks-ssp.eks_oidc_provider_arn
# EKS Addons
enable_amazon_eks_vpc_cni = true
enable_amazon_eks_coredns = true
enable_amazon_eks_kube_proxy = true
enable_amazon_eks_aws_ebs_csi_driver = true
#K8s Add-ons
enable_aws_load_balancer_controller = true
enable_metrics_server = true
enable_cluster_autoscaler = true
enable_aws_for_fluentbit = true
enable_argocd = true
enable_ingress_nginx = true
depends_on = [module.eks-ssp.managed_node_groups]
}
The code above will provision the following:
✅ A new EKS cluster with a managed node group.
✅ Amazon EKS add-ons: vpc-cni, CoreDNS, kube-proxy, and aws-ebs-csi-driver.
✅ Cluster Autoscaler and Metrics Server for scaling your workloads.
✅ Fluent Bit for routing logs.
✅ AWS Load Balancer Controller for distributing traffic.
✅ cert-manager for managing SSL/TLS certificates.
✅ Argo CD for declarative GitOps CD for Kubernetes.
✅ NGINX Ingress Controller for managing ingress.
This framework provides out-of-the-box support for a wide range of popular Kubernetes add-ons. By default, the Terraform Helm provider is used to deploy add-ons from publicly available Helm charts. The framework also supports self-hosted Helm charts, as sketched below.
For complete documentation on deploying add-ons, please visit our add-on documentation.
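As an illustration of pointing an add-on at a self-hosted chart repository, the sketch below overrides the chart source for the Metrics Server add-on. The `metrics_server_helm_config` variable name and its keys are assumptions based on the add-on configuration pattern used by this framework; check the add-on documentation for the exact schema.

```hcl
module "eks-ssp-kubernetes-addons" {
  source = "github.com/aws-samples/aws-eks-accelerator-for-terraform//modules/kubernetes-addons"

  eks_cluster_id = module.eks-ssp.eks_cluster_id

  enable_metrics_server = true

  # Assumed helm_config shape: point the add-on at a self-hosted Helm repository
  # instead of the public upstream chart.
  metrics_server_helm_config = {
    repository = "https://charts.example.internal" # hypothetical private Helm repo
    chart      = "metrics-server"
    version    = "3.8.2"
  }
}
```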
The root module calls several submodules that provide support for deploying and integrating a number of external AWS services that can be used in concert with Amazon EKS, such as Amazon Managed Prometheus and EMR on EKS; a brief sketch follows below.
For complete documentation on deploying external services, please visit our submodules documentation.
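As a brief sketch, the flags below (taken from the inputs listed later in this document) show how these integrations can be toggled from the root module. The structure of `emr_on_eks_teams` is illustrative only; consult the submodules documentation for the exact schema.

```hcl
module "eks-ssp" {
  source = "github.com/aws-samples/aws-eks-accelerator-for-terraform"

  vpc_id             = "<vpcid>"
  private_subnet_ids = ["<subnet-a>", "<subnet-b>", "<subnet-c>"]

  # Provision an Amazon Managed Prometheus workspace alongside the cluster
  enable_amazon_prometheus          = true
  amazon_prometheus_workspace_alias = "eks-ssp-metrics" # hypothetical alias

  # Register EMR on EKS teams; keys below are illustrative placeholders
  enable_emr_on_eks = true
  emr_on_eks_teams = {
    data_team_a = {
      namespace = "emr-data-team-a"
    }
  }
}
```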
The Amazon EKS SSP for Terraform allows customers to easily configure and deploy a multi-tenant, enterprise-ready container platform on top of EKS. With a large number of design choices, deploying a production-grade container platform can take a significant amount of time, involve integrating a wide range of AWS services and open source tools, and require a deep understanding of AWS and Kubernetes concepts.
This solution handles integrating EKS with popular open source and partner tools, in addition to AWS services, in order to allow customers to deploy a cohesive container platform that can be offered as a service to application teams. It provides out-of-the-box support for common operational tasks such as auto-scaling workloads, collecting logs and metrics from both clusters and running applications, managing ingress and egress, configuring network policy, managing secrets, deploying workloads via GitOps, and more. Customers can leverage the solution to deploy a container platform and start onboarding workloads in days, rather than months.
For architectural details, step-by-step instructions, and customization options, see our official documentation site.
To post feedback, submit feature ideas, or report bugs, use the Issues section of this GitHub repo.
To submit code for this Quick Start, see the AWS Quick Start Contributor's Kit.
Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. SPDX-License-Identifier: MIT-0
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
Name | Version |
---|---|
terraform | >= 1.0.0 |
aws | >= 3.66.0 |
helm | >= 2.4.1 |
http | 2.4.1 |
kubectl | >= 1.7.0 |
kubernetes | >= 2.7.1 |
local | 2.1.0 |
null | 3.1.0 |
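A minimal `terraform` block pinning the requirements above might look like the following. The provider source addresses (for example `gavinbunney/kubectl` for the kubectl provider and `hashicorp/http` for the http provider) are assumptions based on common community usage; verify them against the versions file in this repository.

```hcl
terraform {
  required_version = ">= 1.0.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 3.66.0"
    }
    helm = {
      source  = "hashicorp/helm"
      version = ">= 2.4.1"
    }
    http = {
      source  = "hashicorp/http" # assumed source; the framework pins version 2.4.1
      version = "2.4.1"
    }
    kubectl = {
      source  = "gavinbunney/kubectl" # assumed community provider source
      version = ">= 1.7.0"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = ">= 2.7.1"
    }
    local = {
      source  = "hashicorp/local"
      version = "2.1.0"
    }
    null = {
      source  = "hashicorp/null"
      version = "3.1.0"
    }
  }
}
```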
Name | Version |
---|---|
aws | >= 3.66.0 |
http | 2.4.1 |
kubernetes | >= 2.7.1 |
Name | Source | Version |
---|---|---|
aws_eks | terraform-aws-modules/eks/aws | v17.20.0 |
aws_eks_fargate_profiles | ./modules/aws-eks-fargate-profiles | n/a |
aws_eks_managed_node_groups | ./modules/aws-eks-managed-node-groups | n/a |
aws_eks_self_managed_node_groups | ./modules/aws-eks-self-managed-node-groups | n/a |
aws_eks_teams | ./modules/aws-eks-teams | n/a |
aws_managed_prometheus | ./modules/aws-managed-prometheus | n/a |
eks_tags | ./modules/aws-resource-tags | n/a |
emr_on_eks | ./modules/emr-on-eks | n/a |
Name | Type |
---|---|
aws_kms_key.eks | resource |
kubernetes_config_map.amazon_vpc_cni | resource |
kubernetes_config_map.aws_auth | resource |
aws_availability_zones.available | data source |
aws_caller_identity.current | data source |
aws_eks_cluster.cluster | data source |
aws_eks_cluster_auth.cluster | data source |
aws_partition.current | data source |
aws_region.current | data source |
http_http.eks_cluster_readiness | data source |
Name | Description | Type | Default | Required |
---|---|---|---|---|
amazon_prometheus_workspace_alias | AWS Managed Prometheus workspace name | string | null | no |
application_teams | Map of maps of application teams to create | any | {} | no |
aws_auth_additional_labels | Additional Kubernetes labels applied on the aws-auth ConfigMap | map(string) | {} | no |
cluster_enabled_log_types | A list of the desired control plane logging to enable | list(string) | [ | no |
cluster_endpoint_private_access | Indicates whether or not the EKS private API server endpoint is enabled. Defaults to the EKS resource default, which is false | bool | false | no |
cluster_endpoint_public_access | Indicates whether or not the EKS public API server endpoint is enabled. Defaults to the EKS resource default, which is true | bool | true | no |
cluster_log_retention_in_days | Number of days to retain log events. Default retention: 90 days | number | 90 | no |
cluster_log_retention_period | Number of days to retain cluster logs | number | 7 | no |
create_eks | Create EKS cluster | bool | false | no |
emr_on_eks_teams | EMR on EKS teams configuration | any | {} | no |
enable_amazon_prometheus | Enable AWS Managed Prometheus service | bool | false | no |
enable_emr_on_eks | Enable EMR on EKS | bool | false | no |
enable_irsa | Enable IAM Roles for Service Accounts | bool | true | no |
enable_windows_support | Enable Windows support | bool | false | no |
environment | Environment area, e.g. prod or preprod | string | "preprod" | no |
fargate_profiles | Fargate profile configuration | any | {} | no |
kubernetes_version | Desired Kubernetes version. If you do not specify a value, the latest available version is used | string | "1.21" | no |
managed_node_groups | Managed node groups configuration | any | {} | no |
map_accounts | Additional AWS account numbers to add to the aws-auth ConfigMap | list(string) | [] | no |
map_roles | Additional IAM roles to add to the aws-auth ConfigMap | list(object({ | [] | no |
map_users | Additional IAM users to add to the aws-auth ConfigMap | list(object({ | [] | no |
org | Tenant or organization name, e.g. aws | string | "" | no |
platform_teams | Map of maps of platform teams to create | any | {} | no |
private_subnet_ids | List of private subnet IDs for the worker nodes | list(string) | n/a | yes |
public_subnet_ids | List of public subnet IDs for the worker nodes | list(string) | [] | no |
self_managed_node_groups | Self-managed node groups configuration | any | {} | no |
tags | Additional tags (e.g. map('BusinessUnit','XYZ')) | map(string) | {} | no |
tenant | Account name or unique account ID, e.g. apps, management, or aws007 | string | "aws" | no |
terraform_version | Terraform version | string | "Terraform" | no |
vpc_id | VPC ID | string | n/a | yes |
worker_additional_security_group_ids | A list of additional security group IDs to attach to worker instances | list(string) | [] | no |
worker_create_security_group | Whether to create a security group for the workers or attach the workers to worker_security_group_id | bool | true | no |
zone | Zone, e.g. dev, qa, load, or ops | string | "dev" | no |
Name | Description |
---|---|
amazon_prometheus_workspace_endpoint | Amazon Managed Prometheus Workspace Endpoint |
cluster_primary_security_group_id | EKS Cluster Security group ID |
cluster_security_group_id | EKS Control Plane Security Group ID |
configure_kubectl | Configure kubectl: make sure you're logged in with the correct AWS profile and run the following command to update your kubeconfig |
eks_cluster_id | Kubernetes Cluster Name |
eks_oidc_issuer_url | The URL of the EKS cluster OIDC issuer |
eks_oidc_provider_arn | The ARN of the OIDC provider if enable_irsa = true |
emr_on_eks_role_arn | IAM execution role ARN for EMR on EKS |
emr_on_eks_role_id | IAM execution role ID for EMR on EKS |
fargate_profiles | Outputs from EKS Fargate profiles groups |
fargate_profiles_aws_auth_config_map | Fargate profiles AWS auth map |
fargate_profiles_iam_role_arns | IAM role ARNs for Fargate profiles |
managed_node_group_aws_auth_config_map | Managed node groups AWS auth map |
managed_node_group_iam_role_arns | IAM role ARNs of managed node groups |
managed_node_groups | Outputs from EKS Managed node groups |
self_managed_node_group_autoscaling_groups | Autoscaling group names of self-managed node groups |
self_managed_node_group_aws_auth_config_map | Self-managed node groups AWS auth map |
self_managed_node_group_iam_role_arns | IAM role ARNs of self-managed node groups |
self_managed_node_groups | Outputs from EKS Self-managed node groups |
teams | Outputs from EKS Teams |
windows_node_group_aws_auth_config_map | Windows node groups AWS auth map |
worker_security_group_id | EKS Worker Security group ID created by EKS module |
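Because the module exposes these values as Terraform outputs, you can re-export the ones you need from your own root configuration, for example to print the kubeconfig update command after terraform apply. A minimal sketch:

```hcl
# Surface the command for updating your local kubeconfig, e.g.
# "aws eks --region <region> update-kubeconfig --name <cluster-name>"
output "configure_kubectl" {
  description = "Command to update kubeconfig for the new cluster"
  value       = module.eks-ssp.configure_kubectl
}

# Cluster name, useful for wiring other modules such as the kubernetes-addons submodule
output "eks_cluster_id" {
  description = "EKS cluster name"
  value       = module.eks-ssp.eks_cluster_id
}
```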
See CONTRIBUTING for more information.
This library is licensed under the MIT-0 License. See the LICENSE file.