A Terraform module to create an Amazon Elastic Kubernetes Service (EKS) cluster with Spot Ocean. The module will install the Ocean Controller into the cluster.
- Prerequisites
- Usage
- Examples
- Requirements
- Providers
- Modules
- Resources
- Inputs
- Outputs
- Documentation
- Getting Help
- Community
- Contributing
- License
For kubectl to connect and interface properly with your Amazon Elastic Kubernetes Service (EKS) cluster, you must install and configure the AWS Command Line Interface (CLI) along with the aws-iam-authenticator component. Installation instructions for both components are available in the AWS documentation.
module "ocean-eks" {
source = "spotinst/ocean-eks/spotinst"
# Credentials.
spotinst_token = var.spotinst_token
spotinst_account = var.spotinst_account
}
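The minimal example above only supplies credentials; everything else falls back to the defaults documented in the Inputs table below. A fuller configuration might look like the following sketch; the input names are taken from that table, while the concrete values are illustrative placeholders:

```hcl
module "ocean-eks" {
  source = "spotinst/ocean-eks/spotinst"

  # Credentials.
  spotinst_token   = var.spotinst_token
  spotinst_account = var.spotinst_account

  # Cluster settings (placeholder values).
  cluster_name    = "ocean-eks-example"
  cluster_version = "1.18"

  # Ocean capacity and Spot/On-Demand mix.
  min_size         = 1
  max_size         = 10
  desired_capacity = 2
  spot_percentage  = 100

  tags = {
    Environment = "dev"
  }
}
```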
Name | Version |
---|---|
terraform | >= 0.13.1 |
aws | ~> 3.37 |
kubernetes | ~> 2.0 |
random | ~> 3.1 |
spotinst | ~> 1.60 |
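If you pin provider versions yourself, the constraints above translate into a `required_providers` block along these lines (a minimal sketch; the registry sources shown are the conventional public-registry namespaces and are an assumption, not something stated in this README):

```hcl
terraform {
  required_version = ">= 0.13.1"

  required_providers {
    # Assumed public Terraform Registry sources.
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.37"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "~> 2.0"
    }
    random = {
      source  = "hashicorp/random"
      version = "~> 3.1"
    }
    spotinst = {
      source  = "spotinst/spotinst"
      version = "~> 1.60"
    }
  }
}
```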
Name | Version |
---|---|
aws | 3.62.0 |
random | 3.1.0 |
spotinst | 1.60.0 |
Name | Source | Version |
---|---|---|
eks | terraform-aws-modules/eks/aws | ~> 17.0 |
ocean-controller | spotinst/ocean-controller/spotinst | ~> 0.35 |
vpc | terraform-aws-modules/vpc/aws | >= 2.78.0 |
Name | Type |
---|---|
random_string.suffix | resource |
spotinst_ocean_aws.this | resource |
aws_availability_zones.available | data source |
aws_eks_cluster.cluster | data source |
aws_eks_cluster_auth.cluster | data source |
aws_region.current | data source |
Name | Description | Type | Default | Required |
---|---|---|---|---|
ami_id | The image ID for the EKS worker nodes. If none is provided, Terraform will search for the latest version of their EKS optimized worker AMI based on platform | string | null | no |
associate_public_ip_address | Associate a public IP address to worker nodes | bool | false | no |
attach_worker_cni_policy | Whether to attach the Amazon managed AmazonEKS_CNI_Policy IAM policy to the default worker IAM role. WARNING: If set false the permissions must be assigned to the aws-node DaemonSet pods via another method or nodes will not be able to join the cluster | bool | true | no |
autoscaler_cooldown | Sets cooldown period between scaling actions | number | null | no |
autoscaler_headroom_cpu_per_unit | Configures the number of CPUs to allocate the headroom (CPUs are denoted in millicores, where 1000 millicores = 1 vCPU) | number | null | no |
autoscaler_headroom_gpu_per_unit | Configures the number of GPUs to allocate the headroom | number | null | no |
autoscaler_headroom_memory_per_unit | Configures the amount of memory (MB) to allocate the headroom | number | null | no |
autoscaler_headroom_num_of_units | Sets the number of units to retain as headroom, where each unit has the defined headroom CPU and memory | number | null | no |
autoscaler_headroom_percentage | Sets the auto headroom percentage (a number in the range [0, 200]) which controls the percentage of headroom from the cluster. Relevant only when autoscale_is_auto_config toggled on | number | null | no |
autoscaler_is_auto_config | Controls whether Ocean Auto Scaler should be auto configured | bool | true | no |
autoscaler_is_enabled | Controls whether Ocean Auto Scaler should be enabled | bool | true | no |
autoscaler_max_scale_down_percentage | Sets the maximum percentage (a number in the range [1, 100]) to scale down | number | null | no |
autoscaler_resource_limits_max_memory_gib | Sets the maximum memory in GiB units that can be allocated to the cluster | number | null | no |
autoscaler_resource_limits_max_vcpu | Sets the maximum cpu in vCPU units that can be allocated to the cluster | number | null | no |
aws_auth_additional_labels | Additional kubernetes labels applied on aws-auth ConfigMap | map(string) | {} | no |
blacklist | List of instance types not allowed in the Ocean cluster (whitelist and blacklist are mutually exclusive) | list(string) | null | no |
cidr | The CIDR block for the VPC | string | "10.0.0.0/16" | no |
cluster_create_endpoint_private_access_sg_rule | Whether to create security group rules for the access to the Amazon EKS private API server endpoint | bool | false | no |
cluster_create_security_group | Whether to create a security group for the cluster or attach the cluster to cluster_security_group_id | bool | true | no |
cluster_create_timeout | Timeout value when creating the EKS cluster | string | "30m" | no |
cluster_delete_timeout | Timeout value when deleting the EKS cluster | string | "15m" | no |
cluster_enabled_log_types | A list of the desired control plane logging to enable. For more information, see Amazon EKS Control Plane Logging documentation (https://docs.aws.amazon.com/eks/latest/userguide/control-plane-logs.html) | list(string) | [] | no |
cluster_encryption_config | Configuration block with encryption configuration for the cluster. See examples/secrets_encryption/main.tf for example format | list(object({ | [] | no |
cluster_endpoint_private_access | Indicates whether or not the Amazon EKS private API server endpoint is enabled | bool | false | no |
cluster_endpoint_private_access_cidrs | List of CIDR blocks which can access the Amazon EKS private API server endpoint | list(string) | null | no |
cluster_endpoint_public_access | Indicates whether or not the Amazon EKS public API server endpoint is enabled | bool | true | no |
cluster_endpoint_public_access_cidrs | List of CIDR blocks which can access the Amazon EKS public API server endpoint | list(string) | [ | no |
cluster_iam_role_name | IAM role name for the cluster. Only applicable if manage_cluster_iam_resources is set to false | string | "" | no |
cluster_identifier | Cluster identifier | string | null | no |
cluster_log_kms_key_id | If a KMS Key ARN is set, this key will be used to encrypt the corresponding log group. Please be sure that the KMS Key has an appropriate key policy (https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/encrypt-log-data-kms.html) | string | "" | no |
cluster_log_retention_in_days | Number of days to retain log events. Default retention - 90 days | number | 90 | no |
cluster_name | Name of the EKS cluster. Also used as a prefix in names of related resources | string | null | no |
cluster_security_group_id | If provided, the EKS cluster will be attached to this security group. If not given, a security group will be created with necessary ingress/egress to work with the workers | string | "" | no |
cluster_version | Kubernetes version to use for the EKS cluster | string | "1.18" | no |
controller_disable_auto_update | Controls whether the auto-update feature should be disabled for the Ocean Controller | bool | false | no |
controller_image | Set the Docker image name for the Ocean Controller that should be deployed | string | "gcr.io/spotinst-artifacts/kubernetes-cluster-controller" | no |
controller_node_selector | Specifies the node selector which must match a node's labels for the Ocean Controller resources to be scheduled on that node | map(string) | null | no |
create_eks | Controls whether EKS resources should be created (it affects almost all resources) | bool | true | no |
create_fargate_pod_execution_role | Controls whether the EKS Fargate pod execution IAM role should be created | bool | true | no |
create_ocean | Controls whether Ocean should be created (it affects all Ocean resources) | bool | true | no |
desired_capacity | The number of worker nodes to launch and maintain in the Ocean cluster | number | 1 | no |
eks_oidc_root_ca_thumbprint | Thumbprint of Root CA for EKS OIDC, Valid until 2037 | string | "9e99a48a9960b14926bb7f3b02e22da2b0ab7280" | no |
enable_irsa | Whether to create OpenID Connect Provider for EKS to enable IRSA | bool | false | no |
enable_nat_gateway | Should be true if you want to provision NAT Gateways for each of your private networks | bool | true | no |
external_nat_ip_ids | List of EIP IDs to be assigned to the NAT Gateways (used in combination with reuse_nat_ips) | list(string) | [] | no |
fargate_pod_execution_role_name | The IAM Role that provides permissions for the EKS Fargate Profile | string | null | no |
fargate_profiles | Fargate profiles to create. See fargate_profile keys section in fargate submodule's README.md for more details | any | {} | no |
iam_path | If provided, all IAM roles will be created on this path | string | "/" | no |
image_pull_policy | Image pull policy (one of: Always, Never, IfNotPresent) | string | "Always" | no |
key_name | The key pair to attach to the worker nodes launched by Ocean | string | null | no |
kubeconfig_aws_authenticator_additional_args | Any additional arguments to pass to the authenticator such as the role to assume. e.g. ["-r", "MyEksRole"] | list(string) | [] | no |
kubeconfig_aws_authenticator_command | Command to use to fetch AWS EKS credentials | string | "aws-iam-authenticator" | no |
kubeconfig_aws_authenticator_command_args | Default arguments passed to the authenticator command. Defaults to [token -i $cluster_name] | list(string) | [] | no |
kubeconfig_aws_authenticator_env_variables | Environment variables that should be used when executing the authenticator. e.g. { AWS_PROFILE = "eks"} | map(string) | {} | no |
kubeconfig_file_permission | File permission of the Kubectl config file containing cluster configuration saved to kubeconfig_output_path | string | "0600" | no |
kubeconfig_name | Override the default name used for items kubeconfig | string | "" | no |
kubeconfig_output_path | Where to save the Kubectl config file (if write_kubeconfig = true). Assumed to be a directory if the value ends with a forward slash / | string | "./" | no |
manage_aws_auth | Whether to apply the aws-auth configmap file | bool | true | no |
manage_cluster_iam_resources | Whether to let the module manage cluster IAM resources. If set to false, cluster_iam_role_name must be specified | bool | true | no |
manage_worker_iam_resources | Whether to let the module manage worker IAM resources. If set to false, iam_instance_profile_name must be specified for workers | bool | true | no |
map_accounts | Additional AWS account numbers to add to the aws-auth configmap. See examples/basic/variables.tf for example format | list(string) | [] | no |
map_roles | Additional IAM roles to add to the aws-auth configmap. See examples/basic/variables.tf for example format | list(object({ | [] | no |
map_users | Additional IAM users to add to the aws-auth configmap. See examples/basic/variables.tf for example format | list(object({ | [] | no |
max_size | The upper limit of worker nodes the Ocean cluster can scale up to | number | null | no |
min_size | The lower limit of worker nodes the Ocean cluster can scale down to | number | null | no |
node_groups | Map of map of node groups to create. See node_groups module's documentation for more details | any | {} | no |
node_groups_defaults | Map of values to be applied to all node groups. See node_groups module's documentation for more details | any | {} | no |
one_nat_gateway_per_az | Should be true if you want only one NAT Gateway per availability zone | bool | false | no |
permissions_boundary | If provided, all IAM roles will be created with this permissions boundary attached | string | null | no |
private_subnets | A list of private subnets inside the VPC | list(string) | [ | no |
public_subnets | A list of public subnets inside the VPC | list(string) | [ | no |
reuse_nat_ips | Should be true if you don't want EIPs to be created for your NAT Gateways and will instead pass them in via the 'external_nat_ip_ids' variable | bool | false | no |
root_volume_size | The size (in GiB) to allocate for the root volume | string | null | no |
single_nat_gateway | Should be true if you want to provision a single shared NAT Gateway across all of your private networks | bool | true | no |
spot_percentage | Sets the percentage of nodes that should be Spot (vs On-Demand) in the cluster | number | 100 | no |
spotinst_account | Spot account ID | string | n/a | yes |
spotinst_token | Spot Personal Access token | string | n/a | yes |
subnets | A list of subnets to place the EKS cluster and workers within | list(string) | null | no |
tags | A map of tags to add to all resources. Tags added to launch configuration or templates override these values for ASG Tags only | map(string) | {} | no |
update_policy | Configures the cluster update policy | object({ | null | no |
vpc_id | VPC where the cluster and workers will be deployed | string | null | no |
whitelist | List of instance types allowed in the Ocean cluster (whitelist and blacklist are mutually exclusive) | list(string) | null | no |
worker_additional_security_group_ids | A list of additional security group ids to attach to worker instances | list(string) | [] | no |
worker_ami_name_filter | Name filter for AWS EKS worker AMI. If not provided, the latest official AMI for the specified 'cluster_version' is used | string | "" | no |
worker_ami_name_filter_windows | Name filter for AWS EKS Windows worker AMI. If not provided, the latest official AMI for the specified 'cluster_version' is used | string | "" | no |
worker_ami_owner_id | The ID of the owner for the AMI to use for the AWS EKS workers. Valid values are an AWS account ID, 'self' (the current account), or an AWS owner alias (e.g. 'amazon', 'aws-marketplace', 'microsoft') | string | "amazon" | no |
worker_ami_owner_id_windows | The ID of the owner for the AMI to use for the AWS EKS Windows workers. Valid values are an AWS account ID, 'self' (the current account), or an AWS owner alias (e.g. 'amazon', 'aws-marketplace', 'microsoft') | string | "amazon" | no |
worker_create_cluster_primary_security_group_rules | Whether to create security group rules to allow communication between pods on workers and pods using the primary cluster security group | bool | false | no |
worker_create_initial_lifecycle_hooks | Whether to create initial lifecycle hooks provided in worker groups | bool | false | no |
worker_create_security_group | Whether to create a security group for the workers or attach the workers to worker_security_group_id | bool | true | no |
worker_groups | A list of maps defining worker group configurations to be defined using AWS Launch Configurations. See workers_group_defaults for valid keys | any | [ | no |
worker_groups_launch_template | A list of maps defining worker group configurations to be defined using AWS Launch Templates. See workers_group_defaults for valid keys | any | [] | no |
worker_security_group_id | If provided, all workers will be attached to this security group. If not given, a security group will be created with necessary ingress/egress to work with the EKS cluster | string | "" | no |
worker_sg_ingress_from_port | Minimum port number from which pods will accept communication. Must be changed to a lower value if some pods in your cluster will expose a port lower than 1025 (e.g. 22, 80, or 443) | number | 1025 | no |
worker_user_data | User data to pass to worker node instances. If none is provided, a default Linux EKS bootstrap script is used | string | null | no |
workers_additional_policies | Additional policies to be added to workers | list(string) | [] | no |
workers_group_defaults | Override default values for target groups. See workers_group_defaults_defaults in local.tf for valid keys | any | {} | no |
workers_role_name | User defined workers role name | string | "" | no |
write_kubeconfig | Whether to write a Kubectl config file containing the cluster configuration. Saved to kubeconfig_output_path | bool | true | no |
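By default the module provisions its own VPC from the `cidr`, `private_subnets`, `public_subnets`, and NAT gateway inputs above. The sketch below shows the alternative pattern of pointing the module at an existing network via `vpc_id` and `subnets`, and restricting Ocean to specific instance types with `whitelist`. The IDs and instance types are placeholders, and the assumption that supplying `vpc_id` skips the built-in VPC creation should be verified against the module source:

```hcl
module "ocean-eks" {
  source = "spotinst/ocean-eks/spotinst"

  spotinst_token   = var.spotinst_token
  spotinst_account = var.spotinst_account

  # Reuse an existing network (placeholder IDs).
  vpc_id  = "vpc-0123456789abcdef0"
  subnets = ["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"]

  # Only launch these instance types (whitelist and blacklist are mutually exclusive).
  whitelist = ["m5.large", "m5.xlarge", "c5.large"]
}
```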
Name | Description |
---|---|
cloudwatch_log_group_arn | ARN of cloudwatch log group created |
cloudwatch_log_group_name | Name of cloudwatch log group created |
cluster_arn | The Amazon Resource Name (ARN) of the cluster |
cluster_ca_certificate | Cluster CA certificate (base64 encoded) |
cluster_certificate_authority_data | Nested attribute containing certificate-authority-data for your cluster. This is the base64 encoded certificate data required to communicate with your cluster |
cluster_endpoint | The endpoint for your EKS Kubernetes API |
cluster_iam_role_arn | IAM role ARN of the EKS cluster |
cluster_iam_role_name | IAM role name of the EKS cluster |
cluster_id | The name/id of the EKS cluster. Will block on cluster creation until the cluster is really ready |
cluster_oidc_issuer_url | The URL on the EKS cluster OIDC Issuer |
cluster_primary_security_group_id | The cluster primary security group ID created by the EKS cluster on 1.14 or later. Referred to as 'Cluster security group' in the EKS console |
cluster_security_group_id | Security group ID attached to the EKS cluster. On 1.14 or later, this is the 'Additional security groups' in the EKS console |
cluster_token | The token to use to authenticate with the cluster |
cluster_version | The Kubernetes server version for the EKS cluster |
config_map_aws_auth | A kubernetes configuration to authenticate to this EKS cluster |
kubeconfig | kubectl config file contents for this EKS cluster |
kubeconfig_filename | The filename of the generated kubectl config |
ocean_cluster_id | The ID of the Ocean cluster |
ocean_controller_id | The ID of the Ocean controller |
oidc_provider_arn | The ARN of the OIDC Provider if enable_irsa = true |
security_group_rule_cluster_https_worker_ingress | Security group rule responsible for allowing pods to communicate with the EKS cluster API |
worker_iam_instance_profile_arns | Default IAM instance profile ARN for EKS worker groups |
worker_iam_instance_profile_names | Default IAM instance profile name for EKS worker groups |
worker_iam_role_arn | Default IAM role ARN for EKS worker groups |
worker_iam_role_name | Default IAM role name for EKS worker groups |
worker_security_group_id | Security group ID attached to the EKS workers |
workers_default_ami_id | ID of the default worker group AMI |
workers_user_data | User data of worker groups |
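Outputs such as `cluster_endpoint`, `cluster_ca_certificate`, and `cluster_token` can be fed into other providers or tooling. A minimal sketch, assuming the module block is named `ocean-eks`, that wires them into the Kubernetes provider:

```hcl
provider "kubernetes" {
  host                   = module.ocean-eks.cluster_endpoint
  cluster_ca_certificate = base64decode(module.ocean-eks.cluster_ca_certificate)
  token                  = module.ocean-eks.cluster_token
}
```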
If you're new to Spot and want to get started, please check out our Getting Started guide, available on the Spot Documentation website.
We use GitHub issues for tracking bugs and feature requests. Please use these community resources for getting help:
- Ask a question on Stack Overflow and tag it with terraform-spotinst.
- Join our Spot community on Slack.
- Open an issue.
Please see the contribution guidelines.
Code is licensed under the Apache License 2.0.