Releases: kbst/terraform-kubestack
v0.19.2-beta.0
- EKS: Allow configuring Service CIDR by @pst in #322
- EKS: Allow configuring Service CIDR by @leewardbound in #321
- AKS: Autoscaler profiles by @to266 in #324
- EKS: Support node release version config by @mark5cinco in #323
- Fix deprecated attributes in new provider versions by @pst in #326
- GKE: Fix taints dynamic block syntax by @pst in #327
- Update CLI versions by @pst in #328
- GKE: Add support to select release channel by @pst in #330
- Allow setting GKE cluster deletion protection by @pst in #329
- GKE: Add support for maintenance exclusions by @mark5cinco in #333
- Guest accelerator settings passthrough for GKE node pools by @rzk in #337
- Adds labels support for GKE by @rzk in #336
- Fix azure provider renamed attributes by @pst in #340
- GKE: Fix maintenance exclusions, node pools in different CIDR ranges by @kgieseking in #339
- Fix duplicate variables from previous PRs, defaults for maintenance exclusions by @kgieseking in #344
- Fix logic for empty node pool instance tags by @kgieseking in #345
- Update versions by @pst in #346
- Add write packages permission for docker push by @pst in #347
- AKS: Set defaults to prevent showing diff on every plan by @pst in #348
New Contributors
- @leewardbound made their first contribution in #321
- @kgieseking made their first contribution in #344
Full Changelog: v0.19.1-beta.0...v0.19.2-beta.0
Upgrade Notes
Update the framework versions for all modules and the Dockerfile, then run terraform init --upgrade to update the locked provider versions.
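Concretely, this means bumping the ref in each cluster module's source and the base image tag in the Dockerfile before re-running init. A minimal sketch, assuming the standard Kubestack module layout (module path and name are illustrative):

```hcl
module "eks_zero" {
  # bump the ref to the new framework release
  source = "github.com/kbst/terraform-kubestack//aws/cluster?ref=v0.19.2-beta.0"

  # [...]
}
```

After updating the refs, terraform init --upgrade refreshes the provider versions recorded in the lock file.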
v0.19.1-beta.0
Full Changelog: v0.19.0-beta.0...v0.19.1-beta.0
Upgrade Notes
Update the framework versions for all modules and the Dockerfile, then run terraform init --upgrade to update the locked provider versions.
v0.19.0-beta.0
- Add guest_accelerator for GPU-capable workloads on GKE by @anhdle14 in #303
- Allow passing additional labels/tags to AKS clusters by @to266 in #311
- EKS: Allow configuring metadata service for node pools by @pst in #313
- GKE: Add location policy required from v1.24.1 by @pst in #297
- Update CLI versions by @pst in #314
Full Changelog: v0.18.2-beta.0...v0.19.0-beta.0
Upgrade Notes
Update the framework versions for all modules and the Dockerfile, then run terraform init --upgrade to update the locked provider versions.
v0.18.2-beta.0
- Add gke-gcloud-auth-plugin required for 1.24 GKE clusters by @pst in #302
- chore: terraform fmt -recursive by @anhdle14 in #304
- Build multi-arch container images by @pst in #305
- Update Github actions versions by @pst in #306
- Update CLI versions by @pst in #307
- Update README by @pst in #308
Full Changelog: v0.18.1-beta.0...v0.18.2-beta.0
Upgrade Notes
Update the framework versions for all modules and the Dockerfile, then run terraform init --upgrade to update the locked provider versions.
v0.18.1-beta.0
What's Changed
- GKE: Allow setting logging_config enabled components by @rzk in #298
- GKE: Allow setting monitoring_config enabled components by @rzk in #299
- EKS: Unlock tls provider version after upstream issue is resolved by @pst in #300
- Update CLI versions in Dockerfile by @pst in #301
Full Changelog: v0.18.0-beta.0...v0.18.1-beta.0
Upgrade Notes
Update the framework versions for all modules and the Dockerfile, then run terraform init --upgrade to update the locked provider versions.
v0.18.0-beta.0
- Migrate to final optional object attributes syntax #291
- Update GKE default image type to COS_containerd #292
- Update CLI versions to latest #294
- Handle breaking azurerm provider attribute changes #276
Full Changelog: v0.17.1-beta.0...v0.18.0-beta.0
Upgrade Notes
Upgrade the versions for each module and the source image in the Dockerfile. Also run terraform init --upgrade
to update the provider versions in the lock file.
There are no specific steps required for EKS. There are two optional steps for AKS and GKE. See below for details.
AKS
The azurerm provider has a number of breaking changes in the latest version. Most have been handled inside the module and do not require special steps during the upgrade. There is one exception: the reserved ingress IP.
Upstream refactored how zones are handled across resources. Kubestack let Azure handle the zones for the ingress IP, but previous provider versions stored them in state. The upgraded provider wants to destroy and recreate the IP because the zones in state don't match what's specified in code (null).
Users who do not want the IP to be recreated have to set the zones explicitly to match what is in state:
configuration = {
# Settings for Apps-cluster
apps = {
# [...]
default_ingress_ip_zones = "1,2,3"
# [...]
}
# [...]
}
GKE
Starting with Kubernetes v1.24, GKE defaults to COS_containerd instead of COS for the node image type. Kubestack follows upstream and defaults to the containerd variant for new node pools starting with this release. For existing node pools, set cluster_image_type for the default node pool configured as part of the cluster module, or image_type for additional node pools, to either COS_containerd or COS to set the image type explicitly for the respective node pool.
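For example, to pin an existing default node pool to the previous image type and avoid a node pool replacement, the configuration could look like this (cluster name and the surrounding configuration block are illustrative):

```hcl
configuration = {
  apps = {
    # [...]

    # pin the default node pool to the previous image type
    cluster_image_type = "COS"
  }

  # [...]
}
```

Additional node pool modules would use the image_type attribute instead, as described above.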
v0.17.1-beta.0
- Lock TLS provider version due to upstream issue by @pst e606b6e
- Unbundle Nginx ingress from starters by @pst in #286
- Update CLI versions esp. Terraform after upstream issue workaround by @pst in #287
Full Changelog: v0.17.0-beta.0...v0.17.1-beta.0
Upgrade Notes
Update the framework versions for all modules and the Dockerfile, then run terraform init --upgrade to update the locked provider versions.
v0.17.0-beta.0
- EKS: Remove duplicate lines by @mark5cinco in #278
- EKS: Fix deprecation warning for aws_subnet_ids by @mark5cinco in #277
- Refactor EKS and GKE kubeconfig to not use exec anymore by @pst in #279
- Remove the kubernetes providers configured inside cluster modules by @pst in #280
- Update CLI versions in container image by @pst in #281
- Fix require kubernetes provider in EKS and GKE cluster modules by @pst in #282
- Update quickstarts to configure provider outside cluster modules by @pst in #283
Full Changelog: v0.16.3-beta.0...v0.17.0-beta.0
Upgrade Notes
EKS and GKE
Until now, both EKS and GKE had a kubernetes provider configured inside the cluster module. The change away from exec-based kubeconfigs allowed removing the providers configured inside the modules, correcting this anti-pattern.
This change requires that upgrading users configure the kubernetes provider in the root module and then pass the configured provider into the cluster module.
EKS Example
module "eks_zero" {
providers = {
aws = aws.eks_zero
# pass kubernetes provider to the EKS cluster module
kubernetes = kubernetes.eks_zero
}
# [...]
}
# configure aliased kubernetes provider for EKS
provider "kubernetes" {
alias = "eks_zero"
host = local.eks_zero_kubeconfig["clusters"][0]["cluster"]["server"]
cluster_ca_certificate = base64decode(local.eks_zero_kubeconfig["clusters"][0]["cluster"]["certificate-authority-data"])
token = local.eks_zero_kubeconfig["users"][0]["user"]["token"]
}
GKE Example
module "gke_zero" {
# add providers block and pass kubernetes provider for GKE cluster module
providers = {
kubernetes = kubernetes.gke_zero
}
# [...]
}
# configure aliased kubernetes provider for GKE
provider "kubernetes" {
alias = "gke_zero"
host = local.gke_zero_kubeconfig["clusters"][0]["cluster"]["server"]
cluster_ca_certificate = base64decode(local.gke_zero_kubeconfig["clusters"][0]["cluster"]["certificate-authority-data"])
token = local.gke_zero_kubeconfig["users"][0]["user"]["token"]
}
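Both examples above reference a parsed kubeconfig local. A minimal sketch of how that local could be defined, assuming the cluster module exposes its kubeconfig as a YAML string output named kubeconfig (output name illustrative):

```hcl
locals {
  # parse the module's kubeconfig output into a map for the provider block
  gke_zero_kubeconfig = yamldecode(module.gke_zero.kubeconfig)
}
```

The same pattern applies to the EKS example with the corresponding module reference.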
v0.16.3-beta.0
v0.16.2-beta.0
- EKS: Stop using EKS-D Kind images #264
- EKS: Allow configuring VPC DNS options #265 - thanks @mark5cinco
- EKS: Allow disabling all logging types by setting empty string #269 - thanks @krpatel19
- GKE: Allow setting KMS key for secret encryption at rest #272 - thanks @mark5cinco
- EKS: Allow setting KMS key for secret encryption at rest #270 - thanks @mark5cinco
Upgrade Notes
No special steps required. Update the module versions and the image tag in the Dockerfile. Then run the pipeline to apply.