From 14fe269673e4c92e59a913fe3a4c7e1850c7f4c1 Mon Sep 17 00:00:00 2001
From: Saurabh Parekh
Date: Sun, 2 Jun 2024 21:13:32 -0700
Subject: [PATCH] Update docs for worker node group config

---
 .../getting-started/baremetal/bare-spec.md    | 28 +++++++++----------
 .../getting-started/cloudstack/cloud-spec.md  | 16 +++++------
 .../getting-started/nutanix/nutanix-spec.md   | 16 +++++------
 .../en/docs/getting-started/snow/snow-spec.md | 16 +++++------
 .../getting-started/vsphere/vsphere-spec.md   | 16 +++++------
 5 files changed, 46 insertions(+), 46 deletions(-)

diff --git a/docs/content/en/docs/getting-started/baremetal/bare-spec.md b/docs/content/en/docs/getting-started/baremetal/bare-spec.md
index 73e66dd2e03f..879b4f6c40f9 100644
--- a/docs/content/en/docs/getting-started/baremetal/bare-spec.md
+++ b/docs/content/en/docs/getting-started/baremetal/bare-spec.md
@@ -183,63 +183,63 @@ You can omit `workerNodeGroupConfigurations` when creating Bare Metal clusters.
 >**_NOTE:_** Empty `workerNodeGroupConfigurations` is not supported when Kubernetes version <= 1.21.
 
-### workerNodeGroupConfigurations.count (optional)
+### workerNodeGroupConfigurations[*].count (optional)
 Number of worker nodes.
 Optional if autoscalingConfiguration is used, in which case count will default to `autoscalingConfiguration.minCount`.
 
 Refer to [troubleshooting machine health check remediation not allowed]({{< relref "../../troubleshooting/troubleshooting/#machine-health-check-shows-remediation-is-not-allowed" >}}) and choose a sufficient number to allow machine health check remediation.
 
-### workerNodeGroupConfigurations.machineGroupRef (required)
+### workerNodeGroupConfigurations[*].machineGroupRef (required)
 Refers to the Kubernetes object with Tinkerbell-specific configuration for your nodes.
 See `TinkerbellMachineConfig Fields` below.
 
-### workerNodeGroupConfigurations.name (required)
+### workerNodeGroupConfigurations[*].name (required)
 Name of the worker node group (default: md-0)
 
-### workerNodeGroupConfigurations.autoscalingConfiguration (optional)
+### workerNodeGroupConfigurations[*].autoscalingConfiguration (optional)
 Configuration parameters for Cluster Autoscaler.
 
 >**_NOTE:_** Autoscaling configuration is not supported when using the `InPlace` upgrade rollout strategy.
 
-### workerNodeGroupConfigurations.autoscalingConfiguration.minCount (optional)
+### workerNodeGroupConfigurations[*].autoscalingConfiguration.minCount (optional)
 Minimum number of nodes for this node group's autoscaling configuration.
 
-### workerNodeGroupConfigurations.autoscalingConfiguration.maxCount (optional)
+### workerNodeGroupConfigurations[*].autoscalingConfiguration.maxCount (optional)
 Maximum number of nodes for this node group's autoscaling configuration.
 
-### workerNodeGroupConfigurations.taints (optional)
+### workerNodeGroupConfigurations[*].taints (optional)
 A list of taints to apply to the nodes in the worker node group.
 
 Modifying the taints associated with a worker node group configuration will cause new nodes to be rolled out, replacing the existing nodes associated with the configuration.
 At least one node group must not have `NoSchedule` or `NoExecute` taints applied to it.
 
-### workerNodeGroupConfigurations.labels (optional)
+### workerNodeGroupConfigurations[*].labels (optional)
 A list of labels to apply to the nodes in the worker node group.
 This is in addition to the labels that EKS Anywhere will add by default.
 
 Modifying the labels associated with a worker node group configuration will cause new nodes to be rolled out, replacing the existing nodes associated with the configuration.
 
-### workerNodeGroupConfigurations.kubernetesVersion (optional)
+### workerNodeGroupConfigurations[*].kubernetesVersion (optional)
 The Kubernetes version you want to use for this worker node group.
 [Supported values]({{< relref "../../concepts/support-versions/#kubernetes-versions" >}}): `1.28`, `1.27`, `1.26`, `1.25`, `1.24`
 
 Must be less than or equal to the cluster `kubernetesVersion` defined at the root level of the cluster spec.
 The worker node kubernetesVersion must be no more than two minor Kubernetes versions lower than the cluster control plane's Kubernetes version.
 Removing `workerNodeGroupConfiguration.kubernetesVersion` will trigger an upgrade of the node group to the `kubernetesVersion` defined at the root level of the cluster spec.
 
-#### workerNodeGroupConfigurations.upgradeRolloutStrategy (optional)
+#### workerNodeGroupConfigurations[*].upgradeRolloutStrategy (optional)
 Configuration parameters for upgrade strategy.
 
-#### workerNodeGroupConfigurations.upgradeRolloutStrategy.type (optional)
+#### workerNodeGroupConfigurations[*].upgradeRolloutStrategy.type (optional)
 Default: `RollingUpdate`
 
 Type of rollout strategy. Supported values: `RollingUpdate`, `InPlace`.
 
 >**_NOTE:_** The upgrade rollout strategy type must be the same for all control plane and worker nodes.
 
-#### workerNodeGroupConfigurations.upgradeRolloutStrategy.rollingUpdate (optional)
+#### workerNodeGroupConfigurations[*].upgradeRolloutStrategy.rollingUpdate (optional)
 Configuration parameters for customizing rolling upgrade behavior.
 
 >**_NOTE:_** The rolling update parameters can only be configured if `upgradeRolloutStrategy.type` is `RollingUpdate`.
 
-#### workerNodeGroupConfigurations.upgradeRolloutStrategy.rollingUpdate.maxSurge (optional)
+#### workerNodeGroupConfigurations[*].upgradeRolloutStrategy.rollingUpdate.maxSurge (optional)
 Default: 1
 
 This cannot be 0 if maxUnavailable is 0.
@@ -248,7 +248,7 @@ The maximum number of machines that can be scheduled above the desired number of
 Example: When this is set to n, the new worker node group can be scaled up immediately by n when the rolling upgrade starts.
 Total number of machines in the cluster (old + new) never exceeds (desired number of machines + n).
 Once scale down happens and old machines are brought down, the new worker node group can be scaled up further, ensuring that the total number of machines running at any time does not exceed the desired number of machines + n.
 
-#### workerNodeGroupConfigurations.upgradeRolloutStrategy.rollingUpdate.maxUnavailable (optional)
+#### workerNodeGroupConfigurations[*].upgradeRolloutStrategy.rollingUpdate.maxUnavailable (optional)
 Default: 0
 
 This cannot be 0 if maxSurge is 0.
diff --git a/docs/content/en/docs/getting-started/cloudstack/cloud-spec.md b/docs/content/en/docs/getting-started/cloudstack/cloud-spec.md
index 3fe18e909908..4f783ebf50d8 100644
--- a/docs/content/en/docs/getting-started/cloudstack/cloud-spec.md
+++ b/docs/content/en/docs/getting-started/cloudstack/cloud-spec.md
@@ -234,31 +234,31 @@ If this is a standalone cluster or if it were serving as the management cluster
 This takes in a list of node groups that you can define for your workers. You may define one or more worker node groups.
 
-### workerNodeGroupConfigurations.count (required)
+### workerNodeGroupConfigurations[*].count (required)
 Number of worker nodes.
 Optional if autoscalingConfiguration is used, in which case count will default to `autoscalingConfiguration.minCount`.
 
 Refer to [troubleshooting machine health check remediation not allowed]({{< relref "../../troubleshooting/troubleshooting/#machine-health-check-shows-remediation-is-not-allowed" >}}) and choose a sufficient number to allow machine health check remediation.
 
-### workerNodeGroupConfigurations.machineGroupRef (required)
+### workerNodeGroupConfigurations[*].machineGroupRef (required)
 Refers to the Kubernetes object with CloudStack-specific configuration for your nodes.
 See `CloudStackMachineConfig Fields` below.
 
-### workerNodeGroupConfigurations.name (required)
+### workerNodeGroupConfigurations[*].name (required)
 Name of the worker node group (default: md-0)
 
-### workerNodeGroupConfigurations.autoscalingConfiguration.minCount (optional)
+### workerNodeGroupConfigurations[*].autoscalingConfiguration.minCount (optional)
 Minimum number of nodes for this node group's autoscaling configuration.
 
-### workerNodeGroupConfigurations.autoscalingConfiguration.maxCount (optional)
+### workerNodeGroupConfigurations[*].autoscalingConfiguration.maxCount (optional)
 Maximum number of nodes for this node group's autoscaling configuration.
 
-### workerNodeGroupConfigurations.taints (optional)
+### workerNodeGroupConfigurations[*].taints (optional)
 A list of taints to apply to the nodes in the worker node group.
 
 Modifying the taints associated with a worker node group configuration will cause new nodes to be rolled out, replacing the existing nodes associated with the configuration.
 At least one node group must not have `NoSchedule` or `NoExecute` taints applied to it.
 
-### workerNodeGroupConfigurations.labels (optional)
+### workerNodeGroupConfigurations[*].labels (optional)
 A list of labels to apply to the nodes in the worker node group.
 This is in addition to the labels that EKS Anywhere will add by default.
 A special label value is supported by the CAPC provider:
@@ -272,7 +272,7 @@ The `ds.meta_data.failuredomain` value will be replaced with a failuredomain nam
 Modifying the labels associated with a worker node group configuration will cause new nodes to be rolled out, replacing the existing nodes associated with the configuration.
 
-### workerNodeGroupConfigurations.kubernetesVersion (optional)
+### workerNodeGroupConfigurations[*].kubernetesVersion (optional)
 The Kubernetes version you want to use for this worker node group.
 Supported values: 1.28, 1.27, 1.26, 1.25, 1.24
 
 ## CloudStackDatacenterConfig
diff --git a/docs/content/en/docs/getting-started/nutanix/nutanix-spec.md b/docs/content/en/docs/getting-started/nutanix/nutanix-spec.md
index c2ca778fc1e0..e004e4e09902 100644
--- a/docs/content/en/docs/getting-started/nutanix/nutanix-spec.md
+++ b/docs/content/en/docs/getting-started/nutanix/nutanix-spec.md
@@ -189,24 +189,24 @@ creation process are [here]({{< relref "./nutanix-prereq/#prepare-a-nutanix-envi
 ### workerNodeGroupConfigurations (required)
 This takes in a list of node groups that you can define for your workers. You may define one or more worker node groups.
 
-### workerNodeGroupConfigurations.count (required)
+### workerNodeGroupConfigurations[*].count (required)
 Number of worker nodes.
 Optional if `autoscalingConfiguration` is used, in which case count will default to `autoscalingConfiguration.minCount`.
 
 Refer to [troubleshooting machine health check remediation not allowed]({{< relref "../../troubleshooting/troubleshooting/#machine-health-check-shows-remediation-is-not-allowed" >}}) and choose a sufficient number to allow machine health check remediation.
 
-### workerNodeGroupConfigurations.machineGroupRef (required)
+### workerNodeGroupConfigurations[*].machineGroupRef (required)
 Refers to the Kubernetes object with Nutanix-specific configuration for your nodes.
 See `NutanixMachineConfig` fields below.
 
-### workerNodeGroupConfigurations.name (required)
+### workerNodeGroupConfigurations[*].name (required)
 Name of the worker node group (default: `md-0`)
 
-### workerNodeGroupConfigurations.autoscalingConfiguration.minCount (optional)
-Minimum number of nodes for this node group’s autoscaling configuration.
+### workerNodeGroupConfigurations[*].autoscalingConfiguration.minCount (optional)
+Minimum number of nodes for this node group's autoscaling configuration.
 
-### workerNodeGroupConfigurations.autoscalingConfiguration.maxCount (optional)
-Maximum number of nodes for this node group’s autoscaling configuration.
+### workerNodeGroupConfigurations[*].autoscalingConfiguration.maxCount (optional)
+Maximum number of nodes for this node group's autoscaling configuration.
 
-### workerNodeGroupConfigurations.kubernetesVersion (optional)
+### workerNodeGroupConfigurations[*].kubernetesVersion (optional)
 The Kubernetes version you want to use for this worker node group. Supported values: 1.28, 1.27, 1.26, 1.25, 1.24
 
 ### externalEtcdConfiguration.count (optional)
diff --git a/docs/content/en/docs/getting-started/snow/snow-spec.md b/docs/content/en/docs/getting-started/snow/snow-spec.md
index 3f1834b9d6fe..17debe8f5f0b 100644
--- a/docs/content/en/docs/getting-started/snow/snow-spec.md
+++ b/docs/content/en/docs/getting-started/snow/snow-spec.md
@@ -146,38 +146,38 @@ the existing nodes.
 This takes in a list of node groups that you can define for your workers. You may define one or more worker node groups.
 
-### workerNodeGroupConfigurations.count (required)
+### workerNodeGroupConfigurations[*].count (required)
 Number of worker nodes.
 Optional if autoscalingConfiguration is used, in which case count will default to `autoscalingConfiguration.minCount`.
 
 Refer to [troubleshooting machine health check remediation not allowed]({{< relref "../../troubleshooting/troubleshooting/#machine-health-check-shows-remediation-is-not-allowed" >}}) and choose a sufficient number to allow machine health check remediation.
 
-### workerNodeGroupConfigurations.machineGroupRef (required)
+### workerNodeGroupConfigurations[*].machineGroupRef (required)
 Refers to the Kubernetes object with Snow-specific configuration for your nodes.
 See `SnowMachineConfig Fields` below.
 
-### workerNodeGroupConfigurations.name (required)
+### workerNodeGroupConfigurations[*].name (required)
 Name of the worker node group (default: md-0)
 
-### workerNodeGroupConfigurations.autoscalingConfiguration.minCount (optional)
+### workerNodeGroupConfigurations[*].autoscalingConfiguration.minCount (optional)
 Minimum number of nodes for this node group's autoscaling configuration.
 
-### workerNodeGroupConfigurations.autoscalingConfiguration.maxCount (optional)
+### workerNodeGroupConfigurations[*].autoscalingConfiguration.maxCount (optional)
 Maximum number of nodes for this node group's autoscaling configuration.
 
-### workerNodeGroupConfigurations.taints (optional)
+### workerNodeGroupConfigurations[*].taints (optional)
 A list of taints to apply to the nodes in the worker node group.
 
 Modifying the taints associated with a worker node group configuration will cause new nodes to be rolled out, replacing the existing nodes associated with the configuration.
 At least one node group must not have `NoSchedule` or `NoExecute` taints applied to it.
 
-### workerNodeGroupConfigurations.labels (optional)
+### workerNodeGroupConfigurations[*].labels (optional)
 A list of labels to apply to the nodes in the worker node group.
 This is in addition to the labels that EKS Anywhere will add by default.
 
 Modifying the labels associated with a worker node group configuration will cause new nodes to be rolled out, replacing the existing nodes associated with the configuration.
 
-### workerNodeGroupConfigurations.kubernetesVersion (optional)
+### workerNodeGroupConfigurations[*].kubernetesVersion (optional)
 The Kubernetes version you want to use for this worker node group.
 Supported values: 1.28, 1.27, 1.26, 1.25, 1.24
 
 ### externalEtcdConfiguration.count (optional)
diff --git a/docs/content/en/docs/getting-started/vsphere/vsphere-spec.md b/docs/content/en/docs/getting-started/vsphere/vsphere-spec.md
index 588527c998b0..438026c66ef0 100644
--- a/docs/content/en/docs/getting-started/vsphere/vsphere-spec.md
+++ b/docs/content/en/docs/getting-started/vsphere/vsphere-spec.md
@@ -158,38 +158,38 @@ the existing nodes.
 This takes in a list of node groups that you can define for your workers. You may define one or more worker node groups.
 
-### workerNodeGroupConfigurations.count (required)
+### workerNodeGroupConfigurations[*].count (required)
 Number of worker nodes.
 Optional if the [cluster autoscaler curated package]({{< relref "../../packages/cluster-autoscaler/addclauto" >}}) is installed and autoscalingConfiguration is used, in which case count will default to `autoscalingConfiguration.minCount`.
 
 Refer to [troubleshooting machine health check remediation not allowed]({{< relref "../../troubleshooting/troubleshooting/#machine-health-check-shows-remediation-is-not-allowed" >}}) and choose a sufficient number to allow machine health check remediation.
 
-### workerNodeGroupConfigurations.machineGroupRef (required)
+### workerNodeGroupConfigurations[*].machineGroupRef (required)
 Refers to the Kubernetes object with vSphere-specific configuration for your nodes.
 See [VSphereMachineConfig Fields](#vspheremachineconfig-fields) below.
 
-### workerNodeGroupConfigurations.name (required)
+### workerNodeGroupConfigurations[*].name (required)
 Name of the worker node group (default: md-0)
 
-### workerNodeGroupConfigurations.autoscalingConfiguration.minCount (optional)
+### workerNodeGroupConfigurations[*].autoscalingConfiguration.minCount (optional)
 Minimum number of nodes for this node group's autoscaling configuration.
 
-### workerNodeGroupConfigurations.autoscalingConfiguration.maxCount (optional)
+### workerNodeGroupConfigurations[*].autoscalingConfiguration.maxCount (optional)
 Maximum number of nodes for this node group's autoscaling configuration.
 
-### workerNodeGroupConfigurations.taints (optional)
+### workerNodeGroupConfigurations[*].taints (optional)
 A list of taints to apply to the nodes in the worker node group.
 
 Modifying the taints associated with a worker node group configuration will cause new nodes to be rolled out, replacing the existing nodes associated with the configuration.
 At least one node group must **NOT** have `NoSchedule` or `NoExecute` taints applied to it.
 
-### workerNodeGroupConfigurations.labels (optional)
+### workerNodeGroupConfigurations[*].labels (optional)
 A list of labels to apply to the nodes in the worker node group.
 This is in addition to the labels that EKS Anywhere will add by default.
 
 Modifying the labels associated with a worker node group configuration will cause new nodes to be rolled out, replacing the existing nodes associated with the configuration.
 
-### workerNodeGroupConfigurations.kubernetesVersion (optional)
+### workerNodeGroupConfigurations[*].kubernetesVersion (optional)
 The Kubernetes version you want to use for this worker node group.
 [Supported values]({{< relref "../../concepts/support-versions/#kubernetes-versions" >}}): `1.28`, `1.27`, `1.26`, `1.25`, `1.24`
 
 Must be less than or equal to the cluster `kubernetesVersion` defined at the root level of the cluster spec.
 The worker node kubernetesVersion must be no more than two minor Kubernetes versions lower than the cluster control plane's Kubernetes version.
 Removing `workerNodeGroupConfiguration.kubernetesVersion` will trigger an upgrade of the node group to the `kubernetesVersion` defined at the root level of the cluster spec.
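
---

For reviewers: the `workerNodeGroupConfigurations[*]` paths renamed throughout this patch refer to entries of a list in the cluster spec. The sketch below is illustrative only (it is not part of the patch): the field names come from the sections documented above, while the metadata, `machineGroupRef` target, and all values are hypothetical examples.

```yaml
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: my-cluster                  # illustrative name
spec:
  workerNodeGroupConfigurations:    # a list; [*] in the docs means "each entry"
  - name: md-0                      # workerNodeGroupConfigurations[*].name
    count: 2                        # optional when autoscalingConfiguration is set
    machineGroupRef:                # provider-specific machine config reference
      kind: VSphereMachineConfig    # e.g. Tinkerbell/CloudStack/Nutanix/Snow variant
      name: my-cluster-workers
    autoscalingConfiguration:       # count defaults to minCount when this is used
      minCount: 1
      maxCount: 5
    taints:                         # at least one group must lack NoSchedule/NoExecute
    - key: dedicated
      value: example
      effect: NoSchedule
    labels:                         # changing labels rolls out replacement nodes
      team: example
    kubernetesVersion: "1.27"       # <= cluster version, within two minor versions
    upgradeRolloutStrategy:
      type: RollingUpdate           # rollingUpdate settings require this type
      rollingUpdate:
        maxSurge: 1                 # maxSurge and maxUnavailable cannot both be 0
        maxUnavailable: 0
```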