<% if current_page.data.windowsclusters == true %>
A plan defines a set of resource types used for deploying a cluster.
####<a id='plan-activate'></a> Activate a Plan
<p class="note"><strong>Note</strong>: Before configuring your Windows worker plan, you must first activate and configure <strong>Plan 1</strong>.
See <a href="installing-nsx-t.html#plans">Plans</a> in <i>Installing Tanzu Kubernetes Grid Integrated Edition on vSphere with NSX</i> for more information.
</p>
<% else %>
A plan defines a set of resource types used for deploying a cluster.
####<a id='plan-activate'></a> Activate a Plan
You must first activate and configure **Plan 1**,
<% end %>
<% if current_page.data.windowsclusters != true && current_page.data.iaas != "vSphere" && current_page.data.iaas != "vSphere-NSX-T" %>
and afterwards you can optionally activate **Plan 2** through **Plan 10**.
<% end %>
<% if current_page.data.windowsclusters != true && (current_page.data.iaas == "vSphere" || current_page.data.iaas == "vSphere-NSX-T") %>
and afterwards you can optionally activate up to twelve additional plans.
<% end %>
To activate and configure a plan, perform the following steps:
<% if current_page.data.windowsclusters == true %>
1. Click the plan that you want to activate.
You must activate and configure either **Plan 11**, **Plan 12**, or **Plan 13** to deploy a Windows worker-based cluster.
1. Select **Active** to activate the plan and make it available to developers deploying clusters.<br>
![Plan pane configuration](images/plan11-win.png)
<% end %>
<% if current_page.data.windowsclusters != true && current_page.data.iaas != "vSphere" && current_page.data.iaas != "vSphere-NSX-T" %>
1. Click the plan that you want to activate.
<p class="note"><strong>Note</strong>: Plans 11, 12, and 13 support Windows worker-based Kubernetes clusters on vSphere with NSX, and are a beta feature on vSphere with Antrea.
</p>
1. Select **Active** to activate the plan and make it available to developers deploying clusters.<br>
![Plan pane configuration](images/plan1.png)
<% end %>
<% if current_page.data.windowsclusters != true && (current_page.data.iaas == "vSphere" || current_page.data.iaas == "vSphere-NSX-T") %>
1. Click the plan that you want to activate.
<p class="note"><strong>Note</strong>: Plans 11, 12, and 13 support Windows worker-based Kubernetes clusters on vSphere with NSX, and are a beta feature on vSphere with Antrea.
To configure a Windows worker plan, see <a href="windows-workers.html#plans">Plans</a> in <i>Configuring Windows Worker-Based Kubernetes Clusters</i> for more information.
</p>
1. Select **Active** to activate the plan and make it available to developers deploying clusters.<br>
![Plan pane configuration](images/plan1.png)
<% end %>
1. Under **Name**, provide a unique name for the plan.
1. Under **Description**, edit the description as needed.
The plan description appears in the Services Marketplace, which developers can access by using the TKGI CLI.
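
    For example, after you save the plan, developers can review the plan names and descriptions offered in the Services Marketplace by using the TKGI CLI. This is a minimal sketch; it assumes you have already logged in with `tkgi login`:

    ```
    # List the plans available in the Services Marketplace,
    # including each plan's name and description.
    tkgi plans
    ```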
<% if current_page.data.windowsclusters == true %>
1. Select **Enable HA Linux workers** to activate high availability Linux worker clusters.
A high availability Linux worker cluster consists of three Linux worker nodes.
- Windows workers are mediated by one or three Linux workers.
- For an illustration of how Linux workers connect Windows workers to their control plane node, see [Windows Worker-Based Kubernetes Cluster High Availability](./control-plane.html#windows-ha).
<% end %>
1. Under **Master/ETCD Node Instances**, select the default number of Kubernetes control plane/etcd nodes to provision for each cluster.
You can enter <code>1</code>, <code>3</code>, or <code>5</code>.
<p class="note"><strong>Note</strong>: If you deploy a cluster with multiple control plane/etcd node VMs,
confirm that you have sufficient hardware to handle the increased disk write and network traffic. For more information, see <a href="https://github.com/etcd-io/etcd/blob/master/Documentation/op-guide/hardware.md#example-hardware-configurations">Hardware recommendations</a> in the etcd documentation.<br><br>
In addition to meeting the hardware requirements for a multi-control plane node cluster, we recommend configuring etcd monitoring to track disk latency, network latency, and other indicators of cluster health. For more information, see <a href="monitor-etcd.html">Configuring Telegraf in TKGI</a>.</p>
<p class="note warning"><strong>WARNING</strong>: To change the number of control plane/etcd nodes for a plan, you must ensure that no existing clusters use the plan. Tanzu Kubernetes Grid Integrated Edition does not support changing the number of control plane/etcd nodes for plans with existing clusters.
</p>
1. Under **Master/ETCD VM Type**, select the type of VM to use for Kubernetes control plane/etcd nodes. For more information, including control plane node VM customization options, see the [Control Plane Node VM Size](vm-sizing.html#master-sizing) section of _VM Sizing for Tanzu Kubernetes Grid Integrated Edition Clusters_.
1. Under **Master Persistent Disk Type**, select the size of the persistent disk for the Kubernetes control plane node VM.
1. Under **Master/ETCD Availability Zones**, select one or more AZs for the Kubernetes clusters deployed by Tanzu Kubernetes Grid Integrated Edition.
If you select more than one AZ, Tanzu Kubernetes Grid Integrated Edition deploys the control plane VM in the first AZ and the worker VMs across the remaining AZs.
If you are using multiple control plane nodes, <%= vars.product_short %> deploys the control plane and worker VMs across the AZs in round-robin fashion.
<p class="note"><strong>Note:</strong> Tanzu Kubernetes Grid Integrated Edition does not support changing the AZs of existing control plane nodes.</p>
1. Under **Maximum number of workers on a cluster**, set the maximum number of
Kubernetes worker node VMs that Tanzu Kubernetes Grid Integrated Edition can deploy for each cluster. Enter any whole number in this field.
<br>
<% if current_page.data.windowsclusters == true %>
![Plan pane configuration, part two](images/plan2-win.png)
<% else %>
![Plan pane configuration, part two](images/plan2.png)
<% end %>
<br>
1. Under **Worker Node Instances**, specify the default number of Kubernetes worker nodes the TKGI CLI provisions for each cluster.
The **Worker Node Instances** setting must be less than or equal to the **Maximum number of workers on a cluster** setting.
<br>
For high availability, create clusters with a minimum of three worker nodes, or two per AZ if you intend to use PersistentVolumes (PVs). For example, if you deploy across three AZs, you must have six worker nodes. For more information about PVs, see [PersistentVolumes](maintain-uptime.html#persistent-volumes) in *Maintaining Workload Uptime*. Provisioning a minimum of three worker nodes, or two nodes per AZ, is also recommended for stateless workloads.
<br><br>
For more information about creating clusters, see [Creating Clusters](create-cluster.html).
<p class="note"><strong>Note</strong>: Changing a plan's <strong>Worker Node Instances</strong>
setting does not alter the number of worker nodes on existing clusters.
For information about scaling an existing cluster, see
[Scale Horizontally by Changing the Number of Worker Nodes Using the TKGI CLI](scale-clusters.html#scale-horizontal)
in _Scaling Existing Clusters_.
</p>
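
    For reference, the following sketch shows how the worker-count settings surface in the TKGI CLI when developers create and later scale a cluster. The cluster name, external hostname, and plan name are placeholders:

    ```
    # Create a cluster from this plan with an explicit worker count.
    # The value of --num-nodes must not exceed the plan's
    # "Maximum number of workers on a cluster" setting.
    tkgi create-cluster my-cluster \
      --external-hostname my-cluster.example.com \
      --plan my-plan \
      --num-nodes 3

    # Scale the existing cluster horizontally to five worker nodes.
    tkgi resize my-cluster --num-nodes 5
    ```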
1. Under **Worker VM Type**, select the type of VM to use for Kubernetes worker node VMs.
For more information, including worker node VM customization options,
see [Worker Node VM Number and Size](vm-sizing.html#worker-sizing) in _VM Sizing for Tanzu Kubernetes Grid Integrated Edition Clusters_.
<p class="note"><strong>Note</strong>:
Tanzu Kubernetes Grid Integrated Edition requires a <strong>Worker VM Type</strong> with an ephemeral disk size of 32 GB or more.
</p>
<% if current_page.data.windowsclusters == true %>
<p class="note"><strong>Note:</strong> BOSH does not support persistent disks for Windows VMs.
If specifying **Worker Persistent Disk Type** on a Windows worker is a requirement for you,
submit feedback by sending an email to pcf-windows@pivotal.io.
</p>
<% else %>
1. Under **Worker Persistent Disk Type**, select the size of the persistent disk for the Kubernetes worker node VMs.
<% end %>
1. Under **Worker Availability Zones**, select one or more AZs for the Kubernetes worker nodes. Tanzu Kubernetes Grid Integrated Edition deploys worker nodes equally across the AZs you select.
1. Under **Kubelet customization - system-reserved**, enter resource values that Kubelet can use to reserve resources for system daemons.
For example, `memory=250Mi, cpu=150m`. For more information about system-reserved values,
see the [Kubernetes documentation](https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/#system-reserved).
<% if current_page.data.windowsclusters == true %>
![Plan pane configuration, part two](images/plan2b-win.png)
<% else %>
![Plan pane configuration, part two](images/plan2b.png)
<% end %>
1. Under **Kubelet customization - eviction-hard**, enter threshold limits that Kubelet can use to evict pods when they exceed the limit. Enter limits in the format `EVICTION-SIGNAL=QUANTITY`. For example, `memory.available=100Mi, nodefs.available=10%, nodefs.inodesFree=5%`.
- In offline environments, include `imagefs.available=15%` to prevent the Kubelet garbage collector from deleting images when disk usage is high, as described in [Core Images Deleted by Garbage Collector Are Not Reloaded in TKGI Air-Gapped Environment](https://knowledge.broadcom.com/external/article?articleNumber=380917) in the Broadcom Support Knowledge Base.
- For more information about eviction thresholds, see the [Kubernetes documentation](https://kubernetes.io/docs/tasks/administer-cluster/out-of-resource/#hard-eviction-thresholds).
<p class="note warning"><strong>WARNING</strong>: Use the Kubelet customization fields with caution. If you enter values that are invalid or that exceed the limits the system supports, Kubelet might fail to start. If Kubelet fails to start, you cannot create clusters.</p>
<% if current_page.data.windowsclusters == true %>
1. Under **Kubelet customization - Windows pause image location**, enter the location of your Windows pause image.
The **Kubelet customization - Windows pause image location** default value of `mcr.microsoft.com/k8s/core/pause:3.6`
configures Tanzu Kubernetes Grid Integrated Edition to pull the Windows pause image from the Microsoft Docker registry.
<br>The Microsoft Docker registry cannot be accessed from within air-gapped environments.
If you want to deploy Windows pods in an air-gapped environment, you must upload a Windows pause image to
an accessible private registry and configure the **Kubelet customization -
Windows pause image location** field with the URI of this accessible Windows pause image.
For more information about uploading a Windows pause image to a private registry, see
[Using a Windows Pause Image for an Air-Gapped Environment](windows-pause-internetless.html).
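
    The following is a minimal sketch of relocating the pause image to a private registry, using `crane` from go-containerregistry as one possible copy tool; the private registry hostname and repository path are placeholders, and the linked topic above describes the supported procedure:

    ```
    # Copy the Windows pause image, including all platform layers,
    # from the Microsoft registry to a reachable private registry.
    crane copy mcr.microsoft.com/k8s/core/pause:3.6 \
      registry.example.com/library/pause:3.6
    ```

    You would then enter `registry.example.com/library/pause:3.6` in the **Kubelet customization - Windows pause image location** field.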
<% end %>
1. Under **Errand VM Type**, select the size of the VM that contains the errand.
The smallest instance possible is sufficient, as the only errand running on this VM is the one that applies the **Default Cluster App** YAML configuration.
1. (Optional) Under **(Optional) Add-ons - Use with caution**, enter additional YAML configuration to add custom workloads to each cluster in this plan.
You can specify multiple files using `---` as a separator.
For more information, see [Adding Custom Linux Workloads](custom-workloads.html).
<% if (current_page.data.iaas == "vSphere-NSX-T" || current_page.data.iaas == "vSphere") %>
1. (Optional) Select the **Allow Privileged** option to allow users to either create Pods with privileged containers
or create clusters with resizable persistent volumes using a manually installed vSphere CSI driver.
For more information about privileged mode, see [Pods](https://kubernetes.io/docs/concepts/workloads/pods/pod/#privileged-mode-for-pod-containers)
in the Kubernetes documentation. <br><br>
**Allow Privileged** is not required if clusters use the automatically installed vSphere CSI driver for vSphere CNS.
For information about using the automatically installed vSphere CSI driver, see [Storage](#storage-config) below and
[Deploying Cloud Native Storage (CNS) on vSphere](vsphere-cns.html). <br><br>
<% end %>
<% if (current_page.data.iaas == "AWS" || current_page.data.iaas == "Azure") %>
1. (Optional) To allow users to create pods with privileged containers, select the **Allow Privileged** option.
For more information, see [Pods](https://kubernetes.io/docs/concepts/workloads/pods/pod/#privileged-mode-for-pod-containers)
in the Kubernetes documentation.
<% end %>
<% if current_page.data.windowsclusters == true %>
<p class="note"><strong>Note:</strong> Windows in Kubernetes does not support privileged containers.
See <a href="https://kubernetes.io/docs/setup/production-environment/windows/intro-windows-in-kubernetes/#feature-restrictions">Feature Restrictions</a>
in the Kubernetes documentation for additional information.
</p>
<% end %>
<% if current_page.data.windowsclusters != true %>
1. (Optional) Under **Node Drain Timeout(mins)**, enter the timeout in minutes for the node to drain pods.
If you set this value to `0`, the node drain does not terminate.
![Node Drain Timeout fields](images/node-drain.png)
1. (Optional) Under **Pod Shutdown Grace Period (seconds)**, enter a timeout in seconds for the node to wait before it forces the pod to terminate. If you set this value to `-1`, the default timeout is set to the one specified by the pod.
1. (Optional) To configure when the node drains, activate the following:
* **Force node to drain even if it has running pods not managed by a ReplicationController, ReplicaSet, Job, DaemonSet or StatefulSet.**
* **Force node to drain even if it has running DaemonSet-managed pods.**
* **Force node to drain even if it has running pods using emptyDir.**
* **Force node to drain even if pods are still running after timeout.**
<p class="note warning"><strong>Warning:</strong> If you select <strong>Force node to drain even if pods are still running after timeout</strong>, the node halts all running workloads on pods.
Before enabling this configuration, set <strong>Node Drain Timeout</strong> to a value greater than <code>0</code>.</p>
For more information about configuring default node drain behavior, see [Worker Node Hangs Indefinitely](./troubleshoot-issues.html#upgrade-drain-hangs) in _Troubleshooting_.
<% end %>
1. Click **Save**.
<% if current_page.data.windowsclusters != true %>
####<a id='plan-deactivate'></a> Deactivate a Plan
To deactivate a plan, perform the following steps:
1. Click the plan that you want to deactivate.
1. Select **Inactive**.
1. Click **Save**.
<% end %>