diff --git a/docs/capx/latest b/docs/capx/latest index 4ba4befb..9ac194b0 120000 --- a/docs/capx/latest +++ b/docs/capx/latest @@ -1 +1 @@ -v1.2.x \ No newline at end of file +v1.3.x \ No newline at end of file diff --git a/docs/capx/v1.2.x/validated_integrations.md b/docs/capx/v1.2.x/validated_integrations.md index 39a314ff..56aa5f24 100644 --- a/docs/capx/v1.2.x/validated_integrations.md +++ b/docs/capx/v1.2.x/validated_integrations.md @@ -27,30 +27,31 @@ Nutanix follows the version validation policies below: ## Validated versions ### Cluster-API -| CAPX | CAPI v1.1.4+ | CAPI v1.2.x | CAPI v1.3.x | -|--------|--------------|-------------|-------------| -| v1.2.x | No | No | Yes | -| v1.1.x | No | Yes | Yes | -| v1.0.x | Yes | Yes | No | -| v0.5.x | Yes | Yes | No | +| CAPX | CAPI v1.1.4+ | CAPI v1.2.x | CAPI v1.3.x | CAPI v1.4.x | CAPI v1.5.x | +|--------|--------------|-------------|-------------|-------------|-------------| +| v1.2.x | No | No | Yes | Yes | Yes | +| v1.1.x | No | Yes | Yes | No | No | +| v1.0.x | Yes | Yes | No | No | No | +| v0.5.x | Yes | Yes | No | No | No | See the [Validated Kubernetes Versions](https://cluster-api.sigs.k8s.io/reference/versions.html?highlight=version#supported-kubernetes-versions){target=_blank} page for more information on CAPI validated versions. 
### AOS -| CAPX | 5.20.4.5 (LTS) | 6.1.1.5 (STS) | 6.5.x (LTS) | -|--------|----------------|---------------|---------------| -| v1.2.x | No | No | Yes | -| v1.1.x | No | No | Yes | -| v1.0.x | Yes | Yes | No | -| v0.5.x | Yes | Yes | No | +| CAPX | 5.20.4.5 (LTS) | 6.1.1.5 (STS) | 6.5.x (LTS) |6.6 (STS) |6.7 (STS) | +|--------|----------------|---------------|---------------|-------------|-------------| +| v1.2.x | No | No | Yes | Yes | Yes | +| v1.1.x | No | No | Yes | No | No | +| v1.0.x | Yes | Yes | No | No | No | +| v0.5.x | Yes | Yes | No | No | No | ### Prism Central -| CAPX | 2022.1.0.2 | pc.2022.6 | -|--------|------------|-----------| -| v1.2.x | No | Yes | -| v1.1.x | No | Yes | -| v1.0.x | Yes | Yes | -| v0.5.x | Yes | Yes | +| CAPX | 2022.1.0.2 | pc.2022.6 |pc.2022.9 |pc.2023.x | +|--------|------------|-----------|----------|----------| +| v1.2.x | No | Yes |Yes |Yes | +| v1.1.x | No | Yes |No |No | +| v1.0.x | Yes | Yes |No |No | +| v0.5.x | Yes | Yes |No |No | + diff --git a/docs/capx/v1.3.x/addons/install_csi_driver.md b/docs/capx/v1.3.x/addons/install_csi_driver.md new file mode 100644 index 00000000..afb4bdc8 --- /dev/null +++ b/docs/capx/v1.3.x/addons/install_csi_driver.md @@ -0,0 +1,215 @@ +# Nutanix CSI Driver installation with CAPX + +The Nutanix CSI driver is fully supported on CAPI/CAPX deployed clusters where all the nodes meet the [Nutanix CSI driver prerequisites](#capi-workload-cluster-prerequisites-for-the-nutanix-csi-driver). + +There are three methods to install the Nutanix CSI driver on a CAPI/CAPX cluster: + +- Helm +- ClusterResourceSet +- CAPX Flavor + +For more information, check the next sections. 
+ +## CAPI Workload cluster prerequisites for the Nutanix CSI Driver + +Kubernetes worker nodes need the following prerequisites to use the Nutanix CSI driver: + +- iSCSI initiator package (for Volumes-based block storage) +- NFS client package (for Files-based storage) + +These packages may already be present in the image you use with your infrastructure provider, or you can rely on your bootstrap provider to install them. More information is available in the [Prerequisites docs](https://portal.nutanix.com/page/documents/details?targetId=CSI-Volume-Driver-v2_6:csi-csi-plugin-prerequisites-r.html){target=_blank}. + +The package names and installation method vary depending on the operating system you plan to use. + +In the example below, the `kubeadm` bootstrap provider is used to deploy these packages on top of an Ubuntu 20.04 image. The `kubeadm` bootstrap provider allows defining `preKubeadmCommands` that will be launched before Kubernetes cluster creation. These `preKubeadmCommands` can be defined both in `KubeadmControlPlane` for control plane nodes and in `KubeadmConfigTemplate` for worker nodes. + +For an Ubuntu 20.04 image, both `KubeadmControlPlane` and `KubeadmConfigTemplate` must be modified as in the example below: + +```yaml +spec: + template: + spec: + # ....... + preKubeadmCommands: + - echo "before kubeadm call" > /var/log/prekubeadm.log + - apt update + - apt install -y nfs-common open-iscsi + - systemctl enable --now iscsid +``` +## Install the Nutanix CSI Driver with Helm + +A recent [Helm](https://helm.sh){target=_blank} version is needed (tested with Helm v3.10.1). + +The example below must be applied on a ready workload cluster.
The workload cluster's kubeconfig can be retrieved and used to connect with the following command: + +```shell +clusterctl get kubeconfig $CLUSTER_NAME -n $CLUSTER_NAMESPACE > $CLUSTER_NAME-KUBECONFIG +export KUBECONFIG=$(pwd)/$CLUSTER_NAME-KUBECONFIG +``` + +Once connected to the cluster, follow the [CSI documentation](https://portal.nutanix.com/page/documents/details?targetId=CSI-Volume-Driver-v2_6:csi-csi-driver-install-t.html){target=_blank}. + +First, install the [nutanix-csi-snapshot](https://github.com/nutanix/helm/tree/master/charts/nutanix-csi-snapshot){target=_blank} chart followed by the [nutanix-csi-storage](https://github.com/nutanix/helm/tree/master/charts/nutanix-csi-storage){target=_blank} chart. + +See an example below: + +```shell +#Add the official Nutanix Helm repo and get the latest update +helm repo add nutanix https://nutanix.github.io/helm/ +helm repo update + +# Install the nutanix-csi-snapshot chart +helm install nutanix-csi-snapshot nutanix/nutanix-csi-snapshot -n ntnx-system --create-namespace + +# Install the nutanix-csi-storage chart +helm install nutanix-storage nutanix/nutanix-csi-storage -n ntnx-system --set createSecret=false +``` + +!!! warning + For correct Nutanix CSI driver deployment, a fully functional CNI deployment must be present. + +## Install the Nutanix CSI Driver with `ClusterResourceSet` + +The `ClusterResourceSet` feature was introduced to automatically apply a set of resources (such as CNI/CSI) defined by administrators to matching created/existing workload clusters. + +### Enabling the `ClusterResourceSet` feature + +At the time of writing, `ClusterResourceSet` is an experimental feature that must be enabled during the initialization of a management cluster with the `EXP_CLUSTER_RESOURCE_SET` feature gate. + +To do this, add `EXP_CLUSTER_RESOURCE_SET: "true"` in the `clusterctl` configuration file or just `export EXP_CLUSTER_RESOURCE_SET=true` before initializing the management cluster with `clusterctl init`. 
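The same setting expressed in the `clusterctl` configuration file is a plain key/value entry. A minimal sketch is shown below; note that the default file location varies between `clusterctl` releases (`~/.cluster-api/clusterctl.yaml` is a commonly used path):

```yaml
# clusterctl configuration file (path may vary by clusterctl release)
# Enables the experimental ClusterResourceSet feature for `clusterctl init`
EXP_CLUSTER_RESOURCE_SET: "true"
```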
+ +If the management cluster is already initialized, the `ClusterResourceSet` can be enabled by changing the configuration of the `capi-controller-manager` deployment in the `capi-system` namespace. + + ```shell + kubectl edit deployment -n capi-system capi-controller-manager + ``` + +Locate the section below: + +```yaml + - args: + - --leader-elect + - --metrics-bind-addr=localhost:8080 + - --feature-gates=MachinePool=false,ClusterResourceSet=false,ClusterTopology=false +``` + +Then replace `ClusterResourceSet=false` with `ClusterResourceSet=true`. + +!!! note + Editing the `deployment` resource will cause Kubernetes to automatically start new versions of the containers with the feature enabled. + + + +### Prepare the Nutanix CSI `ClusterResourceSet` + +#### Create the `ConfigMap` for the CSI Plugin + +First, create a `ConfigMap` that contains a YAML manifest with all resources to install the Nutanix CSI driver. + +Since the Nutanix CSI Driver is provided as a Helm chart, use `helm` to extract it before creating the `ConfigMap`. See an example below: + +```shell +helm repo add nutanix https://nutanix.github.io/helm/ +helm repo update + +kubectl create ns ntnx-system --dry-run=client -o yaml > nutanix-csi-namespace.yaml +helm template nutanix-csi-snapshot nutanix/nutanix-csi-snapshot -n ntnx-system > nutanix-csi-snapshot.yaml +helm template nutanix-csi-storage nutanix/nutanix-csi-storage -n ntnx-system > nutanix-csi-storage.yaml + +kubectl create configmap nutanix-csi-crs --from-file=nutanix-csi-namespace.yaml --from-file=nutanix-csi-snapshot.yaml --from-file=nutanix-csi-storage.yaml +``` + +#### Create the `ClusterResourceSet` + +Next, create the `ClusterResourceSet` resource that will map the `ConfigMap` defined above to clusters using a `clusterSelector`. + +The `ClusterResourceSet` needs to be created inside the management cluster.
See an example below: + +```yaml +--- +apiVersion: addons.cluster.x-k8s.io/v1beta1 +kind: ClusterResourceSet +metadata: + name: nutanix-csi-crs +spec: + clusterSelector: + matchLabels: + csi: nutanix + resources: + - kind: ConfigMap + name: nutanix-csi-crs +``` + +The `clusterSelector` field controls how Cluster API will match this `ClusterResourceSet` on one or more workload clusters. In the example scenario, the `matchLabels` approach is used: the `ClusterResourceSet` will be applied to all workload clusters that have the `csi: nutanix` label present. If the label isn't present, the `ClusterResourceSet` won't apply to that workload cluster. + +The `resources` field references the `ConfigMap` created above, which contains the manifests for installing the Nutanix CSI driver. + +#### Assign the `ClusterResourceSet` to a workload cluster + +Assign this `ClusterResourceSet` to the workload cluster by adding the correct label to the `Cluster` resource. + +This can be done before workload cluster creation by editing the output of the `clusterctl generate cluster` command or by modifying an already deployed workload cluster. + +In both cases, `Cluster` resources should look like this: + +```yaml +apiVersion: cluster.x-k8s.io/v1beta1 +kind: Cluster +metadata: + name: workload-cluster-name + namespace: workload-cluster-namespace + labels: + csi: nutanix +# ... +``` + +!!! warning + For correct Nutanix CSI driver deployment, a fully functional CNI deployment must be present. + +## Install the Nutanix CSI Driver with a CAPX flavor + +The CAPX provider can utilize a flavor to automatically deploy the Nutanix CSI driver using a `ClusterResourceSet`. + +### Prerequisites + +The following requirements must be met: + +- The operating system must meet the [Nutanix CSI OS prerequisites](#capi-workload-cluster-prerequisites-for-the-nutanix-csi-driver).
+ +- The management cluster must be initialized with the [`EXP_CLUSTER_RESOURCE_SET` feature gate](#enabling-the-clusterresourceset-feature) enabled. + +### Installation + +Specify the `csi` flavor during workload cluster creation. See an example below: + +```shell +clusterctl generate cluster my-cluster -f csi +``` + +Additional environment variables are required: + +- `WEBHOOK_CA`: Base64 encoded CA certificate used to sign the webhook certificate +- `WEBHOOK_CERT`: Base64 encoded certificate for the webhook validation component +- `WEBHOOK_KEY`: Base64 encoded key for the webhook validation component + +The three components referenced above can be automatically created and referenced using [this script](https://github.com/nutanix-cloud-native/cluster-api-provider-nutanix/blob/main/scripts/gen-self-cert.sh){target=_blank}: + +```shell +source scripts/gen-self-cert.sh +``` + +The certificate must reference the following names: + +- csi-snapshot-webhook +- csi-snapshot-webhook.ntnx-system +- csi-snapshot-webhook.ntnx-system.svc + +!!! warning + For correct Nutanix CSI driver deployment, a fully functional CNI deployment must be present. + +## Nutanix CSI Driver Configuration + +After the driver is installed, it must be configured for use by defining, at a minimum, a `Secret` and a `StorageClass`. + +This can be done manually in the workload clusters or by using a `ClusterResourceSet` in the management cluster as explained above. + +See the official [CSI Driver documentation](https://portal.nutanix.com/page/documents/details?targetId=CSI-Volume-Driver-v2_6:CSI-Volume-Driver-v2_6){target=_blank} on the Nutanix Portal for more configuration information.
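To sketch what that minimal configuration looks like, the fragment below shows the general shape of a CSI `Secret` and `StorageClass`. All values are hypothetical placeholders (Prism Element address, credentials, and storage container name); refer to the CSI Driver documentation linked above for the authoritative parameter list:

```yaml
# Hypothetical example only: replace the Prism Element address, credentials,
# and storage container name with values from your environment.
apiVersion: v1
kind: Secret
metadata:
  name: ntnx-secret
  namespace: ntnx-system
stringData:
  # Format: <prism-element-ip>:<port>:<username>:<password>
  key: 10.0.0.10:9440:admin:password
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nutanix-volume
provisioner: csi.nutanix.com
parameters:
  storageType: NutanixVolumes
  storageContainer: default-container
  csi.storage.k8s.io/fstype: ext4
  csi.storage.k8s.io/provisioner-secret-name: ntnx-secret
  csi.storage.k8s.io/provisioner-secret-namespace: ntnx-system
  csi.storage.k8s.io/node-publish-secret-name: ntnx-secret
  csi.storage.k8s.io/node-publish-secret-namespace: ntnx-system
  csi.storage.k8s.io/controller-expand-secret-name: ntnx-secret
  csi.storage.k8s.io/controller-expand-secret-namespace: ntnx-system
allowVolumeExpansion: true
reclaimPolicy: Delete
```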
diff --git a/docs/capx/v1.3.x/credential_management.md b/docs/capx/v1.3.x/credential_management.md new file mode 100644 index 00000000..bebbc5a0 --- /dev/null +++ b/docs/capx/v1.3.x/credential_management.md @@ -0,0 +1,93 @@ +# Credential Management +Cluster API Provider Nutanix Cloud Infrastructure (CAPX) interacts with Nutanix Prism Central (PC) APIs to manage the required Kubernetes cluster infrastructure resources. + +PC credentials are required to authenticate to the PC APIs. CAPX currently supports two mechanisms to supply the required credentials: + +- Credentials injected into the CAPX manager deployment +- Workload cluster specific credentials + +## Credentials injected into the CAPX manager deployment +By default, credentials will be injected into the CAPX manager deployment when CAPX is initialized. See the [getting started guide](./getting_started.md) for more information on the initialization. + +Upon initialization, a `nutanix-creds` secret will automatically be created in the `capx-system` namespace. This secret will contain the values supplied via the `NUTANIX_USER` and `NUTANIX_PASSWORD` parameters. + +The `nutanix-creds` secret will be used for workload cluster deployment if no other credential is supplied. + +### Example +An example of the automatically created `nutanix-creds` secret can be found below: +```yaml +--- +apiVersion: v1 +kind: Secret +type: Opaque +metadata: + name: nutanix-creds + namespace: capx-system +stringData: + credentials: | + [ + { + "type": "basic_auth", + "data": { + "prismCentral":{ + "username": "", + "password": "" + }, + "prismElements": null + } + } + ] +``` + +## Workload cluster specific credentials +Users can override the [credentials injected into the CAPX manager deployment](#credentials-injected-into-the-capx-manager-deployment) by supplying a credential specific to a workload cluster. The credentials can be supplied by creating a secret in the same namespace as the `NutanixCluster` resource.
+ +The secret can be referenced by adding a `credentialRef` inside the `prismCentral` attribute contained in the `NutanixCluster`. +The secret will also be deleted when the `NutanixCluster` is deleted. + +!!! note + There is a 1:1 relation between the secret and the `NutanixCluster` object. + +### Example +Create a secret in the namespace of the `NutanixCluster`: + +```yaml +--- +apiVersion: v1 +kind: Secret +metadata: + name: "" + namespace: "" +stringData: + credentials: | + [ + { + "type": "basic_auth", + "data": { + "prismCentral":{ + "username": "", + "password": "" + }, + "prismElements": null + } + } + ] +``` + +Add a `prismCentral` and corresponding `credentialRef` to the `NutanixCluster`: + +```yaml +apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 +kind: NutanixCluster +metadata: + name: "" + namespace: "" +spec: + prismCentral: + ... + credentialRef: + name: "" + kind: Secret +... +``` + +See the [NutanixCluster](./types/nutanix_cluster.md) documentation for all supported configuration parameters for the `prismCentral` and `credentialRef` attributes. \ No newline at end of file diff --git a/docs/capx/v1.3.x/experimental/autoscaler.md b/docs/capx/v1.3.x/experimental/autoscaler.md new file mode 100644 index 00000000..2af57213 --- /dev/null +++ b/docs/capx/v1.3.x/experimental/autoscaler.md @@ -0,0 +1,129 @@ +# Using Autoscaler in combination with CAPX + +!!! warning + The scenario and features described on this page are experimental. It's important to note that they have not been fully validated. + +[Autoscaler](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/clusterapi/README.md){target=_blank} can be used in combination with Cluster API to automatically add or remove machines in a cluster. + +Autoscaler can be used in different deployment scenarios. This page provides an overview of several Autoscaler deployment scenarios in combination with CAPX.
+ +See the [Testing](#testing) section to see how scale-up/scale-down events can be triggered to validate the autoscaler behaviour. + +More in-depth information on Autoscaler functionality can be found in the [Kubernetes documentation](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/clusterapi/README.md){target=_blank}. + +All Autoscaler configuration parameters can be found [here](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#what-are-the-parameters-to-ca){target=_blank}. + +## Scenario 1: Management cluster managing an external workload cluster +In this scenario, Autoscaler runs on a management cluster and manages an external workload cluster. See the management cluster managing an external workload cluster section of the [Kubernetes documentation](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/clusterapi/README.md#autoscaler-running-in-management-cluster-using-service-account-credentials-with-separate-workload-cluster){target=_blank} for more information. + +### Steps +1. Deploy a management cluster and workload cluster. The [CAPI quickstart](https://cluster-api.sigs.k8s.io/user/quick-start.html){target=_blank} can be used as a starting point. + + !!! note + Make sure a CNI is installed in the workload cluster. + +2. Download the example [Autoscaler deployment file](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/clusterapi/examples/deployment.yaml){target=_blank}. +3. Modify the `deployment.yaml` file: + - Change the namespace of all resources to the namespace of the workload cluster. + - Choose an autoscaler image.
+ - Change the following parameters in the `Deployment` resource: +```YAML + spec: + containers: + - name: cluster-autoscaler + command: + - /cluster-autoscaler + args: + - --cloud-provider=clusterapi + - --kubeconfig=/mnt/kubeconfig/kubeconfig.yml + - --clusterapi-cloud-config-authoritative + - -v=1 + volumeMounts: + - mountPath: /mnt/kubeconfig + name: kubeconfig + readOnly: true + ... + volumes: + - name: kubeconfig + secret: + secretName: <cluster-name>-kubeconfig + items: + - key: value + path: kubeconfig.yml +``` +4. Apply the `deployment.yaml` file. +```bash +kubectl apply -f deployment.yaml +``` +5. Add the [annotations](#autoscaler-node-group-annotations) to the workload cluster `MachineDeployment` resource. +6. Test Autoscaler. Go to the [Testing](#testing) section. + +## Scenario 2: Autoscaler running on workload cluster +In this scenario, Autoscaler will be deployed [on top of the workload cluster](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/clusterapi/README.md#autoscaler-running-in-a-joined-cluster-using-service-account-credentials){target=_blank} directly. In order for Autoscaler to work, it is required that the workload cluster resources are moved from the management cluster to the workload cluster. + +### Steps +1. Deploy a management cluster and workload cluster. The [CAPI quickstart](https://cluster-api.sigs.k8s.io/user/quick-start.html){target=_blank} can be used as a starting point. +2. Get the kubeconfig file for the workload cluster and use this kubeconfig to log in to the workload cluster. +```bash +clusterctl get kubeconfig <cluster-name> -n <namespace> > /path/to/kubeconfig +``` +3. Install a CNI in the workload cluster. +4. Initialise the CAPX components on top of the workload cluster: +```bash +clusterctl init --infrastructure nutanix +``` +5. Migrate the workload cluster custom resources to the workload cluster. Run the following command from the management cluster: +```bash +clusterctl move -n <namespace> --to-kubeconfig /path/to/kubeconfig +``` +6.
Verify if the cluster has been migrated by running the following command on the workload cluster: +```bash +kubectl get cluster -A +``` +7. Download the example [autoscaler deployment file](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/clusterapi/examples/deployment.yaml){target=_blank}. +8. Create the Autoscaler namespace: +```bash +kubectl create ns autoscaler +``` +9. Apply the `deployment.yaml` file: +```bash +kubectl apply -f deployment.yaml +``` +10. Add the [annotations](#autoscaler-node-group-annotations) to the workload cluster `MachineDeployment` resource. +11. Test Autoscaler. Go to the [Testing](#testing) section. + +## Testing + +1. Deploy an example Kubernetes application. For example, the one used in the [Kubernetes HorizontalPodAutoscaler Walkthrough](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/). +```bash +kubectl apply -f https://k8s.io/examples/application/php-apache.yaml +``` +2. Increase the number of replicas of the application to trigger a scale-up event: +```bash +kubectl scale deployment php-apache --replicas 100 +``` +3. Decrease the number of replicas of the application again to trigger a scale-down event. + + !!! note + In case of issues, check the logs of the Autoscaler pods. + +4. After a while, CAPX will add more machines. Refer to the [Autoscaler configuration parameters](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#what-are-the-parameters-to-ca){target=_blank} to tweak the behaviour and timeouts.
+ +## Autoscaler node group annotations +Autoscaler uses the following annotations to define the upper and lower boundaries of the managed machines: + +| Annotation | Example Value | Description | +|-------------------------------------------------------------|---------------|-----------------------------------------------| +| cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size | 5 | Maximum number of machines in this node group | +| cluster.x-k8s.io/cluster-api-autoscaler-node-group-min-size | 1 | Minimum number of machines in this node group | + +These annotations must be applied to the `MachineDeployment` resources of a CAPX cluster. + +### Example +```YAML +apiVersion: cluster.x-k8s.io/v1beta1 +kind: MachineDeployment +metadata: + annotations: + cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size: "5" + cluster.x-k8s.io/cluster-api-autoscaler-node-group-min-size: "1" +``` \ No newline at end of file diff --git a/docs/capx/v1.3.x/experimental/capx_multi_pe.md b/docs/capx/v1.3.x/experimental/capx_multi_pe.md new file mode 100644 index 00000000..bd52ccd7 --- /dev/null +++ b/docs/capx/v1.3.x/experimental/capx_multi_pe.md @@ -0,0 +1,30 @@ +# Creating a workload CAPX cluster spanning Prism Element clusters + +!!! warning + The scenario and features described on this page are experimental. It's important to note that they have not been fully validated. + +This page will explain how to deploy CAPX-based Kubernetes clusters where worker nodes span multiple Prism Element (PE) clusters. + +!!! note + All the PE clusters must be managed by the same Prism Central (PC) instance. + +The topology will look like this: + +- One PC managing multiple PEs +- One CAPI management cluster +- One CAPI workload cluster with multiple `MachineDeployment` resources + +Refer to the [CAPI quickstart](https://cluster-api.sigs.k8s.io/user/quick-start.html){target=_blank} to get started with CAPX.
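To make the per-PE part of such a topology concrete, the sketch below shows a `NutanixMachineTemplate` carrying the Prism Element specific parameters; the template name, PE cluster name, image, and subnet are hypothetical placeholders, and a matching `MachineDeployment` would point its `infrastructureRef` at this template:

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: NutanixMachineTemplate
metadata:
  name: mycluster-mt-pe2        # hypothetical: template for the second PE
  namespace: default
spec:
  template:
    spec:
      vcpusPerSocket: 1
      vcpuSockets: 2
      memorySize: 4Gi
      systemDiskSize: 40Gi
      # Prism Element specific parameters: the PE cluster itself,
      # the OS image to clone, and a subnet on that PE
      cluster:
        type: name
        name: pe-cluster-2                 # hypothetical PE cluster name
      image:
        type: name
        name: ubuntu-2004-kube-v1.25.6     # hypothetical image name
      subnet:
        - type: name
          name: pe2-subnet                 # hypothetical subnet name
```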
+ +To create workload clusters spanning multiple Prism Element clusters, it is required to create a `MachineDeployment` and `NutanixMachineTemplate` resource for each Prism Element cluster. The Prism Element specific parameters (name/UUID, subnet, ...) are referenced in the `NutanixMachineTemplate`. + +## Steps +1. Create a management cluster that has the CAPX infrastructure provider deployed. +2. Create a `cluster.yml` file containing the workload cluster definition. Refer to the steps defined in the [CAPI quickstart guide](https://cluster-api.sigs.k8s.io/user/quick-start.html){target=_blank} to create an example `cluster.yml` file. +3. Add additional `MachineDeployment` and `NutanixMachineTemplate` resources. + + By default, there is only one machine template and machine deployment defined. To add nodes residing on another Prism Element cluster, a new `MachineDeployment` and `NutanixMachineTemplate` resource must be added to the YAML file. The autogenerated `MachineDeployment` and `NutanixMachineTemplate` resource definitions can be used as a baseline. + + Make sure to modify the `MachineDeployment` and `NutanixMachineTemplate` parameters. + +4. Apply the modified `cluster.yml` file to the management cluster. diff --git a/docs/capx/v1.3.x/experimental/oidc.md b/docs/capx/v1.3.x/experimental/oidc.md new file mode 100644 index 00000000..0c274121 --- /dev/null +++ b/docs/capx/v1.3.x/experimental/oidc.md @@ -0,0 +1,31 @@ +# OIDC integration + +!!! warning + The scenario and features described on this page are experimental. It's important to note that they have not been fully validated. + +Kubernetes allows users to authenticate using various authentication mechanisms. One of these mechanisms is OIDC. Information on how Kubernetes interacts with OIDC providers can be found in the [OpenID Connect Tokens](https://kubernetes.io/docs/reference/access-authn-authz/authentication/#openid-connect-tokens){target=_blank} section of the official Kubernetes documentation.
+ + +Follow the steps below to configure a CAPX cluster to use an OIDC identity provider. + +## Steps +1. Generate a `cluster.yaml` file with the required CAPX cluster configuration. Refer to the [Getting Started](../getting_started.md){target=_blank} page for more information on how to generate a `cluster.yaml` file. Do not apply the `cluster.yaml` file. +2. Edit the `cluster.yaml` file and search for the `KubeadmControlPlane` resource. +3. Modify/add the `spec.kubeadmConfigSpec.clusterConfiguration.apiServer.extraArgs` attribute and add the required [API server parameters](https://kubernetes.io/docs/reference/access-authn-authz/authentication/#configuring-the-api-server){target=_blank}. See the [example](#example) below. +4. Apply the `cluster.yaml` file. +5. Log in with the OIDC provider once the cluster is provisioned. + +## Example +```YAML +kind: KubeadmControlPlane +spec: + kubeadmConfigSpec: + clusterConfiguration: + apiServer: + extraArgs: + ... + oidc-client-id: <client-id> + oidc-issuer-url: <issuer-url> + ... +``` + diff --git a/docs/capx/v1.3.x/experimental/proxy.md b/docs/capx/v1.3.x/experimental/proxy.md new file mode 100644 index 00000000..c8f940d4 --- /dev/null +++ b/docs/capx/v1.3.x/experimental/proxy.md @@ -0,0 +1,62 @@ +# Proxy configuration + +!!! warning + The scenario and features described on this page are experimental. It's important to note that they have not been fully validated. + +CAPX can be configured to use a proxy to connect to external networks. This proxy configuration needs to be applied to control plane and worker nodes. + +Follow the steps below to configure a CAPX cluster to use a proxy. + +## Steps +1. Generate a `cluster.yaml` file with the required CAPX cluster configuration. Refer to the [Getting Started](../getting_started.md){target=_blank} page for more information on how to generate a `cluster.yaml` file. Do not apply the `cluster.yaml` file. +2.
Edit the `cluster.yaml` file and modify the following resources as shown in the [example](#example) below to add the proxy configuration. + 1. `KubeadmControlPlane`: + * Add the proxy configuration to the `spec.kubeadmConfigSpec.files` list. Do not modify other items in the list. + * Add `systemctl` commands to apply the proxy config in `spec.kubeadmConfigSpec.preKubeadmCommands`. Do not modify other items in the list. + 2. `KubeadmConfigTemplate`: + * Add the proxy configuration to the `spec.template.spec.files` list. Do not modify other items in the list. + * Add `systemctl` commands to apply the proxy config in `spec.template.spec.preKubeadmCommands`. Do not modify other items in the list. +3. Apply the `cluster.yaml` file. + +## Example + +```YAML +--- +# controlplane proxy settings +kind: KubeadmControlPlane +spec: + kubeadmConfigSpec: + files: + - content: | + [Service] + Environment="HTTP_PROXY=<proxy-address>" + Environment="HTTPS_PROXY=<proxy-address>" + Environment="NO_PROXY=<no-proxy-list>" + owner: root:root + path: /etc/systemd/system/containerd.service.d/http-proxy.conf + ... + preKubeadmCommands: + - sudo systemctl daemon-reload + - sudo systemctl restart containerd + ... +--- +# worker proxy settings +kind: KubeadmConfigTemplate +spec: + template: + spec: + files: + - content: | + [Service] + Environment="HTTP_PROXY=<proxy-address>" + Environment="HTTPS_PROXY=<proxy-address>" + Environment="NO_PROXY=<no-proxy-list>" + owner: root:root + path: /etc/systemd/system/containerd.service.d/http-proxy.conf + ... + preKubeadmCommands: + - sudo systemctl daemon-reload + - sudo systemctl restart containerd + ... +``` + diff --git a/docs/capx/v1.3.x/experimental/registry_mirror.md b/docs/capx/v1.3.x/experimental/registry_mirror.md new file mode 100644 index 00000000..307a9425 --- /dev/null +++ b/docs/capx/v1.3.x/experimental/registry_mirror.md @@ -0,0 +1,96 @@ +# Registry Mirror configuration + +!!! warning + The scenario and features described on this page are experimental. It's important to note that they have not been fully validated.
+ +CAPX can be configured to use a private registry to act as a mirror of an external public registry. This registry mirror configuration needs to be applied to control plane and worker nodes. + +Follow the steps below to configure a CAPX cluster to use a registry mirror. + +## Steps +1. Generate a `cluster.yaml` file with the required CAPX cluster configuration. Refer to the [Getting Started](../getting_started.md){target=_blank} page for more information on how to generate a `cluster.yaml` file. Do not apply the `cluster.yaml` file. +2. Edit the `cluster.yaml` file and modify the following resources as shown in the [example](#example) below to add the registry mirror configuration. + 1. `KubeadmControlPlane`: + * Add the registry mirror configuration to the `spec.kubeadmConfigSpec.files` list. Do not modify other items in the list. + * Update `/etc/containerd/config.toml` commands to apply the registry mirror config in `spec.kubeadmConfigSpec.preKubeadmCommands`. Do not modify other items in the list. + 2. `KubeadmConfigTemplate`: + * Add the registry mirror configuration to the `spec.template.spec.files` list. Do not modify other items in the list. + * Update `/etc/containerd/config.toml` commands to apply the registry mirror config in `spec.template.spec.preKubeadmCommands`. Do not modify other items in the list. +3. Apply the `cluster.yaml` file. + +## Example + +This example will configure a registry mirror for the following registry namespaces: + +* registry.k8s.io +* ghcr.io +* quay.io + +and redirect them to corresponding projects of the `<registry-host>` registry.
+ +```YAML +--- +# controlplane registry mirror settings +kind: KubeadmControlPlane +spec: + kubeadmConfigSpec: + files: + - content: | + [host."https://<registry-host>/v2/registry.k8s.io"] + capabilities = ["pull", "resolve"] + skip_verify = false + override_path = true + owner: root:root + path: /etc/containerd/certs.d/registry.k8s.io/hosts.toml + - content: | + [host."https://<registry-host>/v2/ghcr.io"] + capabilities = ["pull", "resolve"] + skip_verify = false + override_path = true + owner: root:root + path: /etc/containerd/certs.d/ghcr.io/hosts.toml + - content: | + [host."https://<registry-host>/v2/quay.io"] + capabilities = ["pull", "resolve"] + skip_verify = false + override_path = true + owner: root:root + path: /etc/containerd/certs.d/quay.io/hosts.toml + ... + preKubeadmCommands: + - echo '\n[plugins."io.containerd.grpc.v1.cri".registry]\n config_path = "/etc/containerd/certs.d"' >> /etc/containerd/config.toml + ... +--- +# worker registry mirror settings +kind: KubeadmConfigTemplate +spec: + template: + spec: + files: + - content: | + [host."https://<registry-host>/v2/registry.k8s.io"] + capabilities = ["pull", "resolve"] + skip_verify = false + override_path = true + owner: root:root + path: /etc/containerd/certs.d/registry.k8s.io/hosts.toml + - content: | + [host."https://<registry-host>/v2/ghcr.io"] + capabilities = ["pull", "resolve"] + skip_verify = false + override_path = true + owner: root:root + path: /etc/containerd/certs.d/ghcr.io/hosts.toml + - content: | + [host."https://<registry-host>/v2/quay.io"] + capabilities = ["pull", "resolve"] + skip_verify = false + override_path = true + owner: root:root + path: /etc/containerd/certs.d/quay.io/hosts.toml + ... + preKubeadmCommands: + - echo '\n[plugins."io.containerd.grpc.v1.cri".registry]\n config_path = "/etc/containerd/certs.d"' >> /etc/containerd/config.toml + ...
+```
+
diff --git a/docs/capx/v1.3.x/experimental/vpc.md b/docs/capx/v1.3.x/experimental/vpc.md
new file mode 100644
index 00000000..3513e47e
--- /dev/null
+++ b/docs/capx/v1.3.x/experimental/vpc.md
@@ -0,0 +1,40 @@
+# Creating a workload CAPX cluster in a Nutanix Flow VPC
+
+!!! warning
+    The scenario and features described on this page are experimental and have not been fully validated.
+
+!!! note
+    Nutanix Flow VPCs are only validated with CAPX v1.1.3+.
+
+[Nutanix Flow Virtual Networking](https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Flow-Virtual-Networking-Guide-vpc_2022_9:Nutanix-Flow-Virtual-Networking-Guide-vpc_2022_9){target=_blank} allows users to create Virtual Private Clouds (VPCs) with overlay networking.
+The steps below illustrate how a CAPX cluster can be deployed in an overlay subnet (NAT) of a VPC while the management cluster resides outside of the VPC.
+
+
+## Steps
+1. [Request a floating IP](https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Flow-Networking-Guide:ear-flow-nw-request-floating-ip-pc-t.html){target=_blank}.
+2. Link the floating IP to an internal IP address inside the overlay subnet that will be used to deploy the CAPX cluster. This address will be assigned to the CAPX load balancer. To prevent IP conflicts, make sure the IP address is not part of the IP pool defined in the subnet.
+3. Generate a `cluster.yaml` file with the required CAPX cluster configuration, where the `CONTROL_PLANE_ENDPOINT_IP` is set to the floating IP requested in the first step. Refer to the [Getting Started](../getting_started.md){target=_blank} page for more information on how to generate a `cluster.yaml` file. Do not apply the `cluster.yaml` file.
+4. Edit the `cluster.yaml` file and search for the `KubeadmControlPlane` resource.
+5. Modify the `spec.kubeadmConfigSpec.files.*.content` attribute and change the `kube-vip` definition as shown in the [example](#example) below.
+6. Apply the `cluster.yaml` file.
+7. When the CAPX workload cluster is deployed, it will be reachable via the floating IP.
+
+## Example
+```YAML
+kind: KubeadmControlPlane
+spec:
+  kubeadmConfigSpec:
+    files:
+    - content: |
+        apiVersion: v1
+        kind: Pod
+        metadata:
+          name: kube-vip
+          namespace: kube-system
+        spec:
+          containers:
+          - env:
+            - name: address
+              value: ""
+```
+
diff --git a/docs/capx/v1.3.x/getting_started.md b/docs/capx/v1.3.x/getting_started.md
new file mode 100644
index 00000000..44923b86
--- /dev/null
+++ b/docs/capx/v1.3.x/getting_started.md
@@ -0,0 +1,154 @@
+# Getting Started
+
+This is a guide on getting started with Cluster API Provider Nutanix Cloud Infrastructure (CAPX). To learn about Cluster API in more depth, check out the [Cluster API book](https://cluster-api.sigs.k8s.io/){target=_blank}.
+
+For more information on how to install the Nutanix CSI Driver on a CAPX cluster, visit [Nutanix CSI Driver installation with CAPX](./addons/install_csi_driver.md).
+
+For more information on how CAPX handles credentials, visit [Credential Management](./credential_management.md).
+
+For more information on the port requirements for CAPX, visit [Port Requirements](./port_requirements.md).
+
+!!! note
+    [Nutanix Cloud Controller Manager (CCM)](../../ccm/latest/overview.md) is a mandatory component starting from CAPX v1.3.0. Ensure all CAPX-managed Kubernetes clusters are configured to use Nutanix CCM before upgrading to v1.3.0 or later. See [CAPX v1.3.x Upgrade Procedure](./tasks/capx_v13x_upgrade_procedure.md).
+
+## Production Workflow
+
+### Build OS image for NutanixMachineTemplate resource
+Cluster API Provider Nutanix Cloud Infrastructure (CAPX) uses the [Image Builder](https://image-builder.sigs.k8s.io/){target=_blank} project to build OS images used for the Nutanix machines.
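+
+As a rough sketch, a build typically looks like the following. The make targets shown here are illustrative and vary per OS and image-builder version; the required Nutanix Packer variables (Prism Central endpoint, credentials, cluster and subnet names) are described in the guide linked below.
+
+```
+git clone https://github.com/kubernetes-sigs/image-builder.git
+cd image-builder/images/capi
+# Install build dependencies for the Nutanix provider
+make deps-nutanix
+# Build an example OS image (illustrative target name)
+make build-nutanix-ubuntu-2204
+```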
+
+Follow the steps detailed in [Building CAPI Images for Nutanix Cloud Platform (NCP)](https://image-builder.sigs.k8s.io/capi/providers/nutanix.html#building-capi-images-for-nutanix-cloud-platform-ncp){target=_blank} to use Image Builder on the Nutanix Cloud Platform.
+
+For a list of operating systems, visit the OS image [Configuration](https://image-builder.sigs.k8s.io/capi/providers/nutanix.html#configuration){target=_blank} page.
+
+### Prerequisites for using Cluster API Provider Nutanix Cloud Infrastructure
+The [Cluster API installation](https://cluster-api.sigs.k8s.io/user/quick-start.html#installation){target=_blank} section provides an overview of all required prerequisites:
+
+- [Common Prerequisites](https://cluster-api.sigs.k8s.io/user/quick-start.html#common-prerequisites){target=_blank}
+- [Install and/or configure a Kubernetes cluster](https://cluster-api.sigs.k8s.io/user/quick-start.html#install-andor-configure-a-kubernetes-cluster){target=_blank}
+- [Install clusterctl](https://cluster-api.sigs.k8s.io/user/quick-start.html#install-clusterctl){target=_blank}
+- (Optional) [Enabling Feature Gates](https://cluster-api.sigs.k8s.io/user/quick-start.html#enabling-feature-gates){target=_blank}
+
+Make sure these prerequisites have been met before moving to the [Configure and Install Cluster API Provider Nutanix Cloud Infrastructure](#configure-and-install-cluster-api-provider-nutanix-cloud-infrastructure) step.
+
+
+### Configure and Install Cluster API Provider Nutanix Cloud Infrastructure
+To initialize Cluster API Provider Nutanix Cloud Infrastructure, `clusterctl` requires the following variables, which should be set in either `~/.cluster-api/clusterctl.yaml` or as environment variables.
+
+```
+NUTANIX_ENDPOINT: "" # IP or FQDN of Prism Central
+NUTANIX_USER: "" # Prism Central user
+NUTANIX_PASSWORD: "" # Prism Central password
+NUTANIX_INSECURE: false # or true
+
+KUBERNETES_VERSION: "v1.22.9"
+WORKER_MACHINE_COUNT: 3
+NUTANIX_SSH_AUTHORIZED_KEY: ""
+
+NUTANIX_PRISM_ELEMENT_CLUSTER_NAME: ""
+NUTANIX_MACHINE_TEMPLATE_IMAGE_NAME: ""
+NUTANIX_SUBNET_NAME: ""
+```
+
+You can also see the required list of variables by running the following:
+```
+clusterctl generate cluster mycluster -i nutanix --list-variables
+Required Variables:
+  - CONTROL_PLANE_ENDPOINT_IP
+  - KUBERNETES_VERSION
+  - NUTANIX_ENDPOINT
+  - NUTANIX_MACHINE_TEMPLATE_IMAGE_NAME
+  - NUTANIX_PASSWORD
+  - NUTANIX_PRISM_ELEMENT_CLUSTER_NAME
+  - NUTANIX_SSH_AUTHORIZED_KEY
+  - NUTANIX_SUBNET_NAME
+  - NUTANIX_USER
+
+Optional Variables:
+  - CONTROL_PLANE_ENDPOINT_PORT (defaults to "6443")
+  - CONTROL_PLANE_MACHINE_COUNT (defaults to 1)
+  - KUBEVIP_LB_ENABLE (defaults to "false")
+  - KUBEVIP_SVC_ENABLE (defaults to "false")
+  - NAMESPACE (defaults to current Namespace in the KubeConfig file)
+  - NUTANIX_INSECURE (defaults to "false")
+  - NUTANIX_MACHINE_BOOT_TYPE (defaults to "legacy")
+  - NUTANIX_MACHINE_MEMORY_SIZE (defaults to "4Gi")
+  - NUTANIX_MACHINE_VCPU_PER_SOCKET (defaults to "1")
+  - NUTANIX_MACHINE_VCPU_SOCKET (defaults to "2")
+  - NUTANIX_PORT (defaults to "9440")
+  - NUTANIX_SYSTEMDISK_SIZE (defaults to "40Gi")
+  - WORKER_MACHINE_COUNT (defaults to 0)
+```
+
+!!! note
+    To prevent duplicate IP assignments, it is required to assign an IP address to the `CONTROL_PLANE_ENDPOINT_IP` variable that is not part of the Nutanix IPAM or DHCP range assigned to the subnet of the CAPX cluster.
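+
+For example, the variables can be exported in the shell before running `clusterctl` (all values below are placeholders):
+
+```
+export NUTANIX_ENDPOINT="pc.example.internal"   # placeholder Prism Central FQDN
+export NUTANIX_USER="capx-user"                 # placeholder user
+export NUTANIX_PASSWORD="secret"                # placeholder password
+export NUTANIX_PRISM_ELEMENT_CLUSTER_NAME="pe-cluster-1"
+export NUTANIX_MACHINE_TEMPLATE_IMAGE_NAME="ubuntu-2204-kube-v1.22.9"
+export NUTANIX_SUBNET_NAME="capx-subnet"
+export NUTANIX_SSH_AUTHORIZED_KEY="ssh-ed25519 AAAA... user@host"
+# Must be outside the Nutanix IPAM/DHCP range of the subnet (see note above)
+export CONTROL_PLANE_ENDPOINT_IP="10.0.0.40"
+```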
+
+Now you can instantiate Cluster API with the following:
+```
+clusterctl init -i nutanix
+```
+
+### Deploy a workload cluster on Nutanix Cloud Infrastructure
+```
+export TEST_CLUSTER_NAME=mytestcluster1
+export TEST_NAMESPACE=mytestnamespace
+CONTROL_PLANE_ENDPOINT_IP=x.x.x.x clusterctl generate cluster ${TEST_CLUSTER_NAME} \
+  -i nutanix \
+  --target-namespace ${TEST_NAMESPACE} \
+  --kubernetes-version v1.22.9 \
+  --control-plane-machine-count 1 \
+  --worker-machine-count 3 > ./cluster.yaml
+kubectl create ns ${TEST_NAMESPACE}
+kubectl apply -f ./cluster.yaml -n ${TEST_NAMESPACE}
+```
+To customize the configuration of the default `cluster.yaml` file generated by CAPX, visit the [NutanixCluster](./types/nutanix_cluster.md) and [NutanixMachineTemplate](./types/nutanix_machine_template.md) documentation.
+
+### Access a workload cluster
+To access resources on the cluster, you can get the kubeconfig with the following:
+```
+clusterctl get kubeconfig ${TEST_CLUSTER_NAME} -n ${TEST_NAMESPACE} > ${TEST_CLUSTER_NAME}.kubeconfig
+kubectl --kubeconfig ./${TEST_CLUSTER_NAME}.kubeconfig get nodes
+```
+
+### Install CNI on a workload cluster
+
+You must deploy a Container Network Interface (CNI) based pod network add-on so that your pods can communicate with each other. Cluster DNS (CoreDNS) will not start up before a network is installed.
+
+!!! note
+    Your pod network must not overlap with any of the host networks; overlaps are likely to cause problems. If you find a collision between your network plugin's preferred pod network and some of your host networks, choose a suitable alternative CIDR block instead. It can be configured inside the `cluster.yaml` generated by `clusterctl generate cluster` before applying it.
+
+Several external projects provide Kubernetes pod networks using CNI, some of which also support [Network Policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/){target=_blank}.
+
+See a list of add-ons that implement the [Kubernetes networking model](https://kubernetes.io/docs/concepts/cluster-administration/networking/#how-to-implement-the-kubernetes-network-model){target=_blank}. At the time of writing, the most common are [Calico](https://www.tigera.io/project-calico/){target=_blank} and [Cilium](https://cilium.io){target=_blank}.
+
+Follow the specific install guide for your selected CNI and install only one pod network per cluster.
+
+Once a pod network has been installed, you can confirm that it is working by checking that the CoreDNS pod is running in the output of `kubectl get pods --all-namespaces`.
+
+
+### Kube-vip settings
+
+Kube-vip is a load-balancing solution for the Kubernetes control plane: it distributes API requests across control plane nodes. It can also provide load balancing for Kubernetes services.
+
+You can tweak kube-vip settings by using the following properties:
+
+- `KUBEVIP_LB_ENABLE`
+
+This setting enables control plane load balancing using IPVS. See the
+[Control Plane Load-Balancing documentation](https://kube-vip.io/docs/about/architecture/#control-plane-load-balancing){target=_blank} for further information.
+
+- `KUBEVIP_SVC_ENABLE`
+
+This setting enables load balancing for Kubernetes services of type LoadBalancer. See the
+[Kubernetes Service Load Balancing documentation](https://kube-vip.io/docs/about/architecture/#kubernetes-service-load-balancing){target=_blank} for further information.
+
+- `KUBEVIP_SVC_ELECTION`
+
+This setting enables load balancing of load balancers. See [Load Balancing Load Balancers](https://kube-vip.io/docs/usage/kubernetes-services/#load-balancing-load-balancers-when-using-arp-mode-yes-you-read-that-correctly-kube-vip-v050){target=_blank} for further information.
+
+### Delete a workload cluster
+To remove a workload cluster from your management cluster, delete the cluster object; the provider will then clean up all its resources.
+
+```
+kubectl delete cluster ${TEST_CLUSTER_NAME} -n ${TEST_NAMESPACE}
+```
+!!! note
+    Deleting the entire cluster template with `kubectl delete -f ./cluster.yaml` may lead to pending resources requiring manual cleanup.
diff --git a/docs/capx/v1.3.x/pc_certificates.md b/docs/capx/v1.3.x/pc_certificates.md
new file mode 100644
index 00000000..f3fe1699
--- /dev/null
+++ b/docs/capx/v1.3.x/pc_certificates.md
@@ -0,0 +1,149 @@
+# Certificate Trust
+
+CAPX invokes Prism Central APIs using the HTTPS protocol. CAPX has different methods to handle the trust of the Prism Central certificates:
+
+- Enable certificate verification (default)
+- Configure an additional trust bundle
+- Disable certificate verification
+
+See the respective sections below for more information.
+
+!!! note
+    For more information about replacing Prism Central certificates, see the [Nutanix AOS Security Guide](https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Security-Guide-v6_5:mul-security-ssl-certificate-pc-t.html){target=_blank}.
+
+## Enable certificate verification (default)
+By default, CAPX performs certificate verification when invoking Prism Central API calls. This requires Prism Central to be configured with a certificate issued by a publicly trusted certificate authority.
+No additional configuration is required in CAPX.
+
+## Configure an additional trust bundle
+CAPX allows users to configure an additional trust bundle. This allows CAPX to verify certificates that are not issued by a publicly trusted certificate authority.
+
+To configure an additional trust bundle, the `NUTANIX_ADDITIONAL_TRUST_BUNDLE` environment variable needs to be set. Its value is the trust bundle (PEM format) in base64-encoded form. See the [Configuring the trust bundle environment variable](#configuring-the-trust-bundle-environment-variable) section for more information.
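+
+As a quick sketch, assuming `cert.crt` holds the PEM bundle and a GNU `base64` that supports `-w0` (newline-free output), the variable can be set in one line:
+
+```
+export NUTANIX_ADDITIONAL_TRUST_BUNDLE="$(base64 -w0 cert.crt)"
+```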
+
+It is also possible to configure the additional trust bundle manually by creating a custom `cluster-template`. See the [Configuring the additional trust bundle manually](#configuring-the-additional-trust-bundle-manually) section for more information.
+
+The `NUTANIX_ADDITIONAL_TRUST_BUNDLE` environment variable can be set when initializing the CAPX provider or when creating a workload cluster. If the `NUTANIX_ADDITIONAL_TRUST_BUNDLE` is configured when the CAPX provider is initialized, the additional trust bundle will be used for every CAPX workload cluster. If it is only configured when creating a workload cluster, it will only be applicable to that specific workload cluster.
+
+
+### Configuring the trust bundle environment variable
+
+Create a PEM-encoded file containing the root certificate and all intermediate certificates. Example:
+```
+$ cat cert.crt
+-----BEGIN CERTIFICATE-----
+
+-----END CERTIFICATE-----
+-----BEGIN CERTIFICATE-----
+
+-----END CERTIFICATE-----
+```
+
+Use a `base64` tool to encode these contents in base64. The command below will provide a `base64` string.
+```
+$ cat cert.crt | base64
+
+```
+!!! note
+    Make sure the `base64` string does not contain any newlines (`\n`). If the output string contains newlines, remove them manually or check the manual of the `base64` tool for how to generate a `base64` string without newlines.
+
+Use the `base64` string as the value for the `NUTANIX_ADDITIONAL_TRUST_BUNDLE` environment variable.
+```
+$ export NUTANIX_ADDITIONAL_TRUST_BUNDLE=""
+```
+
+### Configuring the additional trust bundle manually
+
+To configure the additional trust bundle manually, without using the `NUTANIX_ADDITIONAL_TRUST_BUNDLE` environment variable present in the default `cluster-template` files, it is required to:
+
+- Create a `ConfigMap` containing the additional trust bundle.
+- Configure the `prismCentral.additionalTrustBundle` object in the `NutanixCluster` spec.
+
+#### Creating the additional trust bundle ConfigMap
+
+CAPX supports two formats for the ConfigMap containing the additional trust bundle: the trust bundle can be added as a multi-line string in the `ConfigMap`, or it can be added in `base64`-encoded format. See the examples below.
+
+Multi-line string example:
+```YAML
+---
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: user-ca-bundle
+  namespace: ${NAMESPACE}
+data:
+  ca.crt: |
+    -----BEGIN CERTIFICATE-----

+    -----END CERTIFICATE-----
+    -----BEGIN CERTIFICATE-----

+    -----END CERTIFICATE-----
+```
+
+`base64` example:
+
+```YAML
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: user-ca-bundle
+  namespace: ${NAMESPACE}
+binaryData:
+  ca.crt:
+```
+
+!!! note
+    The `base64` string needs to be added as `binaryData`.
+
+
+#### Configuring the NutanixCluster spec
+
+Once the additional trust bundle `ConfigMap` is created, it needs to be referenced in the `NutanixCluster` spec. Add the `prismCentral.additionalTrustBundle` object to the `NutanixCluster` spec as shown below. Make sure the correct additional trust bundle `ConfigMap` is referenced.
+
+```YAML
+apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
+kind: NutanixCluster
+metadata:
+  name: ${CLUSTER_NAME}
+  namespace: ${NAMESPACE}
+spec:
+  ...
+  prismCentral:
+    ...
+    additionalTrustBundle:
+      kind: ConfigMap
+      name: user-ca-bundle
+    insecure: false
+```
+
+!!! note
+    The default value of the `prismCentral.insecure` attribute is `false`. It can be omitted when an additional trust bundle is configured.
+
+    If the `prismCentral.insecure` attribute is set to `true`, all certificate verification will be disabled.
+
+
+## Disable certificate verification
+
+!!! note
+    Disabling certificate verification is not recommended for production purposes and should only be used for testing.
+ + +Certificate verification can be disabled by setting the `prismCentral.insecure` attribute to `true` in the `NutanixCluster` spec. Certificate verification will be disabled even if an additional trust bundle is configured. + +Disabled certificate verification example: + +```YAML +apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 +kind: NutanixCluster +metadata: + name: ${CLUSTER_NAME} + namespace: ${NAMESPACE} +spec: + controlPlaneEndpoint: + host: ${CONTROL_PLANE_ENDPOINT_IP} + port: ${CONTROL_PLANE_ENDPOINT_PORT=6443} + prismCentral: + ... + insecure: true + ... +``` \ No newline at end of file diff --git a/docs/capx/v1.3.x/port_requirements.md b/docs/capx/v1.3.x/port_requirements.md new file mode 100644 index 00000000..af182abb --- /dev/null +++ b/docs/capx/v1.3.x/port_requirements.md @@ -0,0 +1,19 @@ +# Port Requirements + +CAPX uses the ports documented below to create workload clusters. + +!!! note + This page only documents the ports specifically required by CAPX and does not provide the full overview of all ports required in the CAPI framework. 
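+
+For example, reachability of the ports documented on this page can be spot-checked from a management cluster node (the Prism Central address below is a placeholder):
+
+```
+nc -zv pc.example.internal 9440   # Prism Central API
+nc -zv ghcr.io 443                # public registry
+```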
+ +## Management cluster + +| Source | Destination | Protocol | Port | Description | +|--------------------|---------------------|----------|------|--------------------------------------------------------------------------------------------------| +| Management cluster | External Registries | TCP | 443 | Pull container images from [CAPX public registries](#public-registries-utilized-when-using-capx) | +| Management cluster | Prism Central | TCP | 9440 | Management cluster communication to Prism Central | + +## Public registries utilized when using CAPX + +| Registry name | +|---------------| +| ghcr.io | diff --git a/docs/capx/v1.3.x/tasks/capx_v13x_upgrade_procedure.md b/docs/capx/v1.3.x/tasks/capx_v13x_upgrade_procedure.md new file mode 100644 index 00000000..80e70069 --- /dev/null +++ b/docs/capx/v1.3.x/tasks/capx_v13x_upgrade_procedure.md @@ -0,0 +1,80 @@ +# CAPX v1.3.x Upgrade Procedure + +Starting from CAPX v1.3.0, it is required for all CAPX-managed Kubernetes clusters to use the Nutanix Cloud Controller Manager (CCM). + +Before upgrading CAPX instances to v1.3.0 or later, it is required to follow the [steps](#steps) detailed below for each of the CAPX-managed Kubernetes clusters that don't use Nutanix CCM. + + +## Steps + +This procedure uses [Cluster Resource Set (CRS)](https://cluster-api.sigs.k8s.io/tasks/experimental-features/cluster-resource-set){target=_blank} to install Nutanix CCM but it can also be installed using the [Nutanix CCM Helm chart](https://artifacthub.io/packages/helm/nutanix/nutanix-cloud-provider){target=_blank}. + +Perform following steps for each of the CAPX-managed Kubernetes clusters that are not configured to use Nutanix CCM: + +1. 
Add the `cloud-provider: external` configuration in the `KubeadmConfigTemplate` resources: + ```YAML + apiVersion: bootstrap.cluster.x-k8s.io/v1beta1 + kind: KubeadmConfigTemplate + spec: + template: + spec: + joinConfiguration: + nodeRegistration: + kubeletExtraArgs: + cloud-provider: external + ``` +2. Add the `cloud-provider: external` configuration in the `KubeadmControlPlane` resource: +```YAML +--- +apiVersion: bootstrap.cluster.x-k8s.io/v1beta1 +kind: KubeadmConfigTemplate +spec: + template: + spec: + joinConfiguration: + nodeRegistration: + kubeletExtraArgs: + cloud-provider: external +--- +apiVersion: controlplane.cluster.x-k8s.io/v1beta1 +kind: KubeadmControlPlane +spec: + kubeadmConfigSpec: + clusterConfiguration: + apiServer: + extraArgs: + cloud-provider: external + controllerManager: + extraArgs: + cloud-provider: external + initConfiguration: + nodeRegistration: + kubeletExtraArgs: + cloud-provider: external + joinConfiguration: + nodeRegistration: + kubeletExtraArgs: + cloud-provider: external +``` +3. Add the Nutanix CCM CRS resources: + + - [nutanix-ccm-crs.yaml](https://github.com/nutanix-cloud-native/cluster-api-provider-nutanix/blob/v1.3.0/templates/base/nutanix-ccm-crs.yaml){target=_blank} + - [nutanix-ccm-secret.yaml](https://github.com/nutanix-cloud-native/cluster-api-provider-nutanix/blob/v1.3.0/templates/base/nutanix-ccm-secret.yaml) + - [nutanix-ccm.yaml](https://github.com/nutanix-cloud-native/cluster-api-provider-nutanix/blob/v1.3.0/templates/base/nutanix-ccm.yaml) + + Make sure to update each of the variables before applying the `YAML` files. + +4. Add the `ccm: nutanix` label to the `Cluster` resource: + ```YAML + apiVersion: cluster.x-k8s.io/v1beta1 + kind: Cluster + metadata: + labels: + ccm: nutanix + ``` +5. Verify if the Nutanix CCM pod is up and running: +``` +kubectl get pod -A -l k8s-app=nutanix-cloud-controller-manager +``` +6. 
Trigger a new rollout of the Kubernetes nodes by performing a Kubernetes upgrade or by using `clusterctl alpha rollout restart`. See the [clusterctl alpha rollout](https://cluster-api.sigs.k8s.io/clusterctl/commands/alpha-rollout#restart){target=_blank} documentation for more information.
+7. Upgrade CAPX to v1.3.0 by following the [clusterctl upgrade](https://cluster-api.sigs.k8s.io/clusterctl/commands/upgrade.html?highlight=clusterctl%20upgrade%20pla#clusterctl-upgrade){target=_blank} documentation.
\ No newline at end of file
diff --git a/docs/capx/v1.3.x/tasks/modify_machine_configuration.md b/docs/capx/v1.3.x/tasks/modify_machine_configuration.md
new file mode 100644
index 00000000..04a43a95
--- /dev/null
+++ b/docs/capx/v1.3.x/tasks/modify_machine_configuration.md
@@ -0,0 +1,11 @@
+# Modifying Machine Configurations
+
+Since all attributes of the `NutanixMachineTemplate` resources are immutable, follow the [Updating Infrastructure Machine Templates](https://cluster-api.sigs.k8s.io/tasks/updating-machine-templates.html?highlight=machine%20template#updating-infrastructure-machine-templates){target=_blank} procedure to modify the configuration of machines in an existing CAPX cluster.
+See the [NutanixMachineTemplate](../types/nutanix_machine_template.md) documentation for all supported configuration parameters.
+
+!!! note
+    Manually modifying existing and linked `NutanixMachineTemplate` resources will not trigger a rolling update of the machines.
+
+!!! note
+    Do not modify the virtual machine configuration of CAPX cluster nodes manually in Prism/Prism Central.
+    CAPX will not automatically revert manual configuration changes, but scale-up/scale-down/upgrade operations will override them. Only use the `Updating Infrastructure Machine Templates` procedure referenced above to perform configuration changes.
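+
+As an illustration of the procedure referenced above, a changed setting goes into a copy of the `NutanixMachineTemplate` under a new name (all names below are hypothetical):
+
+```YAML
+apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
+kind: NutanixMachineTemplate
+metadata:
+  name: mycluster-mt-1   # new name; the existing mycluster-mt-0 stays untouched
+spec:
+  template:
+    spec:
+      memorySize: "8Gi"  # the modified setting
+```
+
+Pointing `spec.template.spec.infrastructureRef.name` of the `MachineDeployment` (or `spec.machineTemplate.infrastructureRef.name` of the `KubeadmControlPlane`) at the new template then triggers a rolling update of the machines.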
\ No newline at end of file
diff --git a/docs/capx/v1.3.x/troubleshooting.md b/docs/capx/v1.3.x/troubleshooting.md
new file mode 100644
index 00000000..c023d13e
--- /dev/null
+++ b/docs/capx/v1.3.x/troubleshooting.md
@@ -0,0 +1,13 @@
+# Troubleshooting
+
+## Clusterctl failed with GitHub rate limit error
+
+By design, `clusterctl` fetches artifacts from repositories hosted on GitHub. This operation is subject to [GitHub API rate limits](https://docs.github.com/en/rest/overview/resources-in-the-rest-api#rate-limiting){target=_blank}.
+
+While this is generally okay for the majority of users, there is still a chance that some users (especially developers or CI tools) hit this limit:
+
+```
+Error: failed to get repository client for the XXX with name YYY: error creating the GitHub repository client: failed to get GitHub latest version: failed to get the list of versions: rate limit for github api has been reached. Please wait one hour or get a personal API tokens a assign it to the GITHUB_TOKEN environment variable
+```
+
+As explained in the error message, you can increase your API rate limit by [creating a GitHub personal token](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token){target=_blank} and setting a `GITHUB_TOKEN` environment variable using the token.
diff --git a/docs/capx/v1.3.x/types/nutanix_cluster.md b/docs/capx/v1.3.x/types/nutanix_cluster.md
new file mode 100644
index 00000000..09325cab
--- /dev/null
+++ b/docs/capx/v1.3.x/types/nutanix_cluster.md
@@ -0,0 +1,64 @@
+# NutanixCluster
+
+The `NutanixCluster` resource defines the configuration of a CAPX Kubernetes cluster.
+ +Example of a `NutanixCluster` resource: + +```YAML +apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 +kind: NutanixCluster +metadata: + name: ${CLUSTER_NAME} + namespace: ${NAMESPACE} +spec: + controlPlaneEndpoint: + host: ${CONTROL_PLANE_ENDPOINT_IP} + port: ${CONTROL_PLANE_ENDPOINT_PORT=6443} + prismCentral: + address: ${NUTANIX_ENDPOINT} + additionalTrustBundle: + kind: ConfigMap + name: user-ca-bundle + credentialRef: + kind: Secret + name: ${CLUSTER_NAME} + insecure: ${NUTANIX_INSECURE=false} + port: ${NUTANIX_PORT=9440} +``` + +## NutanixCluster spec +The table below provides an overview of the supported parameters of the `spec` attribute of a `NutanixCluster` resource. + +### Configuration parameters + +| Key |Type |Description | +|--------------------------------------------|------|----------------------------------------------------------------------------------| +|controlPlaneEndpoint |object|Defines the host IP and port of the CAPX Kubernetes cluster. | +|controlPlaneEndpoint.host |string|Host IP to be assigned to the CAPX Kubernetes cluster. | +|controlPlaneEndpoint.port |int |Port of the CAPX Kubernetes cluster. Default: `6443` | +|prismCentral |object|(Optional) Prism Central endpoint definition. | +|prismCentral.address |string|IP/FQDN of Prism Central. | +|prismCentral.port |int |Port of Prism Central. Default: `9440` | +|prismCentral.insecure |bool |Disable Prism Central certificate checking. Default: `false` | +|prismCentral.credentialRef |object|Reference to credentials used for Prism Central connection. | +|prismCentral.credentialRef.kind |string|Kind of the credentialRef. Allowed value: `Secret` | +|prismCentral.credentialRef.name |string|Name of the secret containing the Prism Central credentials. | +|prismCentral.credentialRef.namespace |string|(Optional) Namespace of the secret containing the Prism Central credentials. 
|
+|prismCentral.additionalTrustBundle |object|Reference to the certificate trust bundle used for Prism Central connection. |
+|prismCentral.additionalTrustBundle.kind |string|Kind of the additionalTrustBundle. Allowed value: `ConfigMap` |
+|prismCentral.additionalTrustBundle.name |string|Name of the `ConfigMap` containing the Prism Central trust bundle. |
+|prismCentral.additionalTrustBundle.namespace|string|(Optional) Namespace of the `ConfigMap` containing the Prism Central trust bundle.|
+|failureDomains |list |(Optional) Failure domains for the Kubernetes nodes. |
+|failureDomains.[].name |string|Name of the failure domain. |
+|failureDomains.[].cluster |object|Reference (name or uuid) to the Prism Element cluster. Name or UUID can be passed. |
+|failureDomains.[].cluster.type |string|Type to identify the Prism Element cluster. Allowed values: `name` and `uuid` |
+|failureDomains.[].cluster.name |string|Name of the Prism Element cluster. |
+|failureDomains.[].cluster.uuid |string|UUID of the Prism Element cluster. |
+|failureDomains.[].subnets |list |(Optional) Reference (name or uuid) to the subnets to be assigned to the VMs. |
+|failureDomains.[].subnets.[].type |string|Type to identify the subnet. Allowed values: `name` and `uuid` |
+|failureDomains.[].subnets.[].name |string|Name of the subnet. |
+|failureDomains.[].subnets.[].uuid |string|UUID of the subnet. |
+|failureDomains.[].controlPlane |bool |Indicates if a failure domain is suited for control plane nodes. |
+
+!!! note
+    To prevent duplicate IP assignments, it is required to assign an IP address to the `controlPlaneEndpoint.host` variable that is not part of the Nutanix IPAM or DHCP range assigned to the subnet of the CAPX cluster.
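+
+For illustration, a failure-domain definition matching the parameters above could look like the following sketch (all names are hypothetical):
+
+```YAML
+spec:
+  failureDomains:
+  - name: fd-az1
+    cluster:
+      type: name
+      name: pe-cluster-az1
+    subnets:
+    - type: name
+      name: subnet-az1
+    controlPlane: true
+```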
\ No newline at end of file diff --git a/docs/capx/v1.3.x/types/nutanix_machine_template.md b/docs/capx/v1.3.x/types/nutanix_machine_template.md new file mode 100644 index 00000000..516d1eea --- /dev/null +++ b/docs/capx/v1.3.x/types/nutanix_machine_template.md @@ -0,0 +1,84 @@ +# NutanixMachineTemplate +The `NutanixMachineTemplate` resource defines the configuration of a CAPX Kubernetes VM. + +Example of a `NutanixMachineTemplate` resource. + +```YAML +apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 +kind: NutanixMachineTemplate +metadata: + name: "${CLUSTER_NAME}-mt-0" + namespace: "${NAMESPACE}" +spec: + template: + spec: + providerID: "nutanix://${CLUSTER_NAME}-m1" + # Supported options for boot type: legacy and uefi + # Defaults to legacy if not set + bootType: ${NUTANIX_MACHINE_BOOT_TYPE=legacy} + vcpusPerSocket: ${NUTANIX_MACHINE_VCPU_PER_SOCKET=1} + vcpuSockets: ${NUTANIX_MACHINE_VCPU_SOCKET=2} + memorySize: "${NUTANIX_MACHINE_MEMORY_SIZE=4Gi}" + systemDiskSize: "${NUTANIX_SYSTEMDISK_SIZE=40Gi}" + image: + type: name + name: "${NUTANIX_MACHINE_TEMPLATE_IMAGE_NAME}" + cluster: + type: name + name: "${NUTANIX_PRISM_ELEMENT_CLUSTER_NAME}" + subnet: + - type: name + name: "${NUTANIX_SUBNET_NAME}" + # Adds additional categories to the virtual machines. + # Note: Categories must already be present in Prism Central + # additionalCategories: + # - key: AppType + # value: Kubernetes + # Adds the cluster virtual machines to a project defined in Prism Central. + # Replace NUTANIX_PROJECT_NAME with the correct project defined in Prism Central + # Note: Project must already be present in Prism Central. + # project: + # type: name + # name: "NUTANIX_PROJECT_NAME" + # gpus: + # - type: name + # name: "GPU NAME" +``` + +## NutanixMachineTemplate spec +The table below provides an overview of the supported parameters of the `spec` attribute of a `NutanixMachineTemplate` resource. 
+ +### Configuration parameters +| Key |Type |Description| +|------------------------------------|------|--------------------------------------------------------------------------------------------------------| +|bootType |string|Boot type of the VM. Depends on the OS image used. Allowed values: `legacy`, `uefi`. Default: `legacy` | +|vcpusPerSocket |int |Amount of vCPUs per socket. Default: `1` | +|vcpuSockets |int |Amount of vCPU sockets. Default: `2` | +|memorySize |string|Amount of Memory. Default: `4Gi` | +|systemDiskSize |string|Amount of storage assigned to the system disk. Default: `40Gi` | +|image |object|Reference (name or uuid) to the OS image used for the system disk. | +|image.type |string|Type to identify the OS image. Allowed values: `name` and `uuid` | +|image.name |string|Name of the image. | +|image.uuid |string|UUID of the image. | +|cluster |object|(Optional) Reference (name or uuid) to the Prism Element cluster. Name or UUID can be passed | +|cluster.type |string|Type to identify the Prism Element cluster. Allowed values: `name` and `uuid` | +|cluster.name |string|Name of the Prism Element cluster. | +|cluster.uuid |string|UUID of the Prism Element cluster. | +|subnets |list |(Optional) Reference (name or uuid) to the subnets to be assigned to the VMs. | +|subnets.[].type |string|Type to identify the subnet. Allowed values: `name` and `uuid` | +|subnets.[].name |string|Name of the subnet. | +|subnets.[].uuid |string|UUID of the subnet. | +|additionalCategories |list |Reference to the categories to be assigned to the VMs. These categories already exist in Prism Central. | +|additionalCategories.[].key |string|Key of the category. | +|additionalCategories.[].value |string|Value of the category. | +|project |object|Reference (name or uuid) to the project. This project must already exist in Prism Central. | +|project.type |string|Type to identify the project. Allowed values: `name` and `uuid` | +|project.name |string|Name of the project. 
|
|project.uuid |string|UUID of the project. |
|gpus |list |Reference (name or deviceID) to the GPUs to be assigned to the VMs. Can be a vGPU or a passthrough GPU. |
|gpus.[].type |string|Type to identify the GPU. Allowed values: `name` and `deviceID` |
|gpus.[].name |string|Name of the GPU or the vGPU profile. |
|gpus.[].deviceID |string|DeviceID of the GPU or the vGPU profile. |
+
+!!! note
+    The `cluster` and `subnet` configuration parameters are optional when failure domains are defined on the `NutanixCluster` and `MachineDeployment` resources.
\ No newline at end of file
diff --git a/docs/capx/v1.3.x/user_requirements.md b/docs/capx/v1.3.x/user_requirements.md
new file mode 100644
index 00000000..5a4b8604
--- /dev/null
+++ b/docs/capx/v1.3.x/user_requirements.md
@@ -0,0 +1,36 @@
+# User Requirements
+
+Cluster API Provider Nutanix Cloud Infrastructure (CAPX) interacts with Nutanix Prism Central (PC) APIs using a Prism Central user account.
+
+CAPX supports two types of PC users:
+
+- Local users: must be assigned the `Prism Central Admin` role.
+- Domain users: must be assigned a role that includes at least the [Minimum required CAPX permissions for domain users](#minimum-required-capx-permissions-for-domain-users).
+
+See [Credential Management](./credential_management.md){target=_blank} for more information on how to pass the user credentials to CAPX.
+
+## Minimum required CAPX permissions for domain users
+
+The following permissions are required for Prism Central domain users:
+
+- Create Category Mapping
+- Create Image
+- Create Or Update Name Category
+- Create Or Update Value Category
+- Create Virtual Machine
+- Delete Category Mapping
+- Delete Image
+- Delete Name Category
+- Delete Value Category
+- Delete Virtual Machine
+- View Category Mapping
+- View Cluster
+- View Image
+- View Name Category
+- View Project
+- View Subnet
+- View Value Category
+- View Virtual Machine
+
+!!! 
note
+    The list of permissions has been validated on PC 2022.6 and above.
diff --git a/docs/capx/v1.3.x/validated_integrations.md b/docs/capx/v1.3.x/validated_integrations.md
new file mode 100644
index 00000000..79a98a98
--- /dev/null
+++ b/docs/capx/v1.3.x/validated_integrations.md
@@ -0,0 +1,59 @@
+# Validated Integrations
+
+Validated integrations are a defined set of specifically tested configurations between technologies, representing the most common combinations that Nutanix customers use or deploy with CAPX. For these integrations, Nutanix has, directly or through certified partners, exercised a full range of platform tests as part of the product release process.
+
+## Integration Validation Policy
+
+Nutanix follows the version validation policies below:
+
+- Validate at least one active AOS LTS (long term support) version. The validated AOS LTS version for a specific CAPX version is listed in the [AOS](#aos) section.
+
+    !!! note
+
+        Typically this is the latest LTS release at the time of the CAPX release, except when the latest release is the initial release in a train (e.g. x.y.0). The exact version depends on timing and customer adoption.
+
+- Validate the latest AOS STS (short term support) release at the time of the CAPX release.
+- Validate at least one active Prism Central (PC) version. The validated PC version for a specific CAPX version is listed in the [Prism Central](#prism-central) section.
+
+    !!! note
+
+        Typically this is the latest PC release at the time of the CAPX release, except when the latest release is the initial release in a train (e.g. x.y.0). The exact version depends on timing and customer adoption.
+
+- Validate at least one active Cluster-API (CAPI) version. The validated CAPI version for a specific CAPX version is listed in the [Cluster-API](#cluster-api) section.
+
+    !!! note
+
+        Typically this is the latest Cluster-API release at the time of the CAPX release, except when the latest release is the initial release in a train (e.g. x.y.0). The exact version depends on timing and customer adoption.
+
+## Validated versions
+### Cluster-API
+| CAPX | CAPI v1.1.4+ | CAPI v1.2.x | CAPI v1.3.x | CAPI v1.4.x | CAPI v1.5.x | CAPI v1.6.x |
+|--------|--------------|-------------|-------------|-------------|-------------|-------------|
+| v1.3.x | No | No | Yes | Yes | Yes | Yes |
+| v1.2.x | No | No | Yes | Yes | Yes | No |
+| v1.1.x | No | Yes | Yes | No | No | No |
+| v1.0.x | Yes | Yes | No | No | No | No |
+| v0.5.x | Yes | Yes | No | No | No | No |
+
+See the [Validated Kubernetes Versions](https://cluster-api.sigs.k8s.io/reference/versions.html?highlight=version#supported-kubernetes-versions){target=_blank} page for more information on CAPI validated versions.
+
+### AOS
+
+| CAPX | 5.20.4.5 (LTS) | 6.1.1.5 (STS) | 6.5.x (LTS) | 6.6 (STS) | 6.7 (STS) |
|--------|----------------|---------------|-------------|-----------|-----------|
+| v1.3.x | No | No | Yes | Yes | Yes |
+| v1.2.x | No | No | Yes | Yes | Yes |
+| v1.1.x | No | No | Yes | No | No |
+| v1.0.x | Yes | Yes | No | No | No |
+| v0.5.x | Yes | Yes | No | No | No |
+
+### Prism Central
+
+| CAPX | 2022.1.0.2 | pc.2022.6 | pc.2022.9 | pc.2023.x |
|--------|------------|-----------|-----------|-----------|
+| v1.3.x | No | Yes | No | Yes |
+| v1.2.x | No | Yes | Yes | Yes |
+| v1.1.x | No | Yes | No | No |
+| v1.0.x | Yes | Yes | No | No |
+| v0.5.x | Yes | Yes | No | No |
diff --git a/mkdocs.yml b/mkdocs.yml
index b5fd77fb..058ea6f6 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -18,7 +18,30 @@ nav:
   - "Cloud Native":
     - "Overview": "index.md"
     - "Cluster API Provider: Nutanix (CAPX)":
-      - "v1.2.x (Latest)":
+      - "v1.3.x (Latest)":
+        - "Getting Started": "capx/v1.3.x/getting_started.md"
+        - "Types":
+          - "NutanixCluster": "capx/v1.3.x/types/nutanix_cluster.md"
+          - "NutanixMachineTemplate": 
"capx/v1.3.x/types/nutanix_machine_template.md" + - "Certificate Trust": "capx/v1.3.x/pc_certificates.md" + - "Credential Management": "capx/v1.3.x/credential_management.md" + - "Tasks": + - "Modifying Machine Configuration": "capx/v1.3.x/tasks/modify_machine_configuration.md" + - "CAPX v1.3.x Upgrade Procedure": "capx/v1.3.x/tasks/capx_v13x_upgrade_procedure.md" + - "Port Requirements": "capx/v1.3.x/port_requirements.md" + - "User Requirements": "capx/v1.3.x/user_requirements.md" + - "Addons": + - "CSI Driver Installation": "capx/v1.3.x/addons/install_csi_driver.md" + - "Validated Integrations": "capx/v1.3.x/validated_integrations.md" + - "Experimental": + - "Multi-PE CAPX cluster": "capx/v1.3.x/experimental/capx_multi_pe.md" + - "Autoscaler": "capx/v1.3.x/experimental/autoscaler.md" + - "OIDC Integration": "capx/v1.3.x/experimental/oidc.md" + - "Flow VPC": "capx/v1.3.x/experimental/vpc.md" + - "Proxy Configuration": "capx/v1.3.x/experimental/proxy.md" + - "Registry Mirror Configuration": "capx/v1.3.x/experimental/registry_mirror.md" + - "Troubleshooting": "capx/v1.3.x/troubleshooting.md" + - "v1.2.x": - "Getting Started": "capx/v1.2.x/getting_started.md" - "Types": - "NutanixCluster": "capx/v1.2.x/types/nutanix_cluster.md"