diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index a9fdf310a..2969aa6d7 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -23,7 +23,7 @@ Help and contributions are very welcome in the form of code contributions but al
1. If working on an issue, signal other contributors that you are actively working on it by commenting on it. Wait for approval in form of someone assigning you to the issue.
2. Fork the desired repo, develop and test your code changes.
- 1. See the [Development Guide](docs/developers/development.md) for more instructions on setting up your environment and testing changes locally.
+ 1. See the [Development Guide](/docs/caph/04-developers/01-development-guide.md) for more instructions on setting up your environment and testing changes locally.
3. Submit a pull request.
1. All code PR must be created in "draft mode". This helps other contributors by not blocking E2E tests, which cannot run in parallel. After your PR is approved, you can mark it "ready for review".
1. All code PR must be have a title starting with one of
@@ -41,4 +41,4 @@ Help and contributions are very welcome in the form of code contributions but al
All changes must be code reviewed. Coding conventions and standards are explained in the official [developer docs](https://github.com/kubernetes/community/tree/master/contributors/devel). Expect reviewers to request that you avoid common [go style mistakes](https://github.com/golang/go/wiki/CodeReviewComments) in your PRs.
-In case you want to run our E2E tests locally, please refer to [Testing](docs/developers/development.md#submitting-prs-and-testing) guide.
+In case you want to run our E2E tests locally, please refer to the [Testing](/docs/caph/04-developers/01-development-guide.md#submitting-prs-and-testing) guide.
diff --git a/README.md b/README.md
index 6bee07253..ae785bd71 100644
--- a/README.md
+++ b/README.md
@@ -3,9 +3,9 @@
@@ -77,11 +77,11 @@ If you don't have a dedicated team for managing Kubernetes, you can use [Syself
Ready to dive in? Here are some resources to get you started:
-- [**Cluster API Provider Hetzner 15 Minute Tutorial**](docs/topics/quickstart.md): Set up a bootstrap cluster using Kind and deploy a Kubernetes cluster on Hetzner.
-- [**Develop and test Kubernetes clusters with Tilt**](docs/developers/development.md): Start using Tilt for rapid testing of various cluster flavors, like with/without a private network or bare metal.
-- [**Develop and test your own node-images**](docs/topics/node-image.md): Learn how to use your own machine images for production systems.
+- [**Cluster API Provider Hetzner 15 Minute Tutorial**](/docs/caph/01-getting-started/03-quickstart.md): Set up a bootstrap cluster using Kind and deploy a Kubernetes cluster on Hetzner.
+- [**Develop and test Kubernetes clusters with Tilt**](/docs/caph/04-developers/01-development-guide.md): Start using Tilt for rapid testing of various cluster flavors, like with/without a private network or bare metal.
+- [**Develop and test your own node-images**](/docs/caph/02-topics/02-node-image.md): Learn how to use your own machine images for production systems.
-In addition to the pure creation and operation of Kubernetes clusters, this provider can also validate and approve certificate signing requests. This increases security as the kubelets of the nodes can be operated with signed certificates, and enables the metrics-server to run securely. [Click here](docs/topics/advanced-caph.md#csr-controller) to read more about the CSR controller.
+In addition to the pure creation and operation of Kubernetes clusters, this provider can also validate and approve certificate signing requests. This increases security as the kubelets of the nodes can be operated with signed certificates, and enables the metrics-server to run securely. [Click here](/docs/caph/02-topics/03-advanced-caph.md#csr-controller) to read more about the CSR controller.
## 🖇️ Compatibility with Cluster API and Kubernetes Versions
@@ -121,21 +121,21 @@ Each version of Cluster API for Hetzner will attempt to support at least two Kub
> [!NOTE]
> Cluster API Provider Hetzner relies on a few prerequisites that must be already installed in the operating system images, such as a container runtime, kubelet, and Kubeadm.
>
-> Reference images are available in kubernetes-sigs/image-builder and [templates/node-image](templates/node-image).
+> Reference images are available in kubernetes-sigs/image-builder and [templates/node-image](/templates/node-image).
>
-> If pre-installation of these prerequisites isn't possible, [custom scripts can be deployed](docs/topics/node-image through the Kubeadm config.md).
+> If pre-installation of these prerequisites isn't possible, [custom scripts can be deployed](/docs/caph/02-topics/02-node-image) through the Kubeadm config.
---
## 📖 Documentation
-Documentation can be found in the `/docs` directory. [Here](docs/README.md) is an overview of our documentation.
+Documentation can be found in the `/docs` directory. [Here](/docs/README.md) is an overview of our documentation.
## 👥 Getting Involved and Contributing
We, maintainers and the community, welcome any contributions to Cluster API Provider Hetzner. For suggestions, contributions, and assistance, contact the maintainers anytime.
-To set up your environment, refer to the [development guide](docs/developers/development.md).
+To set up your environment, refer to the [development guide](/docs/caph/04-developers/01-development-guide.md).
For new contributors, check out issues tagged as [`good first issue`][good_first_issue]. These are typically smaller in scope and great for getting familiar with the codebase.
@@ -148,7 +148,7 @@ do!
## ⚖️ Code of Conduct
-Participation in the Kubernetes community is governed by the [Kubernetes Code of Conduct](code-of-conduct.md).
+Participation in the Kubernetes community is governed by the [Kubernetes Code of Conduct](/code-of-conduct.md).
## :shipit: GitHub Issues
diff --git a/docs/README.md b/docs/README.md
index c1448c2d1..6aa21df9b 100644
--- a/docs/README.md
+++ b/docs/README.md
@@ -2,27 +2,34 @@
This is the official documentation of Cluster API Provider Hetzner. Before starting with this documentation, you should have a basic understanding of Cluster API. Cluster API is a Kubernetes sub-project focused on providing a declarative API to manage Kubernetes clusters. Refer to the Cluster API quick start guide from their [official documentation](https://cluster-api.sigs.k8s.io/user/quick-start.html).
-## Quick start
+## Getting Started
-- [Preparation](topics/preparation.md)
-- [Getting started](topics/quickstart.md)
+- [Introduction](/docs/caph/01-getting-started/01-introduction)
+- [Preparation](/docs/caph/01-getting-started/02-preparation)
+- [Quickstart](/docs/caph/01-getting-started/03-quickstart)
## Topics
-- [Managing SSH Keys](topics/managing-ssh-keys.md)
-- [Node Images](topics/node-image.md)
-- [Production Environment](topics/production-environment.md)
-- [Advanced CAPH](topics/advanced-caph.md)
-- [Upgrading CAPH](topics/upgrade.md)
+- [Managing SSH Keys](/docs/caph/02-topics/01-managing-ssh-keys)
+- [Node Images](/docs/caph/02-topics/02-node-image)
+- [Production Environment](/docs/caph/02-topics/03-production-environment)
+- [Advanced CAPH](/docs/caph/02-topics/04-advanced-caph)
+- [Upgrading CAPH](/docs/caph/02-topics/05-upgrading-caph)
+- [Hetzner Baremetal](/docs/caph/02-topics/06-hetzner-baremetal)
## Reference
-- [General](reference/README.md)
-- [HetznerCluster](reference/hetzner-cluster.md)
-- [HCloudMachineTemplate](reference/hcloud-machine-template.md)
-- [HetznerBareMetalHost](reference/hetzner-bare-metal-host.md)
-- [HetznerBareMetalMachineTemplate](reference/hetzner-bare-metal-machine-template.md)
-- [HetznerBareMetalRemediationTemplate](reference/hetzner-bare-metal-remediation-template.md)
+
+- [General](/docs/caph/03-reference/01-introduction)
+- [HetznerCluster](/docs/caph/03-reference/02-hetzner-cluster)
+- [HCloudMachineTemplate](/docs/caph/03-reference/03-hcloud-machine-template)
+- [HCloudRemediationTemplate](/docs/caph/03-reference/04-hcloud-remediation-template)
+- [HetznerBareMetalHost](/docs/caph/03-reference/05-hetzner-bare-metal-host)
+- [HetznerBareMetalMachineTemplate](/docs/caph/03-reference/06-hetzner-bare-metal-machine-template)
+- [HetznerBareMetalRemediationTemplate](/docs/caph/03-reference/07-hetzner-bare-metal-remediation-template)
+
## Development
-- [Development guide](developers/development.md)
-- [Tilt](developers/tilt.md)
-- [Releasing](developers/releasing.md)
+
+- [Development guide](/docs/caph/04-developers/01-development-guide)
+- [Tilt](/docs/caph/04-developers/02-tilt)
+- [Releasing](/docs/caph/04-developers/03-releasing)
+- [Updating Kubernetes version](/docs/caph/04-developers/04-updating-kubernetes-version)
diff --git a/docs/caph/01-getting-started/01-introduction.md b/docs/caph/01-getting-started/01-introduction.md
new file mode 100644
index 000000000..b656fb014
--- /dev/null
+++ b/docs/caph/01-getting-started/01-introduction.md
@@ -0,0 +1,46 @@
+---
+title: Introduction
+---
+
+Welcome to the official documentation for the Cluster API Provider Hetzner (CAPH).
+
+## What is the Cluster API Provider Hetzner
+
+CAPH is a Kubernetes Cluster API provider that facilitates the deployment and management of self-managed Kubernetes clusters on Hetzner infrastructure. The provider supports both cloud and bare-metal instances for consistent, scalable, and production-ready cluster operations.
+
+It is recommended that you have at least a basic understanding of Cluster API before getting started with CAPH. You can refer to the Cluster API Quick Start Guide from its [official documentation](https://cluster-api.sigs.k8s.io).
+
+## Compatibility with Cluster API and Kubernetes Versions
+
+This provider's versions are compatible with the following versions of Cluster API:
+
+|                                       | Cluster API `v1beta1` (`v1.6.x`) | Cluster API `v1beta1` (`v1.7.x`) |
+| ------------------------------------- | -------------------------------- | -------------------------------- |
+| Hetzner Provider `v1.0.0-beta.33`     | ✅                               | ❌                               |
+| Hetzner Provider `v1.0.0-beta.34-35`  | ❌                               | ✅                               |
+
+This provider's versions can install and manage the following versions of Kubernetes:
+
+| | Hetzner Provider `v1.0.x` |
+| ----------------- | ------------------------- |
+| Kubernetes 1.23.x | ✅ |
+| Kubernetes 1.24.x | ✅ |
+| Kubernetes 1.25.x | ✅ |
+| Kubernetes 1.26.x | ✅ |
+| Kubernetes 1.27.x | ✅ |
+| Kubernetes 1.28.x | ✅ |
+| Kubernetes 1.29.x | ✅ |
+| Kubernetes 1.30.x | ✅ |
+
+Test status:
+
+- ✅ tested
+- ❔ should work, but we weren't able to test it
+- ❌ not compatible
+
+Each version of Cluster API for Hetzner will attempt to support at least two Kubernetes versions.
+
+{% callout %}
+
+As the versioning for this project is tied to the versioning of Cluster API, future modifications to this policy may be made to more closely align with other providers in the Cluster API ecosystem.
+
+{% /callout %}
diff --git a/docs/topics/preparation.md b/docs/caph/01-getting-started/02-preparation.md
similarity index 89%
rename from docs/topics/preparation.md
rename to docs/caph/01-getting-started/02-preparation.md
index e99f95a82..091522bad 100644
--- a/docs/topics/preparation.md
+++ b/docs/caph/01-getting-started/02-preparation.md
@@ -1,4 +1,6 @@
-# Preparation
+---
+title: Preparation
+---
## Preparation of the Hetzner Project and Credentials
@@ -8,7 +10,7 @@ There are several tasks that have to be completed before a workload cluster can
1. Create a new [HCloud project](https://console.hetzner.cloud/projects).
2. Generate an API token with read and write access. You'll find this if you click on the project and go to "security".
-3. If you want to use it, generate an SSH key, upload the public key to HCloud (also via "security"), and give it a name. Read more about [Managing SSH Keys](managing-ssh-keys.md).
+3. If you want to use it, generate an SSH key, upload the public key to HCloud (also via "security"), and give it a name. Read more about [Managing SSH Keys](/docs/caph/02-topics/01-managing-ssh-keys).
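+
+For example, the public key can be uploaded with the [hcloud CLI](https://github.com/hetznercloud/cli) (a sketch; the key name and file path are placeholders):
+
+```shell
+# Upload a local public key to the HCloud project under the name "caph"
+hcloud ssh-key create --name caph --public-key-from-file ~/.ssh/hetzner-cluster.pub
+```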
### Preparing Hetzner Robot
@@ -16,6 +18,7 @@ There are several tasks that have to be completed before a workload cluster can
2. Generate an SSH key. You can either upload it via Hetzner Robot UI or just rely on the controller to upload a key that it does not find in the robot API. This is possible, as you have to store the public and private key together with the SSH key's name in a secret that the controller reads.
---
+
## Bootstrap or Management Cluster Installation
### Common Prerequisites
@@ -33,20 +36,23 @@ It is a common practice to create a temporary, local bootstrap cluster, which is
#### 1. Existing Management Cluster.
-For production use, a “real” Kubernetes cluster should be used with appropriate backup and Disaster Recovery policies and procedures in place. The Kubernetes cluster must be at least a [supported version](../../README.md#fire-compatibility-with-cluster-api-and-kubernetes-versions).
+For production use, a “real” Kubernetes cluster should be used with appropriate backup and Disaster Recovery policies and procedures in place. The Kubernetes cluster must be at least a [supported version](https://github.com/syself/cluster-api-provider-hetzner/blob/main/README.md#%EF%B8%8F-compatibility-with-cluster-api-and-kubernetes-versions).
#### 2. Kind.
[kind](https://kind.sigs.k8s.io/) can be used for creating a local Kubernetes cluster for development environments or for the creation of a temporary bootstrap cluster used to provision a target management cluster on the selected infrastructure provider.
---
+
## Install Clusterctl and initialize Management Cluster
### Install Clusterctl
+
Please use the instructions here: https://cluster-api.sigs.k8s.io/user/quick-start.html#install-clusterctl
or use: `make clusterctl`
### Initialize the management cluster
+
Now that we’ve got clusterctl installed and all the prerequisites are in place, we can transform the Kubernetes cluster into a management cluster by using the `clusterctl init` command. More information about clusterctl can be found [here](https://cluster-api.sigs.k8s.io/clusterctl/commands/commands.html).
For the latest version:
@@ -59,6 +65,7 @@ clusterctl init --core cluster-api --bootstrap kubeadm --control-plane kubeadm -
or for a specific [version](https://github.com/syself/cluster-api-provider-hetzner/releases): `--infrastructure hetzner:vX.X.X`
---
+
## Variable Preparation to generate a cluster-template.
```shell
@@ -72,14 +79,14 @@ export HCLOUD_CONTROL_PLANE_MACHINE_TYPE=cpx31 \
export HCLOUD_WORKER_MACHINE_TYPE=cpx31
```
-* HCLOUD_SSH_KEY: The SSH Key name you loaded in HCloud.
-* HCLOUD_REGION: https://docs.hetzner.com/cloud/general/locations/
-* HCLOUD_IMAGE_NAME: The Image name of your operating system.
-* HCLOUD_X_MACHINE_TYPE: https://www.hetzner.com/cloud#pricing
+- HCLOUD_SSH_KEY: The SSH Key name you loaded in HCloud.
+- HCLOUD_REGION: https://docs.hetzner.com/cloud/general/locations/
+- HCLOUD_IMAGE_NAME: The Image name of your operating system.
+- HCLOUD_X_MACHINE_TYPE: https://www.hetzner.com/cloud#pricing
-For a list of all variables needed for generating a cluster manifest (from the cluster-template.yaml), use `clusterctl generate cluster --infrastructure hetzner: --list-variables hetzner-cluster`:
+For a list of all variables needed for generating a cluster manifest (from the cluster-template.yaml), use `clusterctl generate cluster --infrastructure hetzner: --list-variables hetzner-cluster`:
-```
+```shell
$ clusterctl generate cluster --infrastructure hetzner: --list-variables hetzner-cluster
Required Variables:
- HCLOUD_CONTROL_PLANE_MACHINE_TYPE
@@ -96,9 +103,10 @@ Optional Variables:
### Create a secret for hcloud only
-In order for the provider integration hetzner to communicate with the Hetzner API ([HCloud API](https://docs.hetzner.cloud/), we need to create a secret with the access data. The secret must be in the same namespace as the other CRs.
+In order for the provider integration hetzner to communicate with the Hetzner API ([HCloud API](https://docs.hetzner.cloud/)), we need to create a secret with the access data. The secret must be in the same namespace as the other CRs.
`export HCLOUD_TOKEN="" `
+
- HCLOUD_TOKEN: The project where your cluster will be placed. You have to get a token from your HCloud Project.
```shell
@@ -140,4 +148,4 @@ kubectl patch secret robot-ssh -p '{"metadata":{"labels":{"clusterctl.cluster.x-
The secret name and the tokens can also be customized in the cluster template.
-See [node-image](./node-image.md) for more information.
+See [node-image](/docs/caph/02-topics/02-node-image) for more information.
diff --git a/docs/topics/quickstart.md b/docs/caph/01-getting-started/03-quickstart.md
similarity index 83%
rename from docs/topics/quickstart.md
rename to docs/caph/01-getting-started/03-quickstart.md
index d367b2755..97f146de3 100644
--- a/docs/topics/quickstart.md
+++ b/docs/caph/01-getting-started/03-quickstart.md
@@ -1,8 +1,14 @@
-# Quickstart Guide
+---
+title: Quickstart Guide
+---
This guide goes through all the necessary steps to create a cluster on Hetzner infrastructure (on HCloud).
->Note: The cluster templates used in the repository and in this guide for creating clusters are for development purposes only. These templates are not advised to be used in the production environment. However, the software is production-ready and users use it in their production environment. Make your clusters production-ready with the help of Syself Autopilot. For more information, contact .
+{% callout %}
+
+The cluster templates used in this repository and in this guide are for development purposes only and are not advised for production environments. The software itself, however, is production-ready, and users run it in production. Make your clusters production-ready with the help of Syself Autopilot. For more information, contact .
+
+{% /callout %}
## Prerequisites
@@ -37,7 +43,7 @@ There are several tasks that have to be completed before a workload cluster can
1. Create a new [HCloud project](https://console.hetzner.cloud/projects).
1. Generate an API token with read and write access. You'll find this if you click on the project and go to "security".
-1. If you want to use it, generate an SSH key, upload the public key to HCloud (also via "security"), and give it a name. Read more about [Managing SSH Keys](managing-ssh-keys.md).
+1. If you want to use it, generate an SSH key, upload the public key to HCloud (also via "security"), and give it a name. Read more about [Managing SSH Keys](/docs/caph/02-topics/01-managing-ssh-keys).
### Bootstrap or Management Cluster Installation
@@ -56,20 +62,23 @@ It is a common practice to create a temporary, local bootstrap cluster, which is
#### 1. Existing Management Cluster.
-For production use, a “real” Kubernetes cluster should be used with appropriate backup and DR policies and procedures in place. The Kubernetes cluster must be at least a [supported version](../../README.md#fire-compatibility-with-cluster-api-and-kubernetes-versions).
+For production use, a “real” Kubernetes cluster should be used with appropriate backup and DR policies and procedures in place. The Kubernetes cluster must be at least a [supported version](https://github.com/syself/cluster-api-provider-hetzner/blob/main/README.md#%EF%B8%8F-compatibility-with-cluster-api-and-kubernetes-versions).
#### 2. Kind.
[kind](https://kind.sigs.k8s.io/) can be used for creating a local Kubernetes cluster for development environments or for the creation of a temporary bootstrap cluster used to provision a target management cluster on the selected infrastructure provider.
---
+
### Install Clusterctl and initialize Management Cluster
#### Install Clusterctl
+
To install Clusterctl, refer to the instructions available in the official ClusterAPI documentation [here](https://cluster-api.sigs.k8s.io/user/quick-start.html#install-clusterctl).
Alternatively, use the `make install-clusterctl` command to do the same.
#### Initialize the management cluster
+
Now that we’ve got clusterctl installed and all the prerequisites are in place, we can transform the Kubernetes cluster into a management cluster by using the `clusterctl init` command. More information about clusterctl can be found [here](https://cluster-api.sigs.k8s.io/clusterctl/commands/commands.html).
For the latest version:
@@ -78,9 +87,14 @@ For the latest version:
clusterctl init --core cluster-api --bootstrap kubeadm --control-plane kubeadm --infrastructure hetzner
```
->Note: For a specific version, use the `--infrastructure hetzner:vX.X.X` flag with the above command.
+{% callout %}
+
+For a specific version, use the `--infrastructure hetzner:vX.X.X` flag with the above command.
+
+{% /callout %}
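+
+For instance, pinning the provider to a concrete release could look like this (the version shown is illustrative):
+
+```shell
+clusterctl init --core cluster-api --bootstrap kubeadm --control-plane kubeadm --infrastructure hetzner:v1.0.0-beta.33
+```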
---
+
### Variable Preparation to generate a cluster-template
```shell
@@ -94,16 +108,17 @@ export HCLOUD_CONTROL_PLANE_MACHINE_TYPE=cpx31 \
export HCLOUD_WORKER_MACHINE_TYPE=cpx31
```
-* **HCLOUD_SSH_KEY**: The SSH Key name you loaded in HCloud.
-* **HCLOUD_REGION**: The region of the Hcloud cluster. Find the full list of regions [here](https://docs.hetzner.com/cloud/general/locations/).
-* **HCLOUD_IMAGE_NAME**: The Image name of the operating system.
-* **HCLOUD_X_MACHINE_TYPE**: The type of the Hetzner cloud server. Find more information [here](https://www.hetzner.com/cloud#pricing).
+- **HCLOUD_SSH_KEY**: The SSH Key name you loaded in HCloud.
+- **HCLOUD_REGION**: The region of the Hcloud cluster. Find the full list of regions [here](https://docs.hetzner.com/cloud/general/locations/).
+- **HCLOUD_IMAGE_NAME**: The Image name of the operating system.
+- **HCLOUD_X_MACHINE_TYPE**: The type of the Hetzner cloud server. Find more information [here](https://www.hetzner.com/cloud#pricing).
For a list of all variables needed for generating a cluster manifest (from the cluster-template.yaml), use the following command:
-````shell
+```shell
clusterctl generate cluster my-cluster --list-variables
-````
+```
+
Running the above command will give you an output in the following manner:
```shell
@@ -144,15 +159,25 @@ The secret name and the tokens can also be customized in the cluster template.
## Generating the cluster.yaml
The `clusterctl generate cluster` command returns a YAML template for creating a workload cluster.
-It generates a YAML file named `my-cluster.yaml` with a predefined list of Cluster API objects (`Cluster`, `Machines`, `MachineDeployments`, etc.) to be deployed in the current namespace.
+It generates a YAML file named `my-cluster.yaml` with a predefined list of Cluster API objects (`Cluster`, `Machines`, `MachineDeployments`, etc.) to be deployed in the current namespace.
```shell
clusterctl generate cluster my-cluster --kubernetes-version v1.29.4 --control-plane-machine-count=3 --worker-machine-count=3 > my-cluster.yaml
```
->Note: With the `--target-namespace` flag, you can specify a different target namespace.
+
+{% callout %}
+
+With the `--target-namespace` flag, you can specify a different target namespace.
+
Run the `clusterctl generate cluster --help` command for more information.
->**Note**: Please note that ready-to-use Kubernetes configurations, production-ready node images, kubeadm configuration, cluster add-ons like CNI, and similar services need to be separately prepared or acquired to ensure a comprehensive and secure Kubernetes deployment. This is where **Syself Autopilot** comes into play, taking on these challenges to offer you a seamless, worry-free Kubernetes experience. Feel free to contact us via e-mail: .
+{% /callout %}
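+
+As a sketch, generating the manifest into a different namespace might look like this (the namespace name is an example):
+
+```shell
+clusterctl generate cluster my-cluster --kubernetes-version v1.29.4 --control-plane-machine-count=3 --worker-machine-count=3 --target-namespace my-namespace > my-cluster.yaml
+```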
+
+{% callout %}
+
+Please note that ready-to-use Kubernetes configurations, production-ready node images, kubeadm configuration, cluster add-ons like CNI, and similar services need to be separately prepared or acquired to ensure a comprehensive and secure Kubernetes deployment. This is where **Syself Autopilot** comes into play, taking on these challenges to offer you a seamless, worry-free Kubernetes experience. Feel free to contact us via e-mail: .
+
+{% /callout %}
## Applying the workload cluster
@@ -182,7 +207,11 @@ To verify the first control plane is up, use the following command:
kubectl get kubeadmcontrolplane
```
->Note: The control plane won’t be `ready` until we install a CNI in the next step.
+{% callout %}
+
+The control plane won’t be `ready` until we install a CNI in the next step.
+
+{% /callout %}
After the first control plane node is up and running, we can retrieve the kubeconfig of the workload cluster with:
@@ -205,7 +234,11 @@ KUBECONFIG=$CAPH_WORKER_CLUSTER_KUBECONFIG helm upgrade --install cilium cilium/
You can, of course, also install an alternative CNI, e.g., calico.
->Note: There is a bug in Ubuntu that requires the older version of Cilium for this quickstart guide.
+{% callout %}
+
+There is a bug in Ubuntu that requires the older version of Cilium for this quickstart guide.
+
+{% /callout %}
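+
+If you prefer Calico, the installation could look roughly like this (the manifest URL and version are illustrative and not pinned by this guide):
+
+```shell
+KUBECONFIG=$CAPH_WORKER_CLUSTER_KUBECONFIG kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.3/manifests/calico.yaml
+```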
## Deploy the CCM
@@ -215,7 +248,6 @@ The following `make` command will install the CCM in your workload cluster:
`make install-ccm-in-wl-cluster PRIVATE_NETWORK=false`
-
For a cluster without a private network, use the following command:
```shell
@@ -277,7 +309,11 @@ To move the Cluster API objects from your bootstrap cluster to the new managemen
clusterctl init --core cluster-api --bootstrap kubeadm --control-plane kubeadm --infrastructure hetzner
```
->Note: For a specific version, use the flag `--infrastructure hetzner:vX.X.X` with the above command.
+{% callout %}
+
+For a specific version, use the flag `--infrastructure hetzner:vX.X.X` with the above command.
+
+{% /callout %}
You can switch back to the management cluster with the following command:
@@ -293,10 +329,10 @@ clusterctl move --to-kubeconfig $CAPH_WORKER_CLUSTER_KUBECONFIG
Clusterctl Flags:
-| Flag | Description |
-| ------------------------- | ----------------------------------------------------------------------------------------------------------------------------- |
-| _--namespace_ | The namespace where the workload cluster is hosted. If unspecified, the current context's namespace is used. |
-| _--kubeconfig_ | Path to the kubeconfig file for the source management cluster. If unspecified, default discovery rules apply. |
+| Flag | Description |
+| ------------------------- | --------------------------------------------------------------------------------------------------------------------------------- |
+| _--namespace_ | The namespace where the workload cluster is hosted. If unspecified, the current context's namespace is used. |
+| _--kubeconfig_ | Path to the kubeconfig file for the source management cluster. If unspecified, default discovery rules apply. |
| _--kubeconfig-context_ | Context to be used within the kubeconfig file for the source management cluster. If empty, the current context will be used. |
-| _--to-kubeconfig_ | Path to the kubeconfig file to use for the destination management cluster. |
+| _--to-kubeconfig_ | Path to the kubeconfig file to use for the destination management cluster. |
| _--to-kubeconfig-context_ | Context to be used within the kubeconfig file for the destination management cluster. If empty, the current context will be used. |
diff --git a/docs/topics/managing-ssh-keys.md b/docs/caph/02-topics/01-managing-ssh-keys.md
similarity index 89%
rename from docs/topics/managing-ssh-keys.md
rename to docs/caph/02-topics/01-managing-ssh-keys.md
index 1458ad74f..bcbe347c2 100644
--- a/docs/topics/managing-ssh-keys.md
+++ b/docs/caph/02-topics/01-managing-ssh-keys.md
@@ -1,28 +1,38 @@
-## Managing SSH keys
+---
+title: Managing SSH keys
+---
This section provides details about SSH keys and its importance with regards to CAPH.
-### What are SSH keys?
+## What are SSH keys?
SSH keys are a crucial component of secured network communication. They provide a secure and convenient method for authenticating to and communicating with remote servers over unsecured networks. They are used as an access credential in the SSH (Secure Shell) protocol, which is used for logging in remotely from one system to another. SSH keys come in pairs with a public and a private key and its strong encryption is used for executing remote commands and remotely managing vital system components.
-### SSH keys in CAPH
+## SSH keys in CAPH
-In CAPH, SSH keys help in establishing secure communication remotely with Kubernetes nodes running on Hetzner cloud. They help you get complete access to the underlying Kubernetes nodes that are machines provisioned in Hetzner cloud and retrieve required information related to the system. With the help of these keys, you can SSH into the nodes in case of troubleshooting.
+In CAPH, SSH keys help in establishing secure remote communication with the Kubernetes nodes running on Hetzner cloud. They give you complete access to the underlying Kubernetes nodes, which are machines provisioned in Hetzner cloud, and let you retrieve required system information. With these keys, you can SSH into the nodes for troubleshooting.
-### In Hetzner Cloud
-NOTE: You are responsible for uploading your public ssh key to hetzner cloud. This can be done using `hcloud` CLI or hetznercloud console.
-All keys that exist in Hetzner Cloud and are specified in `HetznerCluster` spec are included when provisioning machines. Therefore, they can be used to access those machines via SSH.
+## In Hetzner Cloud
-```bash
+{% callout %}
+
+You are responsible for uploading your public SSH key to Hetzner Cloud. This can be done using the `hcloud` CLI or the Hetzner Cloud Console.
+
+{% /callout %}
+
+All keys that exist in Hetzner Cloud and are specified in `HetznerCluster` spec are included when provisioning machines. Therefore, they can be used to access those machines via SSH.
+
+```shell
hcloud ssh-key create --name caph --public-key-from-file ~/.ssh/hetzner-cluster.pub
```
+
Once this is done, you'll have to reference it while creating your cluster.
For example, if you've specified four keys in your hetzner cloud project and you reference all of them while creating your cluster in `HetznerCluster.spec.sshKeys.hcloud` then you can access the machines with all the four keys.
+
```yaml
- sshKeys:
- hcloud:
+sshKeys:
+ hcloud:
- name: testing
- name: test
- name: hello
@@ -33,7 +43,8 @@ The SSH keys can be either specified cluster-wide in the `HetznerCluster.spec.ss
If one SSH key is changed in the specs of the cluster, then keep in mind that the SSH key is still valid to access all servers that have been created with it. If it is a potential security vulnerability, then all of these servers should be removed and re-created with the new SSH keys.
-### In Hetzner Robot
+## In Hetzner Robot
+
For bare metal servers, two SSH keys are required. One of them is used for the rescue system, and the other for the actual system. The two can, under the hood, of course, be the same. These SSH keys do not have to be uploaded into Robot API but have to be stored in two secrets (again, the same secret is also possible if the same reference is given twice). Not only the name of the SSH key but also the public and private key. The private key is necessary for provisioning the server with SSH. The SSH key for the actual system is specified in `HetznerBareMetalMachineTemplate` - there are no cluster-wide alternatives. The SSH key for the rescue system is defined in a cluster-wide manner in the specs of `HetznerCluster`.
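+
+For example, such a secret (key name, public key, and private key together) can be created like this (a sketch; the secret name and file paths are placeholders):
+
+```shell
+kubectl create secret generic robot-ssh \
+  --from-literal=sshkey-name=test \
+  --from-file=ssh-privatekey="$HOME/.ssh/caph" \
+  --from-file=ssh-publickey="$HOME/.ssh/caph.pub"
+```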
The secret reference to an SSH key cannot be changed - the secret data, i.e., the SSH key, can. The host that is consumed by the `HetznerBareMetalMachine` object reacts in different ways to the change of the secret data of the secret referenced in its specs, depending on its provisioning state. If the host is already provisioned, it will emit an event warning that provisioned hosts can't change SSH keys. The corresponding machine object should instead be deleted and recreated. When the host is provisioning, it restarts this process again if a change of the SSH key makes it necessary. This depends on whether it is the SSH key for the rescue or the actual system and the exact provisioning state.
diff --git a/docs/topics/node-image.md b/docs/caph/02-topics/02-node-image.md
similarity index 93%
rename from docs/topics/node-image.md
rename to docs/caph/02-topics/02-node-image.md
index 13385c19b..56db9afaf 100644
--- a/docs/topics/node-image.md
+++ b/docs/caph/02-topics/02-node-image.md
@@ -1,4 +1,6 @@
-# Node Images
+---
+title: Node Images
+---
## What are node-images?
@@ -15,7 +17,7 @@ There are several ways to achieve this. In the quick-start guide, we use `pre-ku
For Hcloud, there is an alternative way of doing this using Packer. It creates a snapshot to boot from. This makes it easier to version the images, and creating new nodes using this image is faster. The same is possible for Hetzner BareMetal, as we could use installimage and a prepared tarball, which then gets installed as the OS for your nodes.
-To use CAPH in production, it needs a node image. In Hetzner Cloud, it is not possible to upload your own images directly. However, a server can be created, configured, and then snapshotted.
+To use CAPH in production, you need a node image. In Hetzner Cloud, it is not possible to upload your own images directly. However, a server can be created, configured, and then snapshotted.
For this, Packer could be used, which already has support for Hetzner Cloud.
In this repository, there is also an example `Packer node-image`. To use it, do the following:
@@ -30,9 +32,9 @@ packer build --debug --on-error=abort templates/node-image/1.28.9-ubuntu-22-04-c
```
The first command is necessary so that Packer is able to create a server in hcloud.
-The second one creates the server with Packer. If you are developing your own packer image, the third command could be helpful to check what's going wrong.
+The second one creates the server with Packer. If you are developing your own Packer image, the third command can be helpful for checking what's going wrong.
It is essential to know that if you create your own packer image, you need to set a label so that CAPH can find the specified image name. We use for this label the following key: `caph-image-name`
-Please have a look at the image.json of the [example node-image](/templates/node-image/1.28.9-ubuntu-22-04-containerd/image.json).
+Please have a look at the image.json of the [example node-image](https://github.com/syself/cluster-api-provider-hetzner/blob/main/templates/node-image/1.28.9-ubuntu-22-04-containerd/image.json).
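+
+As a quick sanity check after building, you can verify that the snapshot carries the label that CAPH matches on (a sketch; the label value is whatever image name you chose):
+
+```shell
+# List snapshots labeled with the image name that CAPH looks up
+hcloud image list --type snapshot --selector caph-image-name=1.28.9-ubuntu-22-04-containerd
+```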
If you use your own node image, make sure also to use a cluster flavor that has `packer` in its name. The default one uses preKubeadm commands to install all necessary things. This is very helpful for testing but is not recommended in a production system.
diff --git a/docs/topics/production-environment.md b/docs/caph/02-topics/03-production-environment.md
similarity index 69%
rename from docs/topics/production-environment.md
rename to docs/caph/02-topics/03-production-environment.md
index 57cd71ef7..734c544f9 100644
--- a/docs/topics/production-environment.md
+++ b/docs/caph/02-topics/03-production-environment.md
@@ -1,10 +1,14 @@
-# Production Environment Best Practices
+---
+title: Production Environment Best Practices
+---
## HA Cluster API Components
+
The clusterctl CLI will create all four needed components, such as Cluster API (CAPI), cluster-api-bootstrap-provider-kubeadm (CAPBK), cluster-api-control-plane-kubeadm (KCP), and cluster-api-provider-hetzner (CAPH).
-It uses the respective *-components.yaml from the releases. However, these are not highly available. By scaling the components, we can at least reduce the probability of failure. If this is not enough, add anti-affinity rules and PDBs.
+It uses the respective \*-components.yaml from the releases. However, these are not highly available. By scaling the components, we can at least reduce the probability of failure. If this is not enough, add anti-affinity rules and PDBs.
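+
+As a sketch, a PodDisruptionBudget for one of these deployments could look like this (the namespace and labels are assumed to match the upstream `capi-controller-manager` deployment; verify them in your cluster):
+
+```yaml
+apiVersion: policy/v1
+kind: PodDisruptionBudget
+metadata:
+  name: capi-controller-manager
+  namespace: capi-system
+spec:
+  minAvailable: 1
+  selector:
+    matchLabels:
+      control-plane: controller-manager
+```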
Scale up the deployments
+
```shell
kubectl -n capi-system scale deployment capi-controller-manager --replicas=2
diff --git a/docs/topics/advanced-caph.md b/docs/caph/02-topics/04-advanced-caph.md
similarity index 85%
rename from docs/topics/advanced-caph.md
rename to docs/caph/02-topics/04-advanced-caph.md
index 78a9e66b7..084268a00 100644
--- a/docs/topics/advanced-caph.md
+++ b/docs/caph/02-topics/04-advanced-caph.md
@@ -1,20 +1,24 @@
-# Advanced CAPH
+---
+title: Advanced CAPH
+---
## CSR Controller
-For the secure operation of Kubernetes, it is necessary to sign the kubelet serving certificates. By default, these are self-signed by kubeadm. By using the kubelet flag `rotate-server-certificates: "true"`, which can be found in initConfiguration/joinConfiguration.nodeRegistration.kubeletExtraArgs, the kubelet will do a certificate signing request (CSR) to the certificates API of Kubernetes.
+For the secure operation of Kubernetes, it is necessary to sign the kubelet serving certificates. By default, these are self-signed by kubeadm. By using the kubelet flag `rotate-server-certificates: "true"`, which can be found in initConfiguration/joinConfiguration.nodeRegistration.kubeletExtraArgs, the kubelet will do a certificate signing request (CSR) to the certificates API of Kubernetes.
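+
+For illustration, enabling that flag in a kubeadm-based configuration looks roughly like this (an excerpt sketch; surrounding fields are omitted):
+
+```yaml
+initConfiguration:
+  nodeRegistration:
+    kubeletExtraArgs:
+      rotate-server-certificates: "true"
+joinConfiguration:
+  nodeRegistration:
+    kubeletExtraArgs:
+      rotate-server-certificates: "true"
+```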
These CSRs are not approved by default for security reasons. As described in the docs, this should be done manually by the cloud provider or with a custom approval controller. Since the provider integration is the responsible cloud provider in a way, it makes sense to implement such a controller directly here. The CSR controller that we implemented checks the DNS name and the IP address and thus ensures that only those nodes receive the signed certificate that are supposed to.
-For error-free operation, the following kubelet flags should not be set:
-```
+For error-free operation, the following kubelet flags should not be set:
+
+```yaml
tls-cert-file: "/var/lib/kubelet/pki/kubelet-client-current.pem"
-tls-private-key-file: "/var/lib/kubelet/pki/kubelet-client-current.pem"
+tls-private-key-file: "/var/lib/kubelet/pki/kubelet-client-current.pem"
```
-For more information, see:
-* https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/
-* https://kubernetes.io/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/#client-and-serving-certificates
+For more information, see:
+
+- [https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/](https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/)
+- [https://kubernetes.io/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/#client-and-serving-certificates](https://kubernetes.io/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/#client-and-serving-certificates)
## Rate Limits
@@ -28,7 +32,7 @@ We support multi-tenancy. You can start multiple clusters in one Hetzner project
Cluster API allows to [configure Machine Health Checks](https://cluster-api.sigs.k8s.io/tasks/automated-machine-management/healthchecking.html) with custom remediation strategies. This is helpful for our bare metal servers. If the health checks give an outcome that one server cannot be reached, the default strategy would be to delete it. In that case, it would need to be provisioned again. This takes, of course, longer for bare metal servers than for virtual cloud servers. Therefore, we want to try to avoid this with the help of our `HetznerBareMetalRemediationController` and `HCloudRemediationController`. Instead of deleting the object and deprovisioning it, we first try to reboot it and see whether this helps. If it solves the problem, we save a lot of time that is required for re-provisioning it.
-If the MHC is configured to be used with the `HetznerBareMetalRemediationTemplate` (also see the [reference of the object](/docs/reference/hetzner-bare-metal-remediation-template.md)) and `HCloudRemediationTemplate` (also see the [reference of the object](/docs/reference/hcloud-remediation-template.md)), then such an object is created every time the MHC finds an unhealthy machine.
+If the MHC is configured to be used with the `HetznerBareMetalRemediationTemplate` (also see the [reference of the object](/docs/caph/03-reference/07-hetzner-bare-metal-remediation-template)) and `HCloudRemediationTemplate` (also see the [reference of the object](/docs/caph/03-reference/04-hcloud-remediation-template)), then such an object is created every time the MHC finds an unhealthy machine.
The `HetznerBareMetalRemediationController` reconciles this object and then sets an annotation in the relevant `HetznerBareMetalHost` object specifying the desired remediation strategy. At the moment, only "reboot" is supported.
The `HCloudRemediationController` reboots the HCloudMachine directly via the HCloud API. For HCloud servers, there is no other strategy than "reboot" either.
@@ -70,5 +74,4 @@ spec:
type: "Reboot"
retryLimit: 2
timeout: 300s
-
```
diff --git a/docs/topics/upgrade.md b/docs/caph/02-topics/05-upgrading-caph.md
similarity index 76%
rename from docs/topics/upgrade.md
rename to docs/caph/02-topics/05-upgrading-caph.md
index ef45fedec..07f121e47 100644
--- a/docs/topics/upgrade.md
+++ b/docs/caph/02-topics/05-upgrading-caph.md
@@ -1,4 +1,6 @@
-# Upgrading the Kubernetes Cluster API Provider Hetzner
+---
+title: Upgrading the Kubernetes Cluster API Provider Hetzner
+---
This guide explains how to upgrade Cluster API and Cluster API Provider Hetzner (aka CAPH). Additionally, it also references [upgrading your kubernetes version](#external-cluster-api-reference) as part of this guide.
@@ -8,8 +10,8 @@ Connect `kubectl` to the management cluster.
Check, that you are connected to the correct cluster:
-```
-❯ k config current-context
+```shell
+❯ k config current-context
mgm-cluster-admin@mgm-cluster
```
@@ -19,7 +21,7 @@ OK, looks good.
Is clusterctl still up to date?
-```
+```shell
$ clusterctl version -oyaml
clusterctl:
buildDate: "2024-04-09T17:23:12Z"
@@ -35,7 +37,7 @@ clusterctl:
You can see the current version here:
-https://cluster-api.sigs.k8s.io/user/quick-start.html#install-clusterctl
+[https://cluster-api.sigs.k8s.io/user/quick-start.html#install-clusterctl](https://cluster-api.sigs.k8s.io/user/quick-start.html#install-clusterctl)
If your clusterctl is outdated, then upgrade it. See the above URL for details.
@@ -43,7 +45,7 @@ If your clusterctl is outdated, then upgrade it. See the above URL for details.
Have a look at what could get upgraded:
-```
+```shell
$ clusterctl upgrade plan
Checking cert-manager version...
Cert-Manager is already up to date
@@ -67,14 +69,18 @@ Docs: [clusterctl upgrade plan](https://cluster-api.sigs.k8s.io/clusterctl/comma
You might be surprised that for `infrastructure-hetzner`, you see the "Already up to date" message below "NEXT VERSION".
-NOTE: `clusterctl upgrade plan` does not display pre-release versions by default.
+{% callout %}
+
+`clusterctl upgrade plan` does not display pre-release versions by default.
+
+{% /callout %}
## Upgrade cluster-API
We will upgrade cluster API core components to v1.6.3 version.
Use the command, which you saw in the plan:
-```bash
+```shell
$ clusterctl upgrade apply --contract v1beta1
Checking cert-manager version...
Cert-manager is already up to date
@@ -89,9 +95,14 @@ Installing Provider="bootstrap-kubeadm" Version="v1.6.3" TargetNamespace="capi-k
Deleting Provider="control-plane-kubeadm" Version="v1.6.0" Namespace="capi-kubeadm-control-plane-system"
Installing Provider="control-plane-kubeadm" Version="v1.6.3" TargetNamespace="capi-kubeadm-control-plane-system"
```
-Great, cluster-API was upgraded.
-NOTE: If you want to update only one components or update components one by one then there are flags for that under `clusterctl upgrade apply` subcommand like `--bootstrap`, `--control-plane` and `--core`.
+Great, Cluster API was upgraded.
+
+{% callout %}
+
+If you want to update only one component, or update components one by one, there are flags for that under the `clusterctl upgrade apply` subcommand, like `--bootstrap`, `--control-plane`, and `--core`.
+
+{% /callout %}
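+
+For example, upgrading only the core component could look like this (the version is illustrative):
+
+```shell
+clusterctl upgrade apply --core cluster-api:v1.6.3
+```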
## Upgrade CAPH
@@ -99,7 +110,7 @@ You can find the latest version of CAPH here:
https://github.com/syself/cluster-api-provider-hetzner/tags
-```bash
+```shell
$ clusterctl upgrade apply --infrastructure=hetzner:v1.0.0-beta.33
Checking cert-manager version...
Cert-manager is already up to date
@@ -110,13 +121,18 @@ Installing Provider="infrastructure-hetzner" Version="v1.0.0-beta.33" TargetName
```
After the upgrade, you'll notice the new pod spinning up the `caph-system` namespace.
-```bash
+
+```shell
$ kubectl get pods -n caph-system
NAME READY STATUS RESTARTS AGE
caph-controller-manager-85fcb6ffcb-4sj6d 1/1 Running 0 79s
```
-NOTE: Please note that `clusterctl` doesn't support pre-release of GitHub by default so if you want to use a pre-release, you'll have to specify the version such as `hetzner:v1.0.0-beta.33`
+{% callout %}
+
+Please note that `clusterctl` doesn't support GitHub pre-releases by default, so if you want to use a pre-release, you'll have to specify the version explicitly, such as `hetzner:v1.0.0-beta.33`.
+
+{% /callout %}
## Check your cluster
@@ -124,14 +140,15 @@ Check the health of your workload cluster with your preferred tools and ensure t
## External Cluster API Reference
-After upgrading cluster API, you may want to update the Kubernetes version of your controlplane and worker nodes. Those details can be found in the [Cluster API documentation](https://cluster-api.sigs.k8s.io/tasks/upgrading-clusters).
+After upgrading Cluster API, you may want to update the Kubernetes version of your control plane and worker nodes. Those details can be found in the [Cluster API documentation](https://cluster-api.sigs.k8s.io/tasks/upgrading-clusters).
+
+{% callout %}
+
+The update can be done separately on the management cluster and the workload cluster.
-NOTE: The update can be done on either management cluster or workload cluster separately as well.
+{% /callout %}
-You should upgrade your kubernetes version after considering the following:
+You should upgrade your Kubernetes version after considering that a Cluster API minor release supports (when it’s initially created):
-````markdown
-A Cluster API minor release supports (when it’s initially created):
- - 4 Kubernetes minor releases for the management cluster (N - N-3)
- - 6 Kubernetes minor releases for the workload cluster (N - N-5)
-````
+- 4 Kubernetes minor releases for the management cluster (N - N-3)
+- 6 Kubernetes minor releases for the workload cluster (N - N-5)
diff --git a/docs/topics/hetzner-baremetal.md b/docs/caph/02-topics/06-hetzner-baremetal.md
similarity index 76%
rename from docs/topics/hetzner-baremetal.md
rename to docs/caph/02-topics/06-hetzner-baremetal.md
index ae39652b9..2b8cfc291 100644
--- a/docs/topics/hetzner-baremetal.md
+++ b/docs/caph/02-topics/06-hetzner-baremetal.md
@@ -1,41 +1,49 @@
+---
+title: Hetzner Baremetal
+---
-### Hetzner Baremetal
Hetzner have two offerings primarily:
-1. Hetzner Cloud/ Hcloud -> for virtualized servers
-2. Hetzner Dedicated/ Robot -> for bare metal servers
-In this guide, we will focus on creating a cluster from baremetal servers.
+1. `Hetzner Cloud`/`Hcloud` for virtualized servers
+2. `Hetzner Dedicated`/`Robot` for bare metal servers
-### Flavors of Hetzner Baremetal
-Now, there are different ways you can use baremetal servers, you can use them as controlplanes or as worker nodes or both. Based on that we have created some templates and those templates are released as flavors in GitHub releases.
+In this guide, we will focus on creating a cluster from baremetal servers.
+
+## Flavors of Hetzner Baremetal
+
+Now, there are different ways you can use baremetal servers: as control planes, as worker nodes, or both. Based on that, we have created some templates, which are released as flavors in GitHub releases.
These flavors can be consumed using [clusterctl](https://main.cluster-api.sigs.k8s.io/user/quick-start.html#install-clusterctl) tool:
- To use bare metal servers for your deployment, you can choose one of the following flavors:
+To use bare metal servers for your deployment, you can choose one of the following flavors:
+
+| Flavor | What it does |
+| ---------------------------------------------- | ------------------------------------------------------------------------------------------------------------- |
+| `hetzner-baremetal-control-planes-remediation` | Uses bare metal servers for the control plane nodes - with custom remediation (try to reboot machines first) |
+| `hetzner-baremetal-control-planes` | Uses bare metal servers for the control plane nodes - with normal remediation (unprovision/recreate machines) |
+| `hetzner-hcloud-control-planes` | Uses the hcloud servers for the control plane nodes and the bare metal servers for the worker nodes |
+
+{% callout %}
-| Flavor | What it does |
-| -------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------- |
-| hetzner-baremetal-control-planes-remediation | Uses bare metal servers for the control plane nodes - with custom remediation (try to reboot machines first) |
-| hetzner-baremetal-control-planes | Uses bare metal servers for the control plane nodes - with normal remediation (unprovision/recreate machines) |
-| hetzner-hcloud-control-planes | Uses the hcloud servers for the control plane nodes and the bare metal servers for the worker nodes |
+These flavors are only for demonstration purposes and should not be used in production.
+{% /callout %}
-NOTE: These flavors are only for demonstration purposes and should not be used in production.
+## Purchasing Bare Metal Servers
-### Purchasing Bare Metal Servers
+If you want to create a cluster with bare metal servers, you will also need to set up the Robot credentials, as described in the [reference](/docs/caph/03-reference/06-hetzner-bare-metal-machine-template). Note that you need to purchase bare metal servers manually beforehand.
-If you want to create a cluster with bare metal servers, you will also need to set up the robot credentials. For setting robot credentials, as described in the [reference](/docs/reference/hetzner-bare-metal-machine-template.md), you need to purchase bare metal servers beforehand manually.
+## Creating a bootstrap cluster
-### Creating a bootstrap cluster
In this guide, we will focus on creating a bootstrap cluster which is basically a local management cluster created using [kind](https://kind.sigs.k8s.io).
-To create a bootstrap cluster, you can use the following command:
+To create a bootstrap cluster, you can use the following command:
-```bash
-kind create cluster
+```shell
+kind create cluster
```
-```bash
+```shell
Creating cluster "kind" ...
✓ Ensuring node image (kindest/node:v1.29.2) 🖼
✓ Preparing nodes 📦
@@ -51,9 +59,9 @@ kubectl cluster-info --context kind-kind
Have a question, bug, or feature request? Let us know! https://kind.sigs.k8s.io/#community 🙂
```
-After creating the bootstrap cluster, it is also required to have some variables exported and the name of the variables that needs to be exported can be known by running the following command:
+After creating the bootstrap cluster, you also need to export some variables. You can find out which variables are required by running the following command:
-```bash
+```shell
$ clusterctl generate cluster my-cluster --list-variables --flavor hetzner-hcloud-control-planes
Required Variables:
- HCLOUD_CONTROL_PLANE_MACHINE_TYPE
@@ -70,12 +78,13 @@ Optional Variables:
These variables are used during the deployment of Hetzner infrastructure provider in the cluster.
-Installing the Hetzner provider can be done using the following command:
-```bash
+Installing the Hetzner provider can be done using the following command:
+
+```shell
clusterctl init --infrastructure hetzner
-````
+```
-```bash
+```shell
Fetching providers
Installing cert-manager Version="v1.14.2"
Waiting for cert-manager to be available...
@@ -91,55 +100,68 @@ You can now create your first workload cluster by running the following:
clusterctl generate cluster [name] --kubernetes-version [version] | kubectl apply -f -
```
-### Generating Workload Cluster Manifest
+## Generating Workload Cluster Manifest
Once the infrastructure provider is ready, we can create a workload cluster manifest using `clusterctl generate`
-```bash
+```shell
clusterctl generate cluster my-cluster --flavor hetzner-hcloud-control-planes > my-cluster.yaml
```
As of now, our cluster manifest lives in `my-cluster.yaml` file and we will apply this at a later stage after preparing secrets and ssh-keys.
-### Preparing Hetzner Robot
+## Preparing Hetzner Robot
1. Create a new web service user. [Here](https://robot.your-server.de/preferences/index), you can define a password and copy your user name.
2. Generate an SSH key. You can either upload it via Hetzner Robot UI or just rely on the controller to upload a key that it does not find in the robot API. You have to store the public and private key together with the SSH key's name in a secret that the controller reads.
For this tutorial, we will let the controller upload keys to hetzner robot.
-#### Creating new user in Robot
+### Creating a new user in Robot
+
To create new user in Robot, click on the `Create User` button in the Hetzner Robot console. Once you create the new user, a user ID will be provided to you via email from Hetzner Robot. The password will be the same that you used while creating the user.
-![robot user](../pics/robot_user.png)
+![robot user](https://syself.com/images/robot-user.png)
This is a required for following the next step.
-### creating and verify ssh-key in hcloud
-First you need to create a ssh-key locally and you can `ssh-keygen` command for creation.
-```bash
+## Creating and verifying an SSH key in HCloud
+
+First, you need to create an SSH key locally; you can use the `ssh-keygen` command for this:
+
+```shell
ssh-keygen -t ed25519 -f ~/.ssh/caph
```
+
+The above command will create a public and a private key in your `~/.ssh` directory.
You can use the public key `~/.ssh/caph.pub` and upload it to your hcloud project. Go to your project and under `Security` -> `SSH Keys` click on `Add SSH key` and add your public key there and in the `Name` of ssh key you'll use the name `test`.
-NOTE: There is also a helper CLI called [hcloud](https://github.com/hetznercloud/cli) that can be used for the purpose of uploading the SSH key.
+{% callout %}
+
+There is also a helper CLI called [hcloud](https://github.com/hetznercloud/cli) that can be used for the purpose of uploading the SSH key.
+
+{% /callout %}
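+
+If you'd rather use the CLI than the console, the upload might look like this (the key name `test` matches what the generated manifest references):
+
+```shell
+hcloud ssh-key create --name test --public-key-from-file ~/.ssh/caph.pub
+```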
-In the above step, the name of the ssh-key that is recognized by hcloud is `test`. This is important because we will reference the name of the ssh-key later.
+In the above step, the name of the ssh-key that is recognized by hcloud is `test`. This is important because we will reference the name of the ssh-key later.
This is an important step because the same ssh key is used to access the servers. Make sure you are using the correct ssh key name.
-The `test` is the name of the ssh key that we have created above. It is because the generated manifest references `test` as the ssh key name.
+`test` is the name of the SSH key that we created above; the generated manifest references `test` as the SSH key name.
+
```yaml
- sshKeys:
- hcloud:
+sshKeys:
+ hcloud:
- name: test
```
-NOTE: If you want to use some other name then you can modify it accordingly.
+{% callout %}
+
+If you want to use some other name then you can modify it accordingly.
+
+{% /callout %}
-### Create Secrets In Management Cluster (Hcloud + Robot)
+## Create Secrets In Management Cluster (Hcloud + Robot)
In order for the provider integration hetzner to communicate with the Hetzner API ([HCloud API](https://docs.hetzner.cloud/) + [Robot API](https://robot.your-server.de/doc/webservice/en.html#preface)), we need to create secrets with the access data. The secret must be in the same namespace as the other CRs.
@@ -154,11 +176,11 @@ export HETZNER_SSH_PUB_PATH="" \
export HETZNER_SSH_PRIV_PATH=""
```
-- HCLOUD_TOKEN: The project where your cluster will be placed. You have to get a token from your HCloud Project.
-- HETZNER_ROBOT_USER: The User you have defined in Robot under settings/web.
-- HETZNER_ROBOT_PASSWORD: The Robot Password you have set in Robot under settings/web.
-- HETZNER_SSH_PUB_PATH: The Path to your generated Public SSH Key.
-- HETZNER_SSH_PRIV_PATH: The Path to your generated Private SSH Key. This is needed because CAPH uses this key to provision the node in Hetzner Dedicated.
+- `HCLOUD_TOKEN`: The API token of the HCloud project where your cluster will be placed. You have to get this token from your HCloud project.
+- `HETZNER_ROBOT_USER`: The User you have defined in Robot under settings/web.
+- `HETZNER_ROBOT_PASSWORD`: The Robot Password you have set in Robot under settings/web.
+- `HETZNER_SSH_PUB_PATH`: The Path to your generated Public SSH Key.
+- `HETZNER_SSH_PRIV_PATH`: The Path to your generated Private SSH Key. This is needed because CAPH uses this key to provision the node in Hetzner Dedicated.
```shell
kubectl create secret generic hetzner --from-literal=hcloud=$HCLOUD_TOKEN --from-literal=robot-user=$HETZNER_ROBOT_USER --from-literal=robot-password=$HETZNER_ROBOT_PASSWORD
@@ -166,7 +188,11 @@ kubectl create secret generic hetzner --from-literal=hcloud=$HCLOUD_TOKEN --from
kubectl create secret generic robot-ssh --from-literal=sshkey-name=test --from-file=ssh-privatekey=$HETZNER_SSH_PRIV_PATH --from-file=ssh-publickey=$HETZNER_SSH_PUB_PATH
```
-> NOTE: sshkey-name should must match the name that is present in hetzner otherwise the controller will not know how to reach the machine.
+{% callout %}
+
+`sshkey-name` must match the name that is present in Hetzner; otherwise, the controller will not know how to reach the machine.
+
+{% /callout %}
Patch the created secrets so that they get automatically moved to the target cluster later. The following command helps you do that:
@@ -177,9 +203,7 @@ kubectl patch secret robot-ssh -p '{"metadata":{"labels":{"clusterctl.cluster.x-
The secret name and the tokens can also be customized in the cluster template.
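+
+For illustration, a sketch of labeling both secrets with the standard clusterctl "move" label:
+
+```shell
+# Label both secrets so that "clusterctl move" transfers them to the target cluster.
+kubectl patch secret hetzner -p '{"metadata":{"labels":{"clusterctl.cluster.x-k8s.io/move":""}}}'
+kubectl patch secret robot-ssh -p '{"metadata":{"labels":{"clusterctl.cluster.x-k8s.io/move":""}}}'
+```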
-
-### Creating Host Object In Management Cluster
-
+## Creating Host Object In Management Cluster
To use bare metal servers as nodes, you need to create a `HetznerBareMetalHost` object for each bare metal server that you bought and specify its server ID in the specs. Below is a sample manifest for the `HetznerBareMetalHost` object.
@@ -197,7 +221,7 @@ spec:
maintenanceMode: false
```
-If you already know the WWN of the storage device you want to choose for booting, specify it in the `rootDeviceHints` of the object. If not, you can proceed. During the provisioning process, the controller will fetch information about all available storage devices and store it in the status of the object.
+If you already know the WWN of the storage device you want to choose for booting, specify it in the `rootDeviceHints` of the object. If not, you can proceed. During the provisioning process, the controller will fetch information about all available storage devices and store it in the status of the object.
For example, let's consider a `HetznerBareMetalHost` object without specifying its WWN.
@@ -212,10 +236,12 @@ spec:
serverID: # please check robot console
maintenanceMode: false
```
+
In the above manifest, we have not specified the WWN of the server, and we have applied it to the cluster.
-After a while, you will see that there is an error in provisioning of `HetznerBareMetalHost` object that you just applied above. The error will look the following:
-```bash
+After a while, you will see that there is an error in the provisioning of the `HetznerBareMetalHost` object that you just applied. The error will look like the following:
+
+```shell
$ kubectl get hetznerbaremetalhost -A
default my-cluster-md-1-tgvl5 my-cluster default/test-bm-gpu my-cluster-md-1-t9znj-694hs Provisioning 23m ValidationFailed no root device hints specified
```
@@ -224,26 +250,27 @@ After you see the error, get the YAML output of the `HetznerBareMetalHost` objec
```yaml
storage:
-- hctl: "2:0:0:0"
- model: Micron_1100_MTFDDAK512TBN
- name: sda
- serialNumber: 18081BB48B25
- sizeBytes: 512110190592
- sizeGB: 512
- vendor: 'ATA '
- wwn: "0x500a07511bb48b25"
-- hctl: "1:0:0:0"
- model: Micron_1100_MTFDDAK512TBN
- name: sdb
- serialNumber: 18081BB48992
- sizeBytes: 512110190592
- sizeGB: 512
- vendor: 'ATA '
- wwn: "0x500a07511bb48992"
+ - hctl: "2:0:0:0"
+ model: Micron_1100_MTFDDAK512TBN
+ name: sda
+ serialNumber: 18081BB48B25
+ sizeBytes: 512110190592
+ sizeGB: 512
+ vendor: "ATA "
+ wwn: "0x500a07511bb48b25"
+ - hctl: "1:0:0:0"
+ model: Micron_1100_MTFDDAK512TBN
+ name: sdb
+ serialNumber: 18081BB48992
+ sizeBytes: 512110190592
+ sizeGB: 512
+ vendor: "ATA "
+ wwn: "0x500a07511bb48992"
```
-In the output above, we can see that on this baremetal servers we have two disk with their respective `Wwn`. We can also verify it by making an ssh connection to the rescue system and executing the following command:
-```bash
+In the output above, we can see that this bare metal server has two disks, each with its respective WWN. We can also verify this by making an SSH connection to the rescue system and executing the following command:
+
+```shell
# lsblk --nodeps --output name,type,wwn
NAME TYPE WWN
sda disk 0x500a07511bb48992
@@ -252,23 +279,37 @@ sdb disk 0x500a07511bb48b25
Since we have now confirmed the WWNs of the two disks, we can use either of them. We will use `kubectl edit` to update the following information in the `HetznerBareMetalHost` object.
-NOTE: Defining `rootDeviceHints` on your baremetal server is important otherwise the baremetal server will not be able join the cluster.
+{% callout %}
+
+Defining `rootDeviceHints` on your bare metal server is important; otherwise, the server will not be able to join the cluster.
+
+{% /callout %}
```yaml
rootDeviceHints:
wwn: "0x500a07511bb48992"
```
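+
+For example, a minimal sketch of this edit (the host name is a placeholder):
+
+```shell
+# Replace <host-name> with the name shown by "kubectl get hetznerbaremetalhost -A",
+# then add the rootDeviceHints shown above to the spec.
+kubectl edit hetznerbaremetalhost <host-name>
+```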
-NOTE: If you've more than one disk then it's recommended to use smaller disk for OS installation so that we can retain the data in between provisioning of machine.
+{% callout %}
+
+If you have more than one disk, it is recommended to use the smaller disk for the OS installation so that the data on the other disks can be retained between provisionings of the machine.
+
+{% /callout %}
Apply this change to the cluster, and the provisioning of the machine will be successful.
-To summarize, if you don't know the WWN of your server then there are two ways to find it out:
+To summarize, if you don't know the WWN of your server, there are two ways to find it out:
+
1. Create the HetznerBareMetalHost without WWN and wait for the controller to fetch all information about the available storage devices. Afterwards, look at the status of `HetznerBareMetalHost` by running `kubectl get hetznerbaremetalhost -o yaml` in your management cluster. There you will find the `hardwareDetails` of all of your bare metal hosts, in which you can see a list of all the relevant storage devices as well as their properties. You can copy+paste the WWN of your desired storage device into the `rootDeviceHints` of your `HetznerBareMetalHost` objects.
2. SSH into the rescue system of the server and use `lsblk --nodeps --output name,type,wwn`
-NOTE: There might be cases where you've more than one disk.
-```bash
+{% callout %}
+
+There might be cases where you have more than one disk.
+
+{% /callout %}
+
+```shell
lsblk -d -o name,type,wwn,size
NAME TYPE WWN SIZE
sda disk 238.5G
@@ -277,19 +318,23 @@ sdc disk 1.8T
sdd disk 1.8T
```
-In the above case, you can use any of the four disks available to you on a baremetal server.
+In the above case, you can use any of the four disks available to you on the bare metal server.
+## Creating Workload Cluster
-### Creating Workload Cluster
+{% callout %}
-NOTE: Secrets as of now are hardcoded given we are using a flavor which is essentially a template. If you want to use your own naming convention for secrets then you'll have to update the templates. Please make sure that you pay attention to the sshkey name.
+For now, the secret names are hardcoded, because we are using a flavor, which is essentially a template. If you want to use your own naming convention for secrets, you will have to update the templates. Please make sure that you pay attention to the SSH key name.
+
+{% /callout %}
Since we have already created the Hetzner Robot and HCloud credentials, as well as the SSH keys, as secrets in the management cluster, we can now apply the cluster manifest.
-```bash
+
+```shell
kubectl apply -f my-cluster.yaml
```
-```bash
+```shell
$ kubectl apply -f my-cluster.yaml
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/my-cluster-md-0 created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/my-cluster-md-1 created
@@ -308,23 +353,32 @@ hetznerbaremetalmachinetemplate.infrastructure.cluster.x-k8s.io/my-cluster-md-1
hetznercluster.infrastructure.cluster.x-k8s.io/my-cluster created
```
-### Getting the kubeconfig of workload cluster
-After a while, our first controlplane should be up and running. You can verify it using the output of `kubectl get kcp` followed by `kubectl get machines`
+## Getting the kubeconfig of workload cluster
+
+After a while, our first control plane should be up and running. You can verify this using the output of `kubectl get kcp` followed by `kubectl get machines`.
-Once it's up and running, you can get the kubeconfig of the workload cluster using the following command:
+Once it's up and running, you can get the kubeconfig of the workload cluster using the following command:
-```bash
+```shell
clusterctl get kubeconfig my-cluster > workload-kubeconfig
chmod go-r workload-kubeconfig # required to avoid helm warning
```
-### Deploy Cluster Addons
+## Deploy Cluster Addons
+
+{% callout %}
+
+This step is important for the functioning of the cluster; without these addons, the cluster won't work.
+
+{% /callout %}
+
+### Deploying the Hetzner Cloud Controller Manager
-NOTE: This is important for the functioning of the cluster otherwise the cluster won't work.
+{% callout %}
-#### Deploying the Hetzner Cloud Controller Manager
+This requires a secret containing access credentials to both Hetzner Robot and HCloud.
-> This requires a secret containing access credentials to both Hetzner Robot and HCloud.
+{% /callout %}
If you have configured your secret correctly in the previous step, then you already have the secret in your cluster.
Let's deploy the Hetzner CCM Helm chart.
@@ -346,9 +400,11 @@ REVISION: 1
TEST SUITE: None
```
-#### Installing CNI
+### Installing CNI
+
For the CNI, let's deploy Cilium in the workload cluster to facilitate networking in the cluster.
-```bash
+
+```shell
$ helm install cilium cilium/cilium --version 1.15.3 --kubeconfig workload-kubeconfig
NAME: cilium
LAST DEPLOYED: Thu Apr 4 21:11:13 2024
@@ -364,9 +420,11 @@ Your release version is 1.15.3.
For any further help, visit https://docs.cilium.io/en/v1.15/gettinghelp
```
-### verifying the cluster
+### Verifying the cluster
+
Now, the cluster should be up and you can verify it by running the following commands:
-```bash
+
+```shell
$ kubectl get clusters -A
NAMESPACE NAME CLUSTERCLASS PHASE AGE VERSION
default my-cluster Provisioned 10h
@@ -381,16 +439,17 @@ default my-cluster-md-0-2xgj5-tl2jr my-cluster my-cluster-md-0-59cgw
default my-cluster-md-1-cp2fd-7nld7 my-cluster bm-my-cluster-md-1-d7526 hcloud://bm-2317525 Running 9h v1.29.4
default my-cluster-md-1-cp2fd-n74sm my-cluster bm-my-cluster-md-1-l5dnr hcloud://bm-2105469 Running 10h v1.29.4
```
+
+Please note that HCloud servers are prefixed with `hcloud://` and bare metal servers are prefixed with `hcloud://bm-`.
## Advanced
### Constant hostnames for bare metal servers
-In some cases it has advantages to fix the hostname and with it the names of nodes in your clusters. For cloud servers not so much as for bare metal servers, where there are storage integrations that allow you to use the storage of the bare metal servers and that work with fixed node names.
+In some cases, it is advantageous to fix the hostname, and with it the names of the nodes in your clusters. This matters less for cloud servers than for bare metal servers, where there are storage integrations that allow you to use the storage of the bare metal servers and that rely on fixed node names.
Therefore, it is possible to create a cluster that uses fixed node names for bare metal servers. Please note that this only applies to bare metal servers, not to Hetzner Cloud servers.
-You can trigger this feature by creating a `Cluster` or `HetznerBareMetalMachine` (you can choose) with the annotation `"capi.syself.com/constant-bare-metal-hostname": "true"`. Of course, `HetznerBareMetalMachines` are not created by the user. However, if you use the `ClusterClass`, then you can add the annotation to a `MachineDeployment`, so that all machines are created with this annotation.
+You can trigger this feature by creating a `Cluster` or `HetznerBareMetalMachine` (you can choose) with the annotation `"capi.syself.com/constant-bare-metal-hostname": "true"`. Of course, `HetznerBareMetalMachines` are not created by the user. However, if you use the `ClusterClass`, then you can add the annotation to a `MachineDeployment`, so that all machines are created with this annotation.
-This is still an experimental feature but it should be safe to use and to also update existing clusters with this annotation. All new machines will be created with this constant hostname.
\ No newline at end of file
+This is still an experimental feature but it should be safe to use and to also update existing clusters with this annotation. All new machines will be created with this constant hostname.
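+
+For illustration, an abbreviated sketch (only the relevant fields are shown, and names are illustrative) of setting this annotation on a `MachineDeployment` so that all machines created from it inherit it:
+
+```yaml
+apiVersion: cluster.x-k8s.io/v1beta1
+kind: MachineDeployment
+metadata:
+  name: my-cluster-md-1
+spec:
+  template:
+    metadata:
+      annotations:
+        # Propagated to all machines created from this MachineDeployment.
+        "capi.syself.com/constant-bare-metal-hostname": "true"
+```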
diff --git a/docs/caph/03-reference/01-introduction.md b/docs/caph/03-reference/01-introduction.md
new file mode 100644
index 000000000..ffffe6de5
--- /dev/null
+++ b/docs/caph/03-reference/01-introduction.md
@@ -0,0 +1,18 @@
+---
+title: Object Reference
+---
+
+In this object reference, we introduce all objects that are specific for this provider integration. The naming of objects, servers, machines, etc. can be confusing. Without claiming to be consistent throughout these docs, we would like to give an overview of how we name things here.
+
+First, there are some important counterparts of our objects and CAPI objects.
+
+- `HetznerCluster` is the counterpart of CAPI's `Cluster` object.
+- CAPI's `Machine` object is the counterpart of both `HCloudMachine` and `HetznerBareMetalMachine`.
+
+These two are objects of the provider integration that are reconciled by the `HCloudMachineController` and the `HetznerBareMetalMachineController` respectively.
+
+The `HCloudMachineController` checks whether there is a server in the HCloud API already and if not, buys/creates one that corresponds to a `HCloudMachine` object.
+
+The `HetznerBareMetalMachineController` does not buy new bare metal machines, but instead consumes a host of the inventory of `HetznerBareMetalHosts`, which have a one-to-one relationship to Hetzner dedicated/root/bare metal servers that have been bought manually by the user.
+
+Therefore, there is an important difference between the `HCloudMachine` object and a server in the HCloud API. For bare metal, we have even three terms: the `HetznerBareMetalMachine` object, the `HetznerBareMetalHost` object, and the actual bare metal server that can be accessed through Hetzner's robot API.
diff --git a/docs/caph/03-reference/02-hetzner-cluster.md b/docs/caph/03-reference/02-hetzner-cluster.md
new file mode 100644
index 000000000..b1e19a80d
--- /dev/null
+++ b/docs/caph/03-reference/02-hetzner-cluster.md
@@ -0,0 +1,70 @@
+---
+title: HetznerCluster
+---
+
+In `HetznerCluster` you can define everything related to the general components of the cluster, as well as those properties that are valid cluster-wide.
+
+There are two different modes for the cluster: a pure HCloud cluster, and a cluster that uses Hetzner dedicated (bare metal) servers, either as control planes or as workers.
+
+The HCloud cluster works with Kubeadm and supports private networks.
+
+In a cluster that includes bare metal servers, there are no private networks, as this feature has not yet been integrated into cluster-api-provider-hetzner. Apart from SSH, the node image has to support cloud-init, which we use to provision the bare metal machines.
+
+{% callout %}
+
+In clusters with bare metal servers, you need to use [this CCM](https://github.com/syself/hetzner-cloud-controller-manager), as the official one does not support bare metal.
+
+{% /callout %}
+
+[Here](/docs/caph/02-topics/01-managing-ssh-keys) you can find more information regarding the handling of SSH keys. Some of them are specified in `HetznerCluster` to have them cluster-wide, others are machine-scoped.
+
+## Usage without HCloud Load Balancer
+
+It is also possible not to use the cloud load balancer from Hetzner. This is useful for setups with only one control plane, or if you have your own cloud load balancer.
+
+Setting `controlPlaneLoadBalancer.enabled=false` prevents the creation of an HCloud load balancer. You then need to configure `controlPlaneEndpoint.port=6443` and `controlPlaneEndpoint.host`, the latter being, for example, a domain with A records pointing to the control plane IPs.
+
+If you are using your own load balancer, you need to point towards it and configure the load balancer to target the control planes of the cluster.
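+
+For illustration, an abbreviated sketch of a `HetznerCluster` without an HCloud load balancer (host and port values are illustrative):
+
+```yaml
+apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
+kind: HetznerCluster
+metadata:
+  name: my-cluster
+spec:
+  controlPlaneLoadBalancer:
+    enabled: false
+  controlPlaneEndpoint:
+    # A domain with A records pointing to the control plane IPs.
+    host: k8s-api.example.com
+    port: 6443
+```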
+
+## Overview of HetznerCluster.Spec
+
+| Key | Type | Default | Required | Description |
+| -------------------------------------------------------- | ---------- | ---------------- | -------- | --------------------------------------------------------------------------------------------------------------------------------------------- |
+| `hcloudNetwork` | `object` | | no | Specifies details about Hetzner cloud private networks |
+| `hcloudNetwork.enabled` | `bool` | | yes | States whether network should be enabled or disabled |
+| `hcloudNetwork.cidrBlock` | `string` | `"10.0.0.0/16"` | no | Defines the CIDR block |
+| `hcloudNetwork.subnetCidrBlock` | `string` | `"10.0.0.0/24"` | no | Defines the CIDR block of the subnet. Note that one subnet is required |
+| `hcloudNetwork.networkZone` | `string` | `"eu-central"` | no | Defines the network zone. Must be eu-central, us-east or us-west |
+| `controlPlaneRegions` | `[]string` | `[]string{fsn1}` | no | This is the base for the failureDomains of the cluster |
+| `sshKeys` | `object` | | no | Cluster-wide SSH keys that serve as default for machines as well |
+| `sshKeys.hcloud` | `[]object` | | no | SSH keys for hcloud |
+| `sshKeys.hcloud.name` | `string` | | yes | Name of SSH key |
+| `sshKeys.hcloud.fingerprint` | `string` | | no | Fingerprint of SSH key - used by the controller |
+| `sshKeys.robotRescueSecretRef` | `object` | | no | Reference to the secret where the SSH key for the rescue system is stored |
+| `sshKeys.robotRescueSecretRef.name` | `string` | | yes | Name of the secret |
+| `sshKeys.robotRescueSecretRef.key` | `object` | | yes | Details about the keys used in the data of the secret |
+| `sshKeys.robotRescueSecretRef.key.name` | `string` | | yes | Name is the key in the secret's data where the SSH key's name is stored |
+| `sshKeys.robotRescueSecretRef.key.publicKey` | `string` | | yes | PublicKey is the key in the secret's data where the SSH key's public key is stored |
+| `sshKeys.robotRescueSecretRef.key.privateKey` | `string` | | yes | PrivateKey is the key in the secret's data where the SSH key's private key is stored |
+| `controlPlaneEndpoint` | `object` | | no | Set by the controller. It is the endpoint to communicate with the control plane |
+| `controlPlaneEndpoint.host` | `string` | | yes | Defines host |
+| `controlPlaneEndpoint.port` | `int32` | | yes | Defines port |
+| `controlPlaneLoadBalancer` | `object` | | yes | Defines specs of load balancer |
+| `controlPlaneLoadBalancer.enabled` | `bool` | `true` | no | Specifies if a load balancer should be created |
+| `controlPlaneLoadBalancer.name` | `string` | | no | Name of load balancer |
+| `controlPlaneLoadBalancer.algorithm` | `string` | `round_robin` | no | Type of load balancer algorithm. Either round_robin or least_connections |
+| `controlPlaneLoadBalancer.type` | `string` | `lb11` | no | Type of load balancer. One of lb11, lb21, lb31 |
+| `controlPlaneLoadBalancer.port` | `int` | `6443` | no | Load balancer port. Must be in range 1-65535 |
+| `controlPlaneLoadBalancer.extraServices` | `[]object` | | no | Defines extra services of load balancer |
+| `controlPlaneLoadBalancer.extraServices.protocol` | `string` | | yes | Defines protocol. Must be one of https, http, or tcp |
+| `controlPlaneLoadBalancer.extraServices.listenPort` | `int` | | yes | Defines listen port. Must be in range 1-65535 |
+| `controlPlaneLoadBalancer.extraServices.destinationPort` | `int` | | yes | Defines destination port. Must be in range 1-65535 |
+| `hcloudPlacementGroup` | `[]object` | | no | List of placement groups that should be defined in Hetzner API |
+| `hcloudPlacementGroup.name` | `string` | | yes | Name of placement group |
+| `hcloudPlacementGroup.type` | `string` | `spread` | no | Type of placement group. Hetzner only supports 'spread' |
+| `hetznerSecret` | `object` | | yes | Reference to secret where Hetzner API credentials are stored |
+| `hetznerSecret.name` | `string` | | yes | Name of secret |
+| `hetznerSecret.key` | `object` | | yes | Reference to the keys that are used in the secret, either `hcloudToken` or `hetznerRobotUser` and `hetznerRobotPassword` need to be specified |
+| `hetznerSecret.key.hcloudToken` | `string` | | no | Name of the key where the token for the Hetzner Cloud API is stored |
+| `hetznerSecret.key.hetznerRobotUser` | `string` | | no | Name of the key where the username for the Hetzner Robot API is stored |
+| `hetznerSecret.key.hetznerRobotPassword` | `string` | | no | Name of the key where the password for the Hetzner Robot API is stored |
diff --git a/docs/caph/03-reference/03-hcloud-machine-template.md b/docs/caph/03-reference/03-hcloud-machine-template.md
new file mode 100644
index 000000000..ad6913a4c
--- /dev/null
+++ b/docs/caph/03-reference/03-hcloud-machine-template.md
@@ -0,0 +1,21 @@
+---
+title: HCloudMachineTemplate
+---
+
+In `HCloudMachineTemplate` you can define all important properties for `HCloudMachines`. `HCloudMachines` are reconciled by the `HCloudMachineController`, which creates and deletes servers in Hetzner Cloud.
+
+## Overview of HCloudMachineTemplate.Spec
+
+| Key | Type | Default | Required | Description |
+| ------------------------------------------ | ---------- | --------------------------------------- | -------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| `template.spec.providerID` | `string` | | no | ProviderID set by controller |
+| `template.spec.type` | `string` | | yes | Desired server type of server in Hetzner's Cloud API. Example: cpx11 |
+| `template.spec.imageName` | `string` | | yes | Specifies desired image of server. ImageName can reference an image uploaded to Hetzner API in two ways: either directly as name of an image, or as label of an image (see [here](/docs/caph/02-topics/02-node-image) for more details) |
+| `template.spec.sshKeys` | `object` | | no | SSHKeys that are scoped to this machine |
+| `template.spec.sshKeys.hcloud` | `[]object` | | no | SSH keys for HCloud |
+| `template.spec.sshKeys.hcloud.name` | `string` | | yes | Name of SSH key |
+| `template.spec.sshKeys.hcloud.fingerprint` | `string` | | no | Fingerprint of SSH key - used by the controller |
+| `template.spec.placementGroupName` | `string` | | no | Placement group of the machine in HCloud API, must be referencing an existing placement group |
+| `template.spec.publicNetwork` | `object` | `{enableIPv4: true, enableIPv6: true}` | no | Specs about primary IP address of server. If both IPv4 and IPv6 are disabled, then the private network has to be enabled |
+| `template.spec.publicNetwork.enableIPv4` | `bool` | `true` | no | Defines whether server has IPv4 address enabled. As Hetzner load balancers require an IPv4 address, this setting will be ignored and set to true if there is no private net. |
+| `template.spec.publicNetwork.enableIPv6` | `bool` | `true` | no | Defines whether server has IPv6 address enabled |
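+
+For illustration, an abbreviated sketch of an `HCloudMachineTemplate` (values are illustrative; the SSH key name must exist in your HCloud project):
+
+```yaml
+apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
+kind: HCloudMachineTemplate
+metadata:
+  name: my-cluster-md-0
+spec:
+  template:
+    spec:
+      type: cpx31
+      imageName: test-image
+      sshKeys:
+        hcloud:
+          - name: test
+```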
diff --git a/docs/caph/03-reference/04-hcloud-remediation-template.md b/docs/caph/03-reference/04-hcloud-remediation-template.md
new file mode 100644
index 000000000..2e37f88a3
--- /dev/null
+++ b/docs/caph/03-reference/04-hcloud-remediation-template.md
@@ -0,0 +1,12 @@
+---
+title: HCloudRemediationTemplate
+---
+
+## Overview of HCloudRemediationTemplate.Spec
+
+| Key | Type | Default | Required | Description |
+| ------------------------------------------ | ---------- | --------------------------------------- | -------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| `template.spec.strategy` | `object` | | no | Strategy field defines remediation strategy |
+| `template.spec.strategy.retryLimit` | `int` | | no | RetryLimit sets the maximum number of remediation retries. Zero retries if not set |
+| `template.spec.strategy.timeout` | `string` | | yes | Timeout sets the timeout between remediation retries. It should be of the form "10m", or "40s" |
+| `template.spec.strategy.type` | `string` | | no | Type represents the type of the remediation strategy. At the moment, only "Reboot" is supported |
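+
+For illustration, an abbreviated sketch of an `HCloudRemediationTemplate` (names and values are illustrative):
+
+```yaml
+apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
+kind: HCloudRemediationTemplate
+metadata:
+  name: my-cluster-remediation
+spec:
+  template:
+    spec:
+      strategy:
+        type: Reboot
+        retryLimit: 3
+        timeout: 180s
+```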
diff --git a/docs/reference/hetzner-bare-metal-host.md b/docs/caph/03-reference/05-hetzner-bare-metal-host.md
similarity index 61%
rename from docs/reference/hetzner-bare-metal-host.md
rename to docs/caph/03-reference/05-hetzner-bare-metal-host.md
index eb6c34727..4f2f5943b 100644
--- a/docs/reference/hetzner-bare-metal-host.md
+++ b/docs/caph/03-reference/05-hetzner-bare-metal-host.md
@@ -1,10 +1,12 @@
-## HetznerBareMetalHost
+---
+title: HetznerBareMetalHost
+---
The `HetznerBareMetalHost` object has a one-to-one relationship to a Hetzner dedicated server. Its ID is specified in the specs. The host object does not belong to a certain `HetznerCluster`, but can be used by multiple clusters. This is useful, as one host object per server is enough and you can easily see whether a host is used by one of your clusters or not.
There are not many properties that are relevant to the host object. The WWN of the storage device that should be used for provisioning has to be specified in `rootDeviceHints` - but not right from the start. This property can be updated after the host starts the provisioning phase and writes all `hardwareDetails` in the host's status. From there, you can copy the WWN of the storage device that suits your needs and add it to your `HetznerBareMetalHost` object.
-#### Find the WWN
+## Find the WWN
After you have started the provisioning, run the following on your management cluster to find the `hardwareDetails` of all of your bare metal hosts.
@@ -12,7 +14,7 @@ After you have started the provisioning, run the following on your management cl
kubectl describe hetznerbaremetalhost
```
-### Lifecycle of a HetznerBareMetalHost
+## Lifecycle of a HetznerBareMetalHost
A host object is available for consumption right after it has been created. When a `HetznerBareMetalMachine` chooses the host, it updates the host's status. This triggers the provisioning of the host. When the `HetznerBareMetalMachine` gets deleted, then the host deprovisions and returns to the state where it is available for new consumers.
@@ -20,25 +22,25 @@ A host object is available for consumption right after it has been created. When
Host objects cannot be updated and have to be deleted and re-created if some of the properties change.
-#### Maintenance mode
+### Maintenance mode
Maintenance mode means that the host will not be consumed by any `HetznerBareMetalMachine`. If it is already consumed, then the corresponding `HetznerBareMetalMachine` will be deleted and the `HetznerBareMetalHost` deprovisioned.
-### Overview of HetznerBareMetalHost.Spec
+## Overview of HetznerBareMetalHost.Spec
-| Key | Type | Default | Required | Description |
-| ------------------------ | --------- | ------- | -------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| serverID | int | | yes | Server ID of the Hetzner dedicated server, you can find it on your Hetzner robot dashboard |
-| rootDeviceHints | object | | no | It is important to find the correct root device. If none are specified, the host will stop provisioning in between to wait for the details to be specified. HardwareDetails in the host's status can be used to find the correct device. Currently, you can specify one disk or a raid setup |
-| rootDeviceHints.wwn | string | | no | Unique storage identifier for non raid setups |
-| rootDeviceHints.raid | object | | no | Used to provide the controller with information on which disks a raid can be established |
-| rootDeviceHints.raid.wwn | []string | | no | Defines a list of Unique storage identifiers used for raid setups |
-| consumerRef | object | | no | Used by the controller and references the bare metal machine that consumes this host |
-| maintenanceMode | bool | | no | If set to true, the host deprovisions and will not be consumed by any bare metal machine |
-| description | string | | no | Description can be used to store some valuable information about this host |
-| status | object | | no | The controller writes this status. As there are some that cannot be regenerated during any reconcilement, the status is in the specs of the object - not the actual status. DO NOT EDIT!!! |
+| Key | Type | Default | Required | Description |
+| -------------------------- | ---------- | ------- | -------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| `serverID` | `int` | | yes | Server ID of the Hetzner dedicated server, you can find it on your Hetzner robot dashboard |
+| `rootDeviceHints` | `object` | | no | It is important to find the correct root device. If none are specified, the host will pause provisioning and wait for the details to be specified. HardwareDetails in the host's status can be used to find the correct device. Currently, you can specify one disk or a raid setup |
+| `rootDeviceHints.wwn` | `string` | | no | Unique storage identifier for non raid setups |
+| `rootDeviceHints.raid` | `object` | | no | Used to provide the controller with information on which disks a raid can be established |
+| `rootDeviceHints.raid.wwn` | `[]string` | | no | Defines a list of Unique storage identifiers used for raid setups |
+| `consumerRef` | `object` | | no | Used by the controller and references the bare metal machine that consumes this host |
+| `maintenanceMode` | `bool` | | no | If set to true, the host deprovisions and will not be consumed by any bare metal machine |
+| `description` | `string` | | no | Description can be used to store some valuable information about this host |
+| `status` | `object` | | no | The controller writes this status. As there are some that cannot be regenerated during any reconcilement, the status is in the specs of the object - not the actual status. DO NOT EDIT!!! |
-### Example of the HetznerBareMetalHost object
+## Example of the HetznerBareMetalHost object
You should create one of these objects for each of your bare metal servers that you want to use for your deployment.
@@ -54,6 +56,7 @@ spec:
maintenanceMode: false
description: Test Machine 0 #example
```
+
If you want to create an object that will be used in a raid setup, the following can serve as an example.
```yaml
diff --git a/docs/caph/03-reference/06-hetzner-bare-metal-machine-template.md b/docs/caph/03-reference/06-hetzner-bare-metal-machine-template.md
new file mode 100644
index 000000000..dff501214
--- /dev/null
+++ b/docs/caph/03-reference/06-hetzner-bare-metal-machine-template.md
@@ -0,0 +1,140 @@
+---
+title: HetznerBareMetalMachineTemplate
+---
+
+In `HetznerBareMetalMachineTemplate` you can define all important properties for the `HetznerBareMetalMachines`. `HetznerBareMetalMachines` are reconciled by the `HetznerBareMetalMachineController`, which DOES NOT create or delete Hetzner dedicated machines. Instead, it uses the inventory of `HetznerBareMetalHosts`. These hosts correspond to already existing bare metal servers, which get provisioned when selected by a `HetznerBareMetalMachine`.
+
+## Lifecycle of a HetznerBareMetalMachine
+
+### Creating a HetznerBareMetalMachine
+
+Simply put, the specs of a `HetznerBareMetalMachine` consist of two parts. First, there is information about how the bare metal server is supposed to be provisioned. Second, there are properties where you can specify which host to select. If these selectors correspond to a host that is not consumed yet, then the `HetznerBareMetalMachine` transfers important information to the host object. This information is used to provision the host according to what you specified in the specs of `HetznerBareMetalMachineTemplate`. If a host has provisioned successfully, then the `HetznerBareMetalMachine` is considered to be ready.
+
+### Deleting a HetznerBareMetalMachine
+
+When the `HetznerBareMetalMachine` object gets deleted, it removes the information from the host that the latter used for provisioning. The host then triggers the deprovisioning. As soon as this has been completed, the `HetznerBareMetalMachineController` removes the owner and consumer reference of the host and deletes the finalizer of the machine, so that it can be finally deleted.
+
+### Updating a HetznerBareMetalMachine
+
+Updating a `HetznerBareMetalMachineTemplate` is not possible. Instead, a new template should be created.
+
+## cloud-init and installimage
+
+In both [installimage](https://docs.hetzner.com/robot/dedicated-server/operating-systems/installimage/) and cloud-init, the port used for SSH can be changed, e.g. with the following code snippet:
+
+```shell
+sed -i -e '/^\(#\|\)Port/s/^.*$/Port 2223/' /etc/ssh/sshd_config
+```
+
+As the controller needs to know this to be able to successfully provision the server, these ports can be specified in `SSHSpec` of `HetznerBareMetalMachineTemplate`.
+
+When the port is changed in cloud-init, we additionally need to run `systemctl restart sshd` to make sure that the port change takes immediate effect.
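+
+For illustration, a sketch of the corresponding `sshSpec` (the secret and key names follow the `robot-ssh` secret from the quickstart; the port value is illustrative):
+
+```yaml
+sshSpec:
+  secretRef:
+    name: robot-ssh
+    key:
+      name: sshkey-name
+      publicKey: ssh-publickey
+      privateKey: ssh-privatekey
+  # Ports the controller uses to reach the server after each provisioning phase.
+  portAfterInstallImage: 2223
+  portAfterCloudInit: 2223
+```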
+
+## Choosing the right host
+
+Via `matchLabels` you can specify a certain label (key and value) that identifies the host. You get more flexibility with `matchExpressions`, which allows decisions like "take any host that has the key `mykey` with one of the values `val1`, `val2`, or `val3`".
+
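+For illustration, a sketch of a host selector combining both mechanisms (keys and values are illustrative):
+
+```yaml
+hostSelector:
+  matchLabels:
+    type: gpu-node
+  matchExpressions:
+    - key: mykey
+      operator: In
+      values: ["val1", "val2", "val3"]
+```
+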
+## Overview of HetznerBareMetalMachineTemplate.Spec
+
+| Key | Type | Default | Required | Description |
+| ---------------------------------------------------------------- | --------------------- | ------------------------- | -------- | -------------------------------------------------------------------------------------------------------------------------------------------------- |
+| `template.spec.providerID` | `string` | | no | Provider ID set by controller |
+| `template.spec.installImage` | `object` | | yes | Configuration used in autosetup |
+| `template.spec.installImage.image` | `object` | | yes | Defines image for bm machine. See below for details. |
+| `template.spec.installImage.image.url` | `string` | | no | Remote URL of image. Can be tar, tar.gz, tar.bz, tar.bz2, tar.xz, tgz, tbz, txz |
+| `template.spec.installImage.image.name` | `string` | | no | Name of the image |
+| `template.spec.installImage.image.path` | `string` | | no | Local path of a pre-installed image |
+| `template.spec.installImage.postInstallScript` | `string` | | no | PostInstallScript that is used for commands that will be executed after installing image |
+| `template.spec.installImage.swraid` | `int` | `0` | no | Enables or disables raid. Set 1 to enable |
+| `template.spec.installImage.swraidLevel` | `int` | `1` | no | Defines the software raid levels. Only relevant if raid is enabled. Pick one of 0,1,5,6,10 |
+| `template.spec.installImage.partitions` | `[]object` | | yes | Partitions that should be created in installimage |
+| `template.spec.installImage.partitions.mount` | `string` | | yes | Mount defines the mount path of the filesystem |
+| `template.spec.installImage.partitions.fileSystem` | `string` | | yes | Filesystem that should be used. Can be ext2, ext3, ext4, btrfs, reiserfs, xfs, swap, or the name of the LVM volume group, if the partition is a VG |
+| `template.spec.installImage.partitions.size` | `string` | | yes | Size of the partition. Use 'all' to use all remaining space of the drive. M/G/T can be used as unit specifications for MiB, GiB, TiB |
+| `template.spec.installImage.logicalVolumeDefinitions` | `[]object` | | no | Defines the logical volume definitions that should be created |
+| `template.spec.installImage.logicalVolumeDefinitions.vg` | `string` | | yes | Defines the vg name |
+| `template.spec.installImage.logicalVolumeDefinitions.name` | `string` | | yes | Defines the volume name |
+| `template.spec.installImage.logicalVolumeDefinitions.mount` | `string` | | yes | Defines the mount path |
+| `template.spec.installImage.logicalVolumeDefinitions.fileSystem` | `string` | | yes | Defines the file system |
+| `template.spec.installImage.logicalVolumeDefinitions.size` | `string` | | yes | Defines size with unit M/G/T or MiB/GiB/TiB |
+| `template.spec.installImage.btrfsDefinitions` | `[]object` | | no | Defines the btrfs sub-volume definitions that should be created |
+| `template.spec.installImage.btrfsDefinitions.volume` | `string` | | yes | Defines the btrfs volume name |
+| `template.spec.installImage.btrfsDefinitions.subvolume` | `string` | | yes | Defines the btrfs sub-volume name |
+| `template.spec.installImage.btrfsDefinitions.mount` | `string` | | yes | Defines the btrfs mount path |
+| `template.spec.hostSelector` | `object` | | no | Options to select hosts with |
+| `template.spec.hostSelector.matchLabels` | `map[string]string` | | no | Specify labels as key-value pairs that should be there in host object to select it |
+| `template.spec.hostSelector.matchExpressions` | `[]object` | | no | Requirements using Kubernetes MatchExpressions |
+| `template.spec.hostSelector.matchExpressions.key` | `string` | | yes | Key of label that should be matched in host object |
+| `template.spec.hostSelector.matchExpressions.operator` | `string` | | yes | [Selection operator](https://pkg.go.dev/k8s.io/apimachinery@v0.23.4/pkg/selection?utm_source=gopls#Operator) |
+| `template.spec.hostSelector.matchExpressions.values` | `[]string` | | yes | Values whose relation to the label value in the host machine is defined by the selection operator |
+| `template.spec.sshSpec` | `object` | | yes | SSH specs |
+| `template.spec.sshSpec.secretRef` | `object` | | yes | Reference to the secret where SSH key is stored |
+| `template.spec.sshSpec.secretRef.name` | `string` | | yes | Name of the secret |
+| `template.spec.sshSpec.secretRef.key` | `object` | | yes | Details about the keys used in the data of the secret |
+| `template.spec.sshSpec.secretRef.key.name` | `string` | | yes | Name is the key in the secret's data where the SSH key's name is stored |
+| `template.spec.sshSpec.secretRef.key.publicKey` | `string` | | yes | PublicKey is the key in the secret's data where the SSH key's public key is stored |
+| `template.spec.sshSpec.secretRef.key.privateKey` | `string` | | yes | PrivateKey is the key in the secret's data where the SSH key's private key is stored |
+| `template.spec.sshSpec.portAfterInstallImage` | `int` | `22` | no | PortAfterInstallImage specifies the port that can be used to reach the server via SSH after install image completed successfully |
+| `template.spec.sshSpec.portAfterCloudInit` | `int` | `22` (install image port) | no | PortAfterCloudInit specifies the port that can be used to reach the server via SSH after cloud init completed successfully |
+
+## installImage.image
+
+You must specify either name and url, or a local path.
+
+Example of an image provided by Hetzner via NFS:
+
+```yaml
+image:
+ path: /root/.oldroot/nfs//images/Ubuntu-2204-jammy-amd64-base.tar.gz
+```
+
+Example of an image provided by you via HTTPS. Hetzner's installimage script parses the name to detect the version, so it is recommended to follow their naming pattern.
+
+```yaml
+image:
+ name: Ubuntu-2204-jammy-amd64-custom
+ url: https://user:pwd@example.com/images/Ubuntu-2204-jammy-amd64-custom.tar.gz
+```
+
+Example of pulling an image from an oci-registry:
+
+```yaml
+image:
+ name: Ubuntu-2204-jammy-amd64-custom
+ url: oci://ghcr.io/myorg/images/Ubuntu-2204-jammy-amd64-custom:1.0.0-beta.2
+```
+
+If you need credentials to pull the image, then provide the environment variable `OCI_REGISTRY_AUTH_TOKEN` to the controller.
+
+You can provide the variable via a secret of the deployment `caph-controller-manager`:
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ # ...
+spec:
+ # ...
+ template:
+ spec:
+ containers:
+ - command:
+ - /manager
+ image: ghcr.io/syself/caph:vXXX
+ env:
+ - name: OCI_REGISTRY_AUTH_TOKEN
+ valueFrom:
+ secretKeyRef:
+ name: my-oci-registry-secret # The name of the secret
+ key: OCI_REGISTRY_AUTH_TOKEN # The key in the secret. Format: "user:pwd" or just "token"
+ # ... other container specs
+```
+
+You can push an image to an oci-registry with a tool like [oras](https://oras.land):
+
+```shell
+oras push ghcr.io/myorg/images/Ubuntu-2204-jammy-amd64-custom:1.0.0-beta.2 \
+ --artifact-type application/vnd.myorg.machine-image.v1 Ubuntu-2204-jammy-amd64-custom.tar.gz
+```
diff --git a/docs/caph/03-reference/07-hetzner-bare-metal-remediation-template.md b/docs/caph/03-reference/07-hetzner-bare-metal-remediation-template.md
new file mode 100644
index 000000000..6c2e98593
--- /dev/null
+++ b/docs/caph/03-reference/07-hetzner-bare-metal-remediation-template.md
@@ -0,0 +1,14 @@
+---
+title: HetznerBareMetalRemediationTemplate
+---
+
+In `HetznerBareMetalRemediationTemplate` you can define all important properties for `HetznerBareMetalRemediations`. With this remediation, you can define a custom method for how Machine Health Checks treat unhealthy objects - `HetznerBareMetalMachines` in this case. For more information about how to use remediations, see [Advanced CAPH](/docs/caph/02-topics/04-advanced-caph). `HetznerBareMetalRemediations` are reconciled by the `HetznerBareMetalRemediationController`, which reconciles the remediations and triggers the requested type of remediation on the relevant `HetznerBareMetalMachine`.
+
+## Overview of HetznerBareMetalRemediationTemplate.Spec
+
+| Key | Type | Default | Required | Description |
+| ----------------------------------- | -------- | --------- | -------- | --------------------------------------------------------------------------- |
+| `template.spec.strategy` | `object` | | yes | Remediation strategy to be applied |
+| `template.spec.strategy.type` | `string` | `Reboot` | no | Type of the remediation strategy. At the moment, only "Reboot" is supported |
+| `template.spec.strategy.retryLimit` | `int` | `0` | no | Set maximum of remediation retries. Zero retries if not set. |
+| `template.spec.strategy.timeout` | `string` | | yes | Timeout of one remediation try. Should be of the form "10m", or "40s" |
diff --git a/docs/developers/development.md b/docs/caph/04-developers/01-development-guide.md
similarity index 80%
rename from docs/developers/development.md
rename to docs/caph/04-developers/01-development-guide.md
index a80daa58d..3b15b5619 100644
--- a/docs/developers/development.md
+++ b/docs/caph/04-developers/01-development-guide.md
@@ -1,19 +1,23 @@
-# Developing Cluster API Provider Hetzner
+---
+title: Developing Cluster API Provider Hetzner
+---
-Developing our provider is quite easy. Please follow the steps mentioned below:
-1. You need to install some base requirements.
-2. You need to follow the [preparation document](/docs/topics/preparation.md) to set up everything related to Hetzner.
+Developing our provider is quite easy. Please follow the steps mentioned below:
+
+1. You need to install some base requirements.
+2. You need to follow the [preparation document](/docs/caph/01-getting-started/02-preparation) to set up everything related to Hetzner.
## Install Base requirements
To develop with Tilt, there are a few requirements. You can use the command `make all-tools` to check whether the versions of the tools are up to date and to install the ones that are missing.
This ensures the following:
+
- clusterctl
- ctlptl (required)
- go (required)
- helm (required)
-- helmfile
+- helmfile
- kind (required)
- kubectl (required)
- packer
@@ -22,32 +26,35 @@ This ensures the following:
## Preparing Hetzner project
-For more information, please see [here](/docs/topics/preparation.md).
+For more information, please see [here](/docs/caph/01-getting-started/02-preparation).
## Setting Tilt up
You need to create a `.envrc` file and specify the values you need. After the `.envrc` is loaded, invoke `direnv allow` to load the environment variables in your current shell session.
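+
+For illustration, a minimal sketch of a `.envrc` (the variable names follow those used elsewhere in this guide; values are placeholders, and the full list is in the Tilt reference below):
+
+```shell
+export HCLOUD_TOKEN="<hcloud-project-api-token>"
+export HETZNER_ROBOT_USER="<robot-user>"
+export HETZNER_ROBOT_PASSWORD="<robot-password>"
+export HETZNER_SSH_PUB_PATH="<path-to-ssh-public-key>"
+export HETZNER_SSH_PRIV_PATH="<path-to-ssh-private-key>"
+```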
-The complete reference can be found [here](/docs/developers/tilt.md).
+The complete reference can be found [here](/docs/caph/04-developers/02-tilt).
+
## Developing with Tilt
-
-
-
+![tilt](https://syself.com/images/tilt.png)
Provider Integration development requires a lot of iteration, and the “build, tag, push, update deployment” workflow can be very tedious. Tilt makes this process much simpler by watching for updates and automatically building and deploying them. To build a kind cluster and to start Tilt, run:
```shell
make tilt-up
```
-> To access the Tilt UI, please go to: `http://localhost:10351`
+{% callout %}
+
+To access the Tilt UI, please go to: `http://localhost:10351`
+
+{% /callout %}
Once your kind management cluster is up and running, you can deploy a workload cluster. This could be done through the Tilt UI by pressing one of the buttons in the top right corner, e.g., **"Create Workload Cluster - without Packer"**. This triggers the `make create-workload-cluster` command, which uses the environment variables (we defined in the .envrc) and the cluster-template. Additionally, it installs cilium as CNI.
If you update the API in some way, you need to run `make generate` to generate everything related to kubebuilder and the CRDs.
-To tear down the workload cluster, press the **"Delete Workload Cluster"** button. After a few minutes, the resources should be deleted.
+To tear down the workload cluster, press the **"Delete Workload Cluster"** button. After a few minutes, the resources should be deleted.
To tear down the kind cluster, use:
@@ -57,23 +64,25 @@ $ make delete-mgt-cluster
To delete the registry, use `make delete-registry`. Use `make delete-mgt-cluster-registry` to delete both management cluster and associated registry.
-If you have any trouble finding the right command, you can run the `make help` command to get a list of all available make targets.
+If you have any trouble finding the right command, you can run the `make help` command to get a list of all available make targets.
## Submitting PRs and testing
-Pull requests and issues are highly encouraged! For more information, please have a look at the [Contribution Guidelines](../../CONTRIBUTING.md)
+Pull requests and issues are highly encouraged! For more information, please have a look at the [Contribution Guidelines](https://github.com/syself/cluster-api-provider-hetzner/blob/main/CONTRIBUTING.md)
There are two important commands that you should make use of before creating the PR.
-With `make verify`, you can run all linting checks and others. Make sure that all of these checks pass - otherwise, the PR cannot be merged. Note that you need to commit all changes for the last checks to pass.
+With `make verify`, you can run all linting checks and others. Make sure that all of these checks pass - otherwise, the PR cannot be merged. Note that you need to commit all changes for the last checks to pass.
With `make test`, all unit tests are triggered. If they fail out of nowhere, then please re-run them. They are not 100% stable and sometimes there are tests failing due to something related to Kubernetes' `envtest`.
With `make generate`, new CRDs are generated. This is necessary if you change the API.
-### Running local e2e test
+
+## Running local e2e test
If you are interested in running the E2E tests locally, then you can use the following commands:
-```
+
+```shell
export HCLOUD_TOKEN=
export CAPH_LATEST_VERSION=
export HETZNER_ROBOT_USER=
@@ -84,11 +93,13 @@ make test-e2e
```
For the SSH public and private keys, you should use the following command to encode the keys. Note that the E2E test will not work if the ssh key is in any other format!
-```
+
+```shell
export HETZNER_SSH_PRIV=$(cat ~/.ssh/cluster | base64 -w0)
```
-### Creating new user in Robot
+## Creating new user in Robot
+
To create a new user in Robot, click the `Create User` button in the Hetzner Robot console. Once you create the new user, Hetzner Robot will send you a user ID via email. The password is the one you set while creating the user.
-![robot user](../pics/robot_user.png)
+![robot user](https://syself.com/images/robot-user.png)
diff --git a/docs/caph/04-developers/02-tilt.md b/docs/caph/04-developers/02-tilt.md
new file mode 100644
index 000000000..6a4de387e
--- /dev/null
+++ b/docs/caph/04-developers/02-tilt.md
@@ -0,0 +1,41 @@
+---
+title: Reference of Tilt
+---
+
+```
+"allowed_contexts": [
+ "kind-caph",
+],
+"deploy_cert_manager": True,
+"deploy_observability": False,
+"preload_images_for_kind": True,
+"kind_cluster_name": "caph",
+"capi_version": "v1.7.3",
+"cabpt_version": "v0.5.5",
+"cacppt_version": "v0.4.10",
+"cert_manager_version": "v1.11.0",
+"kustomize_substitutions": {
+ "HCLOUD_REGION": "fsn1",
+ "CONTROL_PLANE_MACHINE_COUNT": "3",
+ "WORKER_MACHINE_COUNT": "3",
+ "KUBERNETES_VERSION": "v1.29.4",
+ "HCLOUD_IMAGE_NAME": "test-image",
+ "HCLOUD_CONTROL_PLANE_MACHINE_TYPE": "cpx31",
+ "HCLOUD_WORKER_MACHINE_TYPE": "cpx31",
+ "CLUSTER_NAME": "test",
+ "HETZNER_SSH_PUB_PATH": "~/.ssh/test",
+ "HETZNER_SSH_PRIV_PATH": "~/.ssh/test",
+ "HETZNER_ROBOT_USER": "test",
+ "HETZNER_ROBOT_PASSWORD": "pw"
+}
+```
+
+| Key | Type | Default | Required | Description |
+| ---------------------------------------------------------------------------------------------- | ---------- | --------------- | -------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| `allowed_contexts` | `[]string` | `["kind-caph"]` | no | A list of kubeconfig contexts Tilt is allowed to use. See the Tilt documentation on [allow_k8s_contexts](https://docs.tilt.dev/api.html#api.allow_k8s_contexts) for more details |
+| `deploy_cert_manager` | `bool` | `true` | no | If true, deploys cert-manager into the cluster for use for webhook registration |
+| `deploy_observability` | `bool` | `false` | no | If true, installs grafana, loki and promtail in the dev cluster. Grafana UI will be accessible via a link in the tilt console. Important! This feature requires the `helm` command to be available in the user's path |
+| `preload_images_for_kind` | `bool` | `true` | no | If set to true, uses `kind load docker-image` to preload images into a kind cluster |
+| `kind_cluster_name` | `string` | `"caph"` | no | The name of the kind cluster to use when preloading images |
+| `capi_version` | `string` | `"v1.7.3"` | no | Version of CAPI |
+| `cert_manager_version` | `string` | `"v1.11.0"` | no | Version of cert manager |
diff --git a/docs/developers/releasing.md b/docs/caph/04-developers/03-releasing.md
similarity index 83%
rename from docs/developers/releasing.md
rename to docs/caph/04-developers/03-releasing.md
index a7bed8c73..eecc0ac6b 100644
--- a/docs/developers/releasing.md
+++ b/docs/caph/04-developers/03-releasing.md
@@ -1,4 +1,6 @@
-# Release Process
+---
+title: Release Process
+---
## Create a tag
@@ -9,7 +11,11 @@
- `export RELEASE_TAG=` (eg. `export RELEASE_TAG=v1.0.1`)
- `git tag -a ${RELEASE_TAG} -m ${RELEASE_TAG}`
2. Push the tag to the GitHub repository.
- > NOTE: `origin` should be the name of the remote pointing to `github.com/syself/cluster-api-provider-hetzner`
+ {% callout %}
+
+ `origin` should be the name of the remote pointing to `github.com/syself/cluster-api-provider-hetzner`
+
+ {% /callout %}
- `git push origin ${RELEASE_TAG}`
- This will automatically trigger a [Github Action](https://github.com/syself/cluster-api-provider-hetzner/actions) to create a draft release (this will take roughly 6 minutes).
@@ -28,14 +34,13 @@ Done 🥳
This is only needed if you want to manually release images.
1. Login to ghcr
-2. Do:
- - `make release-image`
+2. Execute `make release-image`
-### Versioning
+## Versioning
-See the [versioning documentation](./../../CONTRIBUTING.md#versioning) for more information.
+See the [versioning documentation](https://github.com/syself/cluster-api-provider-hetzner/blob/main/CONTRIBUTING.md#versioning) for more information.
-### Permissions
+## Permissions
Releasing requires a particular set of permissions.
diff --git a/docs/caph/04-developers/04-updating-kubernetes-version.md b/docs/caph/04-developers/04-updating-kubernetes-version.md
new file mode 100644
index 000000000..19c6c3e2d
--- /dev/null
+++ b/docs/caph/04-developers/04-updating-kubernetes-version.md
@@ -0,0 +1,10 @@
+---
+title: Updating Kubernetes Version
+---
+
+Please check the Kubernetes version in the following files:
+
+- `node-image` dependencies (cri-o, kubelet, kubeadm etc.).
+- `cluster-template` (same as node-image but for cloud-init).
+- quickstart Guide.
+- [Supported Versions](/docs/caph/01-getting-started/01-introduction).
diff --git a/docs/developers/tilt.md b/docs/developers/tilt.md
deleted file mode 100644
index fdc07a994..000000000
--- a/docs/developers/tilt.md
+++ /dev/null
@@ -1,37 +0,0 @@
-# Reference of Tilt
-
- "allowed_contexts": [
- "kind-caph",
- ],
- "deploy_cert_manager": True,
- "deploy_observability": False,
- "preload_images_for_kind": True,
- "kind_cluster_name": "caph",
- "capi_version": "v1.7.3",
- "cabpt_version": "v0.5.5",
- "cacppt_version": "v0.4.10",
- "cert_manager_version": "v1.11.0",
- "kustomize_substitutions": {
- "HCLOUD_REGION": "fsn1",
- "CONTROL_PLANE_MACHINE_COUNT": "3",
- "WORKER_MACHINE_COUNT": "3",
- "KUBERNETES_VERSION": "v1.29.4",
- "HCLOUD_IMAGE_NAME": "test-image",
- "HCLOUD_CONTROL_PLANE_MACHINE_TYPE": "cpx31",
- "HCLOUD_WORKER_MACHINE_TYPE": "cpx31",
- "CLUSTER_NAME": "test",
- "HETZNER_SSH_PUB_PATH": "~/.ssh/test",
- "HETZNER_SSH_PRIV_PATH": "~/.ssh/test",
- "HETZNER_ROBOT_USER": "test",
- "HETZNER_ROBOT_PASSWORD": "pw"
- },
-| Key | Type | Default | Required | Description |
-|-----|-----|------|---------|-------------|
-| allowed_contexts | []string | ["kind-caph"] | no | A list of kubeconfig contexts Tilt is allowed to use. See the Tilt documentation on
-[allow_k8s_contexts](https://docs.tilt.dev/api.html#api.allow_k8s_contexts) for more details |
-| deploy_cert_manager | bool | true | no | If true, deploys cert-manager into the cluster for use for webhook registration |
-| deploy_observability | bool | false | no | If true, installs grafana, loki and promtail in the dev cluster. Grafana UI will be accessible via a link in the tilt console. Important! This feature requires the `helm` command to be available in the user's path |
-| preload_images_for_kind | bool | true | no | If set to true, uses `kind load docker-image` to preload images into a kind cluster |
-| kind_cluster_name | []object | "caph" | no | The name of the kind cluster to use when preloading images |
-| capi_version | string | "v1.7.3" | no | Version of CAPI |
-| cert_manager_version | string | "v1.11.0" | no | Version of cert manager |
diff --git a/docs/developers/updating-kubernetes-version.md b/docs/developers/updating-kubernetes-version.md
deleted file mode 100644
index 8aaedb57b..000000000
--- a/docs/developers/updating-kubernetes-version.md
+++ /dev/null
@@ -1,7 +0,0 @@
-# Updating Kubernetes Version
-
-Please check the kubernetes version in the following files:
-- node-image dependencies (cri-o, kubelet, kubeadm etc.).
-- cluster-template (same as node-image but for cloud-init).
-- quickstart Guide.
-- README; Supported Versions.
diff --git a/docs/pics/hetzner.png b/docs/pics/hetzner.png
deleted file mode 100644
index 50594b300..000000000
Binary files a/docs/pics/hetzner.png and /dev/null differ
diff --git a/docs/pics/robot_user.png b/docs/pics/robot_user.png
deleted file mode 100644
index e20a8a960..000000000
Binary files a/docs/pics/robot_user.png and /dev/null differ
diff --git a/docs/pics/tilt.png b/docs/pics/tilt.png
deleted file mode 100644
index 82b35d4ef..000000000
Binary files a/docs/pics/tilt.png and /dev/null differ
diff --git a/docs/reference/README.md b/docs/reference/README.md
deleted file mode 100644
index b5f53adef..000000000
--- a/docs/reference/README.md
+++ /dev/null
@@ -1,7 +0,0 @@
-## Object Reference
-
-In this object reference, we introduce all objects that are specific for this provider integration. The naming of objects, servers, machines, etc. can be confusing. Without claiming to be consistent throughout these docs, we would like to give an overview of how we name things here.
-
-First, there are some important counterparts of our objects and CAPI objects. `HetznerCluster` has CAPI's `Cluster` object. CAPI's `Machine` object is the counterpart of both `HCloudMachine` and `HetznerBareMetalMachine`. These two are objects of the provider integration that are reconciled by the `HCloudMachineController` and the `HetznerBareMetalMachineController` respectively. The `HCloudMachineController` checks whether there is a server in the HCloud API already and if not, buys/creates one that corresponds to a `HCloudMachine` object. The `HetznerBareMetalMachineController` does not buy new bare metal machines, but instead consumes a host of the inventory of `HetznerBareMetalHosts`, which have a one-to-one relationship to Hetzner dedicated/root/bare metal servers that have been bought manually by the user.
-
-Therefore, there is an important difference between the `HCloudMachine` object and a server in the HCloud API. For bare metal, we have even three terms: the `HetznerBareMetalMachine` object, the `HetznerBareMetalHost` object, and the actual bare metal server that can be accessed through Hetzner's robot API.
diff --git a/docs/reference/hcloud-machine-template.md b/docs/reference/hcloud-machine-template.md
deleted file mode 100644
index aeb6173ee..000000000
--- a/docs/reference/hcloud-machine-template.md
+++ /dev/null
@@ -1,18 +0,0 @@
-## HCloudMachineTemplate
-
-In ```HCloudMachineTemplate``` you can define all important properties for ```HCloudMachines```. ```HCloudMachines``` are reconciled by the ```HCloudMachineController```, which creates and deletes servers in Hetzner Cloud.
-
-### Overview of HCloudMachineTemplate.Spec
-| Key | Type | Default | Required | Description |
-|-----|-----|------|---------|-------------|
-| template.spec.providerID | string | | no | ProviderID set by controller |
-| template.spec.type | string | | yes | Desired server type of server in Hetzner's Cloud API. Example: cpx11 |
-| template.spec.imageName | string | | yes | Specifies desired image of server. ImageName can reference an image uploaded to Hetzner API in two ways: either directly as name of an image, or as label of an image (see [here](https://github.com/syself/cluster-api-provider-hetzner/blob/main/docs/topics/node-image.md) for more details) |
-| template.spec.sshKeys | object | | no | SSHKeys that are scoped to this machine |
-| template.spec.sshKeys.hcloud | []object | | no | SSH keys for HCloud |
-| template.spec.sshKeys.hcloud.name | string | | yes | Name of SSH key |
-| template.spec.sshKeys.hcloud.fingerprint | string | | no| Fingerprint of SSH key - used by the controller |
-| template.spec.placementGroupName | string | | no | Placement group of the machine in HCloud API, must be referencing an existing placement group |
-| template.spec.publicNetwork | object | {enableIPv4: true, enabledIPv6: true} | no | Specs about primary IP address of server. If both IPv4 and IPv6 are disabled, then the private network has to be enabled |
-| template.spec.publicNetwork.enableIPv4 | bool | true | no | Defines whether server has IPv4 address enabled. As Hetzner load balancers require an IPv4 address, this setting will be ignored and set to true if there is no private net. |
-| template.spec.publicNetwork.enableIPv6 | bool | true | no | Defines whether server has IPv6 address enabled |
diff --git a/docs/reference/hetzner-bare-metal-machine-template.md b/docs/reference/hetzner-bare-metal-machine-template.md
deleted file mode 100644
index d50cec522..000000000
--- a/docs/reference/hetzner-bare-metal-machine-template.md
+++ /dev/null
@@ -1,142 +0,0 @@
-## HetznerBareMetalMachineTemplate
-
-In `HetznerBareMetalMachineTemplate` you can define all important properties for the `HetznerBareMetalMachines`. `HetznerBareMetalMachines` are reconciled by the `HetznerBareMetalMachineController`, which DOES NOT create or delete Hetzner dedicated machines. Instead, it uses the inventory of `HetznerBareMetalHosts`. These hosts correspond to already existing bare metal servers, which get provisioned when selected by a `HetznerBareMetalMachine`.
-
-### Lifecycle of a HetznerBareMetalMachine
-
-#### Creating a HetznerBareMetalMachine
-
-Simply put, the specs of a `HetznerBareMetalMachine` consist of two parts. First, there is information about how the bare metal server is supposed to be provisioned. Second, there are properties where you can specify which host to select. If these selectors correspond to a host that is not consumed yet, then the `HetznerBareMetalMachine` transfers important information to the host object. This information is used to provision the host according to what you specified in the specs of `HetznerBareMetalMachineTemplate`. If a host has provisioned successfully, then the `HetznerBareMetalMachine` is considered to be ready.
-
-#### Deleting of a HetznerBareMetalMachine
-
-When the `HetznerBareMetalMachine` object gets deleted, it removes the information from the host that the latter used for provisioning. The host then triggers the deprovisioning. As soon as this has been completed, the `HetznerBareMetalMachineController` removes the owner and consumer reference of the host and deletes the finalizer of the machine, so that it can be finally deleted.
-
-#### Updating a HetznerBareMetalMachine
-
-Updating a `HetznerBareMetalMachineTemplate` is not possible. Instead, a new template should be created.
-
-## cloud-init and installimage
-
-Both in [installimage](https://docs.hetzner.com/robot/dedicated-server/operating-systems/installimage/) and cloud-init the ports used for SSH can be changed, e.g. with the following code snippet:
-
-```
-sed -i -e '/^\(#\|\)Port/s/^.*$/Port 2223/' /etc/ssh/sshd_config
-```
-
-As the controller needs to know this to be able to successfully provision the server, these ports can be specified in `SSHSpec` of `HetznerBareMetalMachineTemplate`.
-
-When the port is changed in cloud-init, then we additionally need to use the following command to make sure that the change of ports takes immediate effect:
-`systemctl restart sshd`
-
-## Choosing the right host
-
-Via MatchLabels you can specify a certain label (key and value) that identifies the host. You get more flexibility with MatchExpressions. This allows decisions like "take any host that has the key "mykey" and let this key have either one of the values "val1", "val2", and "val3".
-
-### Overview of HetznerBareMetalMachineTemplate.Spec
-
-| Key | Type | Default | Required | Description |
-| -------------------------------------------------------------- | ------------------- | ----------------------- | -------- | -------------------------------------------------------------------------------------------------------------------------------------------------- |
-| template.spec.providerID | string | | no | Provider ID set by controller |
-| template.spec.installImage | object | | yes | Configuration used in autosetup |
-| template.spec.installImage.image | object | | yes | Defines image for bm machine. See below for details. |
-| template.spec.installImage.image.url | string | | no | Remote URL of image. Can be tar, tar.gz, tar.bz, tar.bz2, tar.xz, tgz, tbz, txz |
-| template.spec.installImage.image.name | string | | no | Name of the image |
-| template.spec.installImage.image.path | string | | no | Local path of a pre-installed image |
-| template.spec.installImage.postInstallScript | string | | no | PostInstallScript that is used for commands that will be executed after installing image |
-| template.spec.installImage.swraid | int | 0 | no | Enables or disables raid. Set 1 to enable |
-| template.spec.installImage.swraidLevel | int | 1 | no | Defines the software raid levels. Only relevant if raid is enabled. Pick one of 0,1,5,6,10 |
-| template.spec.installImage.partitions | []object | | yes | Partitions that should be created in installimage |
-| template.spec.installImage.partitions.mount | string | | yes | Mount defines the mount path of the filesystem |
-| template.spec.installImage.partitions.fileSystem | string | | yes | Filesystem that should be used. Can be ext2, ext3, ext4, btrfs, reiserfs, xfs, swap, or the name of the LVM volume group, if the partition is a VG |
-| template.spec.installImage.partitions.size | string | | yes | Size of the partition. Use 'all' to use all remaining space of the drive. M/G/T can be used as unit specifications for MiB, GiB, TiB |
-| template.spec.installImage.logicalVolumeDefinitions | []object | | no | Defines the logical volume definitions that should be created |
-| template.spec.installImage.logicalVolumeDefinitions.vg | string | | yes | Defines the vg name |
-| template.spec.installImage.logicalVolumeDefinitions.name | string | | yes | Defines the volume name |
-| template.spec.installImage.logicalVolumeDefinitions.mount | string | | yes | Defines the mount path |
-| template.spec.installImage.logicalVolumeDefinitions.fileSystem | string | | yes | Defines the file system |
-| template.spec.installImage.logicalVolumeDefinitions.size | string | | yes | Defines size with unit M/G/T or MiB/GiB/TiB |
-| template.spec.installImage.btrfsDefinitions | []object | | no | Defines the btrfs sub-volume definitions that should be created |
-| template.spec.installImage.btrfsDefinitions.volume | string | | yes | Defines the btrfs volume name |
-| template.spec.installImage.btrfsDefinitions.subvolume | string | | yes | Defines the btrfs sub-volume name |
-| template.spec.installImage.btrfsDefinitions.mount | string | | yes | Defines the btrfs mount path |
-| template.spec.hostSelector | object | | no | Options to select hosts with |
-| template.spec.hostSelector.matchLabels | map[string][string] | | no | Specify labels as key-value pairs that should be there in host object to select it |
-| template.spec.hostSelector.matchExpressions | []object | | no | Requirements using Kubernetes MatchExpressions |
-| template.spec.hostSelector.matchExpressions.key | string | | yes | Key of label that should be matched in host object |
-| template.spec.hostSelector.matchExpressions.operator | string | | yes | [Selection operator](https://pkg.go.dev/k8s.io/apimachinery@v0.23.4/pkg/selection?utm_source=gopls#Operator) |
-| template.spec.hostSelector.matchExpressions.values | []string | | yes | Values whose relation to the label value in the host machine is defined by the selection operator |
-| template.spec.sshSpec | object | | yes | SSH specs |
-| template.spec.sshSpec.secretRef | object | | yes | Reference to the secret where SSH key is stored |
-| template.spec.sshSpec.secretRef.name | string | | yes | Name of the secret |
-| template.spec.sshSpec.secretRef.key | object | | yes | Details about the keys used in the data of the secret |
-| template.spec.sshSpec.secretRef.key.name | string | | yes | Name is the key in the secret's data where the SSH key's name is stored |
-| template.spec.sshSpec.secretRef.key.publicKey | string | | yes | PublicKey is the key in the secret's data where the SSH key's public key is stored |
-| template.spec.sshSpec.secretRef.key.privateKey | string | | yes | PrivateKey is the key in the secret's data where the SSH key's private key is stored |
-| template.spec.sshSpec.portAfterInstallImage | int | 22 | no | PortAfterInstallImage specifies the port that can be used to reach the server via SSH after install image completed successfully |
-| template.spec.sshSpec.portAfterCloudInit | int | 22 (install image port) | no | PortAfterCloudInit specifies the port that can be used to reach the server via SSH after cloud init completed successfully |
-
-### installImage.image
-
-You must specify either name and url, or a local path.
-
-Example of an image provided by Hetzner via NFS:
-
-```
-image:
- path: /root/.oldroot/nfs//images/Ubuntu-2204-jammy-amd64-base.tar.gz
-```
-
-Example of an image provided by you via https. The script installimage of Hetzner parses the name to detect the version. It is
-recommended to follow their naming pattern.
-
-```
-image:
- name: Ubuntu-2204-jammy-amd64-custom
- url: https://user:pwd@example.com/images/Ubuntu-2204-jammy-amd64-custom.tar.gz
-
-```
-
-Example of pulling an image from an oci-registry:
-
-```
-image:
- name: Ubuntu-2204-jammy-amd64-custom
- url: oci://ghcr.io/myorg/images/Ubuntu-2204-jammy-amd64-custom:1.0.0-beta.2
-```
-
-If you need credentials to pull the image, then provide the environment variable `OCI_REGISTRY_AUTH_TOKEN` to the controller.
-
-You can provide the variable via a secret of the deployment `caph-controller-manager`:
-
-```
-apiVersion: apps/v1
-kind: Deployment
-metadata:
- # ...
-spec:
- # ...
- template:
- spec:
- containers:
- - command:
- - /manager
- image: ghcr.io/syself/caph:vXXX
- env:
- - name: OCI_REGISTRY_AUTH_TOKEN
- valueFrom:
- secretKeyRef:
- name: my-oci-registry-secret # The name of the secret
- key: OCI_REGISTRY_AUTH_TOKEN # The key in the secret. Format: "user:pwd" or just "token"
- # ... other container specs
-```
-
-You can push an image to an oci-registry with a tool like [oras](https://oras.land):
-
-```
-oras push ghcr.io/myorg/images/Ubuntu-2204-jammy-amd64-custom:1.0.0-beta.2 \
- --artifact-type application/vnd.myorg.machine-image.v1 Ubuntu-2204-jammy-amd64-custom.tar.gz
-```
-
-
-
diff --git a/docs/reference/hetzner-bare-metal-remediation-template.md b/docs/reference/hetzner-bare-metal-remediation-template.md
deleted file mode 100644
index 2cd706b11..000000000
--- a/docs/reference/hetzner-bare-metal-remediation-template.md
+++ /dev/null
@@ -1,11 +0,0 @@
-## HetznerBareMetalRemediationTemplate
-
-In ```HetznerBareMetalRemediationTemplate``` you can define all important properties for ```HetznerBareMetalRemediations```. With this remediation, you can define a custom method for the manner of how Machine Health Checks treat the unhealthy objects - `HetznerBareMetalMachines` in this case. For more information about how to use remdiations, see [Advanced CAPH](/docs/topics/advanced-caph.md). ```HetznerBareMetalRemediations``` are reconciled by the ```HetznerBareMetalRemediationController```, which reconciles the remediatons and triggers the requested type of remediation on the relevant `HetznerBareMetalMachine`.
-
-### Overview of HetznerBareMetalRemediationTemplate.Spec
-| Key | Type | Default | Required | Description |
-|-----|-----|------|---------|-------------|
-| template.spec.strategy | object | | yes | Remediation strategy to be applied |
-| template.spec.strategy.type | string | Reboot | no | Type of the remediation strategy. At the moment, only "Reboot" is supported |
-| template.spec.strategy.retryLimit | int | 0 | no | Set maximum of remediation retries. Zero retries if not set. |
-| template.spec.strategy.timeout | string | | yes | Timeout of one remediation try. Should be of the form "10m", or "40s" |
diff --git a/docs/reference/hetzner-cluster.md b/docs/reference/hetzner-cluster.md
deleted file mode 100644
index c24917165..000000000
--- a/docs/reference/hetzner-cluster.md
+++ /dev/null
@@ -1,53 +0,0 @@
-## HetznerCluster
-
-In HetznerCluster you can define everything related to the general components of the cluster as well as those properties, which are valid cluster-wide.
-
-There are two different modes for the cluster. A pure HCloud cluster and a cluster that uses Hetzner dedicated (bare metal) servers, either as control planes or as workers. The HCloud cluster works with Kubeadm and supports private networks. In a cluster that includes bare metal servers there are no private networks, as this feature has not yet been integrated in cluster-api-provider-hetzner. Apart from SSH, the node image has to support cloud-init, which we use to provision the bare metal machines. In cluster with bare metal servers, you need to use [this CCM](https://github.com/syself/hetzner-cloud-controller-manager), as the official one does not support bare metal.
-
-[Here](/docs/topics/managing-ssh-keys.md) you can find more information regarding the handling of SSH keys. Some of them are specified in ```HetznerCluster``` to have them cluster-wide, others are machine-scoped.
-
-### Usage without HCloud Load Balancer
-It is also possible not to use the cloud load balancer from Hetzner. This is useful for setups with only one control plane, or if you have your own cloud load balancer. Using `controlPlaneLoadBalancer.enabled=false` prevents the creation of a hcloud load balancer. Then you need to configure `controlPlaneEndpoint.port=6443` & `controlPlaneEndpoint.host`, which should be a domain that has A records configured pointing to the control plane IP for example. If you are using your own load balancer, you need to point towards it and configure the load balancer to target the control planes of the cluster.
-
-## Overview of HetznerCluster.Spec
-| Key | Type | Default | Required | Description |
-|-----|-----|------|---------|-------------|
-| hcloudNetwork | object | | no | Specifies details about Hetzner cloud private networks |
-| hcloudNetwork.enabled | bool | | yes| States whether network should be enabled or disabled |
-| hcloudNetwork.cidrBlock | string | "10.0.0.0/16" | no | Defines the CIDR block |
-| hcloudNetwork.subnetCidrBlock | string | "10.0.0.0/24" | no | Defines the CIDR block of the subnet. Note that one subnet ist required |
-| hcloudNetwork.networkZone | string | "eu-central" | no | Defines the network zone. Must be eu-central, us-east or us-west |
-| controlPlaneRegions | []string | []string{fsn1} | no | This is the base for the failureDomains of the cluster |
-| sshKeys | object | | no | Cluster-wide SSH keys that serve as default for machines as well |
-| sshKeys.hcloud | []object | | no | SSH keys for hcloud |
-| sshKeys.hcloud.name | string | | yes | Name of SSH key |
-| sshKeys.hcloud.fingerprint | string | | no| Fingerprint of SSH key - used by the controller |
-| sshKeys.robotRescueSecretRef | object | | no | Reference to the secret where the SSH key for the rescue system is stored |
-| sshKeys.robotRescueSecretRef.name | string | | yes | Name of the secret |
-| sshKeys.robotRescueSecretRef.key | object | | yes | Details about the keys used in the data of the secret |
-| sshKeys.robotRescueSecretRef.key.name | string | | yes | Name is the key in the secret's data where the SSH key's name is stored |
-| sshKeys.robotRescueSecretRef.key.publicKey | string | | yes | PublicKey is the key in the secret's data where the SSH key's public key is stored |
-| sshKeys.robotRescueSecretRef.key.privateKey | string | | yes | PrivateKey is the key in the secret's data where the SSH key's private key is stored |
-| controlPlaneEndpoint | object | | no | Set by the controller. It is the endpoint to communicate with the control plane |
-| controlPlaneEndpoint.host | string | | yes | Defines host |
-| controlPlaneEndpoint.port | int32 | | yes | Defines port |
-|controlPlaneLoadBalancer | object | | yes | Defines specs of load balancer |
-|controlPlaneLoadBalancer.enabled | bool | true | no | Specifies if a load balancer should be created |
-|controlPlaneLoadBalancer.name | string | | no | Name of load balancer |
- |controlPlaneLoadBalancer.algorithm | string | round_robin | no | Type of load balancer algorithm. Either round_robin or least_connections |
-|controlPlaneLoadBalancer.type | string | lb11 | no | Type of load balancer. One of lb11, lb21, lb31 |
-|controlPlaneLoadBalancer.port| int | 6443 | no | Load balancer port. Must be in range 1-65535 |
-|controlPlaneLoadBalancer.extraServices| []object | | no | Defines extra services of load balancer |
-|controlPlaneLoadBalancer.extraServices.protocol | string | | yes | Defines protocol. Must be one of https, http, or tcp |
-|controlPlaneLoadBalancer.extraServices.listenPort | int | | yes | Defines listen port. Must be in range 1-65535 |
-|controlPlaneLoadBalancer.extraServices.destinationPort | int | | yes | Defines destination port. Must be in range 1-65535 |
-|hcloudPlacementGroup | []object | | no | List of placement groups that should be defined in Hetzner API |
-|hcloudPlacementGroup.name | string | | yes | Name of placement group |
-|hcloudPlacementGroup.type | string | type | no | Type of placement group. Hetzner only supports 'spread' |
-| hetznerSecret | object | | yes | Reference to secret where Hetzner API credentials are stored |
-| hetznerSecret.name | string | | yes | Name of secret |
-| hetznerSecret.key | object | | yes | Reference to the keys that are used in the secret, either `hcloudToken` or `hetznerRobotUser` and `hetznerRobotPassword` need to be specified |
-| hetznerSecret.key.hcloudToken | string | | no | Name of the key where the token for the Hetzner Cloud API is stored |
-| hetznerSecret.key.hetznerRobotUser | string | | no | Name of the key where the username for the Hetzner Robot API is stored |
-| hetznerSecret.key.hetznerRobotPassword | string | | no | Name of the key where the password for the Hetzner Robot API is stored |
-