The Piraeus Operator manages LINSTOR clusters in Kubernetes.
All components of the LINSTOR software stack can be managed by the operator and associated Helm chart:
- DRBD
- etcd cluster for LINSTOR
- LINSTOR
- LINSTOR CSI driver
- CSI snapshotter
- Stork scheduler with LINSTOR integration
The operator can be deployed using the Helm v3 chart in `/charts` as follows:
- Prepare the hosts for DRBD deployment. There are several options:
  - Install DRBD directly on the hosts as documented.
  - Install the appropriate kernel headers package for your distribution and choose the appropriate kernel module injector. The operator will then compile and load the required modules.
- If you are deploying with images from private repositories, create a Kubernetes secret to allow pulling the images. This example creates a secret named `drbdiocred`:

  ```
  kubectl create secret docker-registry drbdiocred --docker-server=<SERVER> --docker-username=<YOUR_LOGIN> --docker-email=<YOUR_EMAIL> --docker-password=<YOUR_PASSWORD>
  ```

  The name of this secret must match the one specified in the Helm values, by passing `--set drbdRepoCred=drbdiocred` to helm.
- Configure storage for the LINSTOR etcd instance.

  There are various options for configuring the etcd instance for LINSTOR:
  - Use an existing storage provisioner with a default `StorageClass`.
  - Use `hostPath` volumes.
  - Disable persistence for basic testing. This can be done by adding `--set etcd.persistentVolume.enabled=false` to the `helm install` command below (see the combined example after this list).
- Configure a basic storage setup for LINSTOR:
  - Create storage pools from available devices. Recommended for simple setups. Guide
  - Create storage pools from an existing LVM setup. Guide

  Read the storage guide and configure as needed.
- Read the guide on securing the deployment and configure as needed.
- Read up on optional components and configure as needed.
- Finally create a Helm deployment named `piraeus-op` that will set up everything:

  ```
  helm install piraeus-op ./charts/piraeus
  ```
  You can pick from a number of example settings:
  - default values (Kubernetes v1.17+): `values.yaml`
  - images optimized for CN: `values.cn.yaml`
  - override for OpenShift: `values-openshift.yaml`

  A full list of all available options to pass to helm can be found here.
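For example, a minimal sketch combining several of the options from the steps above: the `drbdiocred` pull secret, disabled etcd persistence for testing, and one of the bundled values files (the exact path of `values.cn.yaml` within the repository is an assumption, so adjust it to where the file lives in your checkout):

```
# Illustrative only: combines flags documented in the steps above.
helm install piraeus-op ./charts/piraeus \
    --set drbdRepoCred=drbdiocred \
    --set etcd.persistentVolume.enabled=false \
    -f ./charts/piraeus/values.cn.yaml
```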
You can use the included Helm templates to create `hostPath` persistent volumes. Create as many PVs as needed to satisfy your configured etcd `replicas` (default 1).

Create the `hostPath` persistent volumes, substituting cluster node names accordingly in the `nodes=` option. For 1 replica:

```
helm install linstor-etcd ./charts/pv-hostpath --set "nodes={<NODE>}"
```

For 3 replicas:

```
helm install linstor-etcd ./charts/pv-hostpath --set "nodes={<NODE0>,<NODE1>,<NODE2>}"
```
Persistence for etcd is enabled by default.

Clusters with SELinux enabled hosts (for example: OpenShift clusters) need to relabel the created directory. This can be done automatically by passing `--set selinux=true` to the above `helm install` command.
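Combining the two, a sketch for a 3-replica setup on SELinux-enabled hosts (node names are placeholders):

```
# Creates three hostPath PVs and relabels the directories for SELinux.
helm install linstor-etcd ./charts/pv-hostpath \
    --set "nodes={<NODE0>,<NODE1>,<NODE2>}" \
    --set selinux=true
```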
LINSTOR can connect to an existing PostgreSQL, MariaDB or etcd database. For instance, for a PostgreSQL instance with the following configuration:

```
POSTGRES_DB: postgresdb
POSTGRES_USER: postgresadmin
POSTGRES_PASSWORD: admin123
```

The Helm chart can be configured to use this database instead of deploying an etcd cluster by adding the following to the Helm install command:

```
--set etcd.enabled=false --set "operator.controller.dbConnectionURL=jdbc:postgresql://postgres/postgresdb?user=postgresadmin&password=admin123"
```
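The same settings can be kept in a values file instead of `--set` flags. A minimal sketch (the file name `db-values.yaml` is illustrative; the keys mirror the flags above):

```
# Write the database settings to a values file and pass it to helm.
cat > db-values.yaml <<'EOF'
etcd:
  enabled: false
operator:
  controller:
    dbConnectionURL: jdbc:postgresql://postgres/postgresdb?user=postgresadmin&password=admin123
EOF
helm install piraeus-op ./charts/piraeus -f db-values.yaml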
You can configure resource requests and limits for all deployed containers. Take a look at this example chart configuration.
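As an illustration only, such settings usually follow the standard Kubernetes `resources` shape. The value path `operator.resources` below is an assumption for illustration; consult the linked example configuration for the chart's actual keys:

```
# NOTE: "operator.resources" is a hypothetical value path used for illustration.
helm install piraeus-op ./charts/piraeus \
    --set operator.resources.requests.cpu=100m \
    --set operator.resources.requests.memory=128Mi \
    --set operator.resources.limits.memory=256Mi
```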
Running multiple replicas of pods is recommended for high availability and fast error recovery. The following components can be started with multiple replicas:

- Operator: Set `operator.replicas` to the desired number of operator pods.
- CSI: Set `csi.controllerReplicas` to the desired number of CSI Controller pods.
- Linstor Controller: Set `operator.controller.replicas` to the desired number of LINSTOR controller pods.
- CSI Snapshotter: Set `csi-snapshotter.replicas` to the desired number of CSI Snapshot Controller pods.
- Etcd: Set `etcd.replicas` to the desired number of Etcd pods.
- Stork: Set `stork.replicas` to the desired number of both Stork plugin and Stork scheduler pods.
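For example, a sketch setting each of these values in one install (the replica counts are illustrative):

```
helm install piraeus-op ./charts/piraeus \
    --set operator.replicas=2 \
    --set csi.controllerReplicas=2 \
    --set operator.controller.replicas=2 \
    --set csi-snapshotter.replicas=2 \
    --set etcd.replicas=3 \
    --set stork.replicas=2
# etcd is typically run with an odd number of members to keep quorum.
```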
You can influence the assignment of various components to specific nodes. See the scheduling guide.
To protect the storage infrastructure of the cluster from accidentally deleting vital components, it is necessary to perform some manual steps before deleting a Helm deployment.
- Delete all volume claims managed by Piraeus components. You can use the following command to get a list of volume claims managed by Piraeus. After checking that none of the listed volumes still hold needed data, you can delete them using the generated `kubectl delete` commands.

  ```
  $ kubectl get pvc --all-namespaces -o=jsonpath='{range .items[?(@.metadata.annotations.volume\.beta\.kubernetes\.io/storage-provisioner=="linstor.csi.linbit.com")]}kubectl delete pvc --namespace {.metadata.namespace} {.metadata.name}{"\n"}{end}'
  kubectl delete pvc --namespace default data-mysql-0
  kubectl delete pvc --namespace default data-mysql-1
  kubectl delete pvc --namespace default data-mysql-2
  ```

  WARNING: These volumes, once deleted, cannot be recovered.
- Delete the LINSTOR controller and satellite resources.

  Deployment of the LINSTOR satellite and controller is controlled by the `linstorsatelliteset` and `linstorcontroller` resources. You can delete the resources associated with your deployment using `kubectl`:

  ```
  kubectl delete linstorsatelliteset <helm-deploy-name>-ns
  kubectl delete linstorcontroller <helm-deploy-name>-cs
  ```

  After a short wait, the controller and satellite pods should terminate. If they continue to run, you can check the above resources for errors (they are only removed after all associated pods terminate).
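  A sketch for inspecting these resources when pods keep running (the resource names follow the pattern above):

  ```
  # Show status and events for the custom resources to spot errors.
  kubectl describe linstorsatelliteset <helm-deploy-name>-ns
  kubectl describe linstorcontroller <helm-deploy-name>-cs
  ```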
- Delete the Helm deployment.

  If you removed all PVCs and all LINSTOR pods have terminated, you can uninstall the Helm deployment:

  ```
  helm uninstall piraeus-op
  ```

  However, due to Helm's current policy, the Custom Resource Definitions named `linstorcontroller` and `linstorsatelliteset` will not be deleted by the command. More information regarding Helm's current position on CRDs can be found here.
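  If you want to remove the CRDs as well, a sketch (the CRD names are inferred from the definition files in `charts/piraeus/crds` and should be verified with `kubectl get crd`):

  ```
  # Inferred CRD names; confirm them with "kubectl get crd" first.
  kubectl delete crd linstorcontrollers.piraeus.linbit.com \
      linstorsatellitesets.piraeus.linbit.com \
      linstorcsidrivers.piraeus.linbit.com
  ```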
The operator must be deployed within the cluster in order for it to have access to the controller endpoint, which is a Kubernetes service.
If you are deploying with images from a private repository, create a Kubernetes secret to allow pulling the images. Create a secret named `drbdiocred` like this:

```
kubectl create secret docker-registry drbdiocred --docker-server=<SERVER> --docker-username=<YOUR LOGIN> --docker-email=<YOUR EMAIL> --docker-password=<YOUR PASSWORD>
```
First you need to create the resource definitions:

```
kubectl create -f charts/piraeus/crds/piraeus.linbit.com_linstorcsidrivers_crd.yaml
kubectl create -f charts/piraeus/crds/piraeus.linbit.com_linstorsatellitesets_crd.yaml
kubectl create -f charts/piraeus/crds/piraeus.linbit.com_linstorcontrollers_crd.yaml
```
Then, take a look at the files in `deploy/piraeus` and make changes as you see fit. For example, you can edit the storage pool configuration by editing `operator-satelliteset.yaml` as shown in the storage guide.
Now you can finally deploy the LINSTOR cluster with:

```
kubectl create -Rf deploy/piraeus/
```
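Afterwards, a quick sketch for checking that the resources were created (the resource kinds match the CRDs created above):

```
# List the custom resources and the pods they spawn.
kubectl get linstorcontrollers,linstorsatellitesets,linstorcsidrivers
kubectl get pods
```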
Please see the dedicated UPGRADE document.
If you'd like to contribute, please visit https://github.com/piraeusdatastore/piraeus-operator and look through the issues to see if there is something you'd like to work on. If you'd like to contribute something not in an existing issue, please open a new issue beforehand.
If you'd like to report an issue, please use the issues interface on this project's GitHub page.
This project is built using the operator-sdk (version 0.19.4). Please refer to the documentation for the SDK.
Apache 2.0