From 34962e1bd503720c9a990c1220d6720980886e3e Mon Sep 17 00:00:00 2001
From: yuanyuan zhang <111744220+michelle-0808@users.noreply.github.com>
Date: Tue, 7 Jan 2025 17:45:33 +0800
Subject: [PATCH] docs: update kbcli cluster create, yaml, and monitoring docs
in release-1.0 (#8729)
(cherry picked from commit 0c9a9740d78fe47f6ea09e182f4b183ee1d438cb)
---
...e-and-connect-an-apecloud-mysql-cluster.md | 101 ++--
.../delete-mysql-cluster.md | 7 +-
.../kubeblocks-for-apecloud-mysql.md | 2 +-
.../manage-elasticsearch.md | 146 +++--
.../create-a-kafka-cluster.md | 330 +++++------
.../delete-kafka-cluster.md | 7 +-
.../kubeblocks-for-milvus/manage-milvus.md | 517 ++++++++----------
...create-and-connect-to-a-mongodb-cluster.md | 88 +--
.../delete-mongodb-cluster.md | 7 +-
.../kubeblocks-for-mongodb.md | 2 +-
.../create-and-connect-a-mysql-cluster.md | 98 ++--
.../delete-mysql-cluster.md | 7 +-
.../kubeblocks-for-mysql-community-edition.md | 2 +-
...create-and-connect-a-postgresql-cluster.md | 105 ++--
.../delete-a-postgresql-cluster.md | 7 +-
.../kubeblocks-for-postgresql.md | 2 +-
.../create-pulsar-cluster-on-kubeblocks.md | 214 ++++----
.../delete-a-pulsar-cluster.md | 7 +-
.../kubeblocks-for-pulsar.md | 2 +-
.../kubeblocks-for-qdrant/manage-qdrant.md | 103 ++--
.../manage-rabbitmq.md | 97 ++--
.../create-and-connect-a-redis-cluster.md | 118 ++--
.../delete-a-redis-cluster.md | 7 +-
.../kubeblocks-for-redis.md | 2 +-
.../manage-starrocks.md | 145 +----
.../observability/monitor-database.md | 203 +++++--
...e-and-connect-an-apecloud-mysql-cluster.md | 103 ++--
.../delete-mysql-cluster.md | 3 +-
.../kubeblocks-for-apecloud-mysql.md | 2 +-
.../manage-elasticsearch.md | 126 +++--
.../create-a-kafka-cluster.md | 288 +++++-----
.../delete-kafka-cluster.md | 3 +-
.../kubeblocks-for-kafka.md | 2 +-
.../kubeblocks-for-milvus/manage-milvus.md | 511 ++++++++---------
...create-and-connect-to-a-mongodb-cluster.md | 80 +--
.../delete-a-mongodb-cluster.md | 3 +-
.../kubeblocks-for-mongodb.md | 2 +-
.../create-and-connect-a-mysql-cluster.md | 102 ++--
.../delete-mysql-cluster.md | 3 +-
.../kubeblocks-for-mysql-community-edition.md | 2 +-
...create-and-connect-a-postgresql-cluster.md | 95 ++--
.../delete-a-postgresql-cluster.md | 3 +-
.../kubeblocks-for-postgresql.md | 2 +-
.../create-pulsar-cluster-on-kb.md | 210 +++----
.../delete-pulsar-cluster.md | 3 +-
.../kubeblocks-for-pulsar.md | 2 +-
.../kubeblocks-for-qdrant/manage-qdrant.md | 107 ++--
.../manage-rabbitmq.md | 87 +--
.../create-and-connect-to-a-redis-cluster.md | 112 ++--
.../delete-a-redis-cluster.md | 3 +-
.../kubeblocks-for-redis.md | 2 +-
.../manage-starrocks.md | 132 +----
.../observability/monitor-database.md | 201 +++++--
53 files changed, 2288 insertions(+), 2227 deletions(-)
diff --git a/docs/user_docs/kubeblocks-for-apecloud-mysql/cluster-management/create-and-connect-an-apecloud-mysql-cluster.md b/docs/user_docs/kubeblocks-for-apecloud-mysql/cluster-management/create-and-connect-an-apecloud-mysql-cluster.md
index fa35d59de86..6ce4662b8a0 100644
--- a/docs/user_docs/kubeblocks-for-apecloud-mysql/cluster-management/create-and-connect-an-apecloud-mysql-cluster.md
+++ b/docs/user_docs/kubeblocks-for-apecloud-mysql/cluster-management/create-and-connect-an-apecloud-mysql-cluster.md
@@ -105,71 +105,59 @@ KubeBlocks supports creating two types of ApeCloud MySQL clusters: Standalone an
KubeBlocks implements a `Cluster` CRD to define a cluster. Here is an example of creating a RaftGroup Cluster.
- If you only have one node for deploying a RaftGroup Cluster, set `spec.affinity.topologyKeys` as `null`. But for a production environment, it is not recommended to deploy all replicas on one node, which may decrease the cluster availability.
+ If you only have one node for deploying a RaftGroup Cluster, configure the cluster affinity by setting `spec.schedulingPolicy` or `spec.componentSpecs.schedulingPolicy`. For details, you can refer to the [API docs](https://kubeblocks.io/docs/preview/developer_docs/api-reference/cluster#apps.kubeblocks.io/v1.SchedulingPolicy). But for a production environment, it is not recommended to deploy all replicas on one node, which may decrease the cluster availability.
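   A minimal sketch of what such a scheduling policy might look like is shown below, assuming the `SchedulingPolicy` exposes the standard Pod scheduling fields (`affinity`, `tolerations`, `topologySpreadConstraints`) described in the API docs linked above; the cluster name, topology key, and toleration are placeholders.

   ```yaml
   # Hypothetical sketch only: prefer spreading replicas across nodes,
   # while still tolerating a single-node environment.
   spec:
     schedulingPolicy:
       affinity:
         podAntiAffinity:
           preferredDuringSchedulingIgnoredDuringExecution:
           - weight: 100
             podAffinityTerm:
               labelSelector:
                 matchLabels:
                   app.kubernetes.io/instance: mycluster   # placeholder cluster name
               topologyKey: kubernetes.io/hostname
       tolerations:
       - key: node-role.kubeblocks.io/data-plane            # placeholder taint key
         operator: Exists
         effect: NoSchedule
   ```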
```yaml
cat <
-KubeBlocks implements a `Cluster` CRD to define a cluster. Here is an example of creating an Elasticsearch cluster.
+KubeBlocks implements a `Cluster` CRD to define a cluster. Here is an example of creating an Elasticsearch cluster with multiple nodes. For more examples, refer to [the GitHub repository](https://github.com/apecloud/kubeblocks-addons/tree/main/examples/elasticsearch).
+
+If you only have one node for deploying a cluster with multiple nodes, configure the cluster affinity by setting `spec.schedulingPolicy` or `spec.componentSpecs.schedulingPolicy`. For details, you can refer to the [API docs](https://kubeblocks.io/docs/preview/developer_docs/api-reference/cluster#apps.kubeblocks.io/v1.SchedulingPolicy). But for a production environment, it is not recommended to deploy all replicas on one node, which may decrease the cluster availability.
```yaml
cat <
-1. Create a Kafka cluster. If you only have one node for deploying a cluster with multiple replicas, set `spec.affinity.topologyKeys` as `null`. But for a production environment, it is not recommended to deploy all replicas on one node, which may decrease the cluster availability.
+1. Create a Kafka cluster. If you only have one node for deploying a cluster with multiple replicas, configure the cluster affinity by setting `spec.schedulingPolicy` or `spec.componentSpecs.schedulingPolicy`. For details, you can refer to the [API docs](https://kubeblocks.io/docs/preview/developer_docs/api-reference/cluster#apps.kubeblocks.io/v1.SchedulingPolicy). But for a production environment, it is not recommended to deploy all replicas on one node, which may decrease the cluster availability.
+
+ For more cluster examples, refer to [the GitHub repository](https://github.com/apecloud/kubeblocks-addons/tree/main/examples/kafka).
* Create a Kafka cluster in combined mode.
- ```yaml
- # create kafka in combined mode
- kubectl apply -f - <
-
-
-
-KubeBlocks implements a `Cluster` CRD to define a cluster. Here is an example of creating a Milvus cluster.
+KubeBlocks implements a `Cluster` CRD to define a cluster. Here is an example of creating a Milvus cluster. If you only have one node for deploying a cluster with multiple replicas, configure the cluster affinity by setting `spec.schedulingPolicy` or `spec.componentSpecs.schedulingPolicy`. For details, you can refer to the [API docs](https://kubeblocks.io/docs/preview/developer_docs/api-reference/cluster#apps.kubeblocks.io/v1.SchedulingPolicy). But for a production environment, it is not recommended to deploy all replicas on one node, which may decrease the cluster availability.
```yaml
cat <
-
-
-
-***Steps***
-
-1. Execute the following command to create a Milvus cluster.
-
- ```bash
- kbcli cluster create mycluster --cluster-definition=milvus-2.3.2 -n demo
- ```
-
-:::note
-
-If you want to customize your cluster specifications, `kbcli` provides various options, such as setting cluster version, termination policy, CPU, and memory. You can view these options by adding `--help` or `-h` flag.
-
-```bash
-kbcli cluster create milvus --help
-
-kbcli cluster create milvus -h
-```
-
-:::
-
-2. Check whether the cluster is created successfully.
-
- ```bash
- kbcli cluster list -n demo
- >
- NAME NAMESPACE CLUSTER-DEFINITION VERSION TERMINATION-POLICY STATUS CREATED-TIME
- mycluster demo milvus-2.3.2 Delete Running Jul 05,2024 17:35 UTC+0800
- ```
-
-3. Check the cluster information.
-
- ```bash
- kbcli cluster describe mycluster -n demo
- >
- Name: milvus Created Time: Jul 05,2024 17:35 UTC+0800
- NAMESPACE CLUSTER-DEFINITION VERSION STATUS TERMINATION-POLICY
- demo milvus-2.3.2 Running Delete
-
- Endpoints:
- COMPONENT MODE INTERNAL EXTERNAL
- milvus ReadWrite milvus-milvus.default.svc.cluster.local:19530
- minio ReadWrite milvus-minio.default.svc.cluster.local:9000
- proxy ReadWrite milvus-proxy.default.svc.cluster.local:19530
- milvus-proxy.default.svc.cluster.local:9091
-
- Topology:
- COMPONENT INSTANCE ROLE STATUS AZ NODE CREATED-TIME
- etcd milvus-etcd-0 Running Jul 05,2024 17:35 UTC+0800
- minio milvus-minio-0 Running Jul 05,2024 17:35 UTC+0800
- milvus milvus-milvus-0 Running Jul 05,2024 17:35 UTC+0800
- indexnode milvus-indexnode-0 Running Jul 05,2024 17:35 UTC+0800
- mixcoord milvus-mixcoord-0 Running Jul 05,2024 17:35 UTC+0800
- querynode milvus-querynode-0 Running Jul 05,2024 17:35 UTC+0800
- datanode milvus-datanode-0 Running Jul 05,2024 17:35 UTC+0800
- proxy milvus-proxy-0 Running Jul 05,2024 17:35 UTC+0800
-
- Resources Allocation:
- COMPONENT DEDICATED CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE-SIZE STORAGE-CLASS
- milvus false 1 / 1 1Gi / 1Gi data:20Gi csi-hostpath-sc
- etcd false 1 / 1 1Gi / 1Gi data:20Gi csi-hostpath-sc
- minio false 1 / 1 1Gi / 1Gi data:20Gi csi-hostpath-sc
- proxy false 1 / 1 1Gi / 1Gi data:20Gi csi-hostpath-sc
- mixcoord false 1 / 1 1Gi / 1Gi data:20Gi csi-hostpath-sc
- datanode false 1 / 1 1Gi / 1Gi data:20Gi csi-hostpath-sc
- indexnode false 1 / 1 1Gi / 1Gi data:20Gi csi-hostpath-sc
- querynode false 1 / 1 1Gi / 1Gi data:20Gi csi-hostpath-sc
-
- Images:
- COMPONENT TYPE IMAGE
- milvus milvus milvusdb/milvus:v2.3.2
- etcd etcd docker.io/milvusdb/etcd:3.5.5-r2
- minio minio docker.io/minio/minio:RELEASE.2022-03-17T06-34-49Z
- proxy proxy milvusdb/milvus:v2.3.2
- mixcoord mixcoord milvusdb/milvus:v2.3.2
- datanode datanode milvusdb/milvus:v2.3.2
- indexnode indexnode milvusdb/milvus:v2.3.2
- querynode querynode milvusdb/milvus:v2.3.2
-
- Show cluster events: kbcli cluster list-events -n demo milvus
- ```
-
-
-
-
-
## Scale
Currently, KubeBlocks supports vertically scaling a Milvus cluster.
@@ -909,10 +857,9 @@ The termination policy determines how a cluster is deleted.
| **terminationPolicy** | **Deleting Operation** |
|:----------------------|:-------------------------------------------------|
-| `DoNotTerminate` | `DoNotTerminate` blocks delete operation. |
-| `Halt` | `Halt` deletes Cluster resources like Pods and Services but retains Persistent Volume Claims (PVCs), allowing for data preservation while stopping other operations. Halt policy is deprecated in v0.9.1 and will have same meaning as DoNotTerminate. |
-| `Delete` | `Delete` extends the Halt policy by also removing PVCs, leading to a thorough cleanup while removing all persistent data. |
-| `WipeOut` | `WipeOut` deletes all Cluster resources, including volume snapshots and backups in external storage. This results in complete data removal and should be used cautiously, especially in non-production environments, to avoid irreversible data loss. |
+| `DoNotTerminate` | `DoNotTerminate` prevents deletion of the Cluster. This policy ensures that all resources remain intact. |
+| `Delete` | `Delete` deletes Cluster resources like Pods, Services, and Persistent Volume Claims (PVCs), leading to a thorough cleanup while removing all persistent data. |
+| `WipeOut` | `WipeOut` is an aggressive policy that deletes all Cluster resources, including volume snapshots and backups in external storage. This results in complete data removal and should be used cautiously, primarily in non-production environments to avoid irreversible data loss. |
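For reference, the policy is a single field on the Cluster object, `spec.terminationPolicy`; a minimal, hypothetical snippet (cluster name and namespace are placeholders) might look like this.

```yaml
apiVersion: apps.kubeblocks.io/v1
kind: Cluster
metadata:
  name: mycluster      # placeholder
  namespace: demo      # placeholder
spec:
  terminationPolicy: Delete   # one of DoNotTerminate, Delete, WipeOut
```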
To check the termination policy, execute the following command.
diff --git a/docs/user_docs/kubeblocks-for-mongodb/cluster-management/create-and-connect-to-a-mongodb-cluster.md b/docs/user_docs/kubeblocks-for-mongodb/cluster-management/create-and-connect-to-a-mongodb-cluster.md
index 4b8b340a645..d85d6d3dcb5 100644
--- a/docs/user_docs/kubeblocks-for-mongodb/cluster-management/create-and-connect-to-a-mongodb-cluster.md
+++ b/docs/user_docs/kubeblocks-for-mongodb/cluster-management/create-and-connect-to-a-mongodb-cluster.md
@@ -100,57 +100,58 @@ KubeBlocks supports creating two types of MongoDB clusters: Standalone and Repli
KubeBlocks implements a `Cluster` CRD to define a cluster. Here is an example of creating a MongoDB Standalone.
- If you only have one node for deploying a ReplicaSet Cluster, set `spec.affinity.topologyKeys` as `null`. But for a production environment, it is not recommended to deploy all replicas on one node, which may decrease the cluster availability.
+ If you only have one node for deploying a ReplicaSet Cluster, configure the cluster affinity by setting `spec.schedulingPolicy` or `spec.componentSpecs.schedulingPolicy`. For details, you can refer to the [API docs](https://kubeblocks.io/docs/preview/developer_docs/api-reference/cluster#apps.kubeblocks.io/v1.SchedulingPolicy). But for a production environment, it is not recommended to deploy all replicas on one node, which may decrease the cluster availability.
```yaml
cat <
-1. Create a PostgreSQL cluster.
+1. Create a PostgreSQL cluster. If you only have one node for deploying a Replication Cluster, configure the cluster affinity by setting `spec.schedulingPolicy` or `spec.componentSpecs.schedulingPolicy`. For details, you can refer to the [API docs](https://kubeblocks.io/docs/preview/developer_docs/api-reference/cluster#apps.kubeblocks.io/v1.SchedulingPolicy). But for a production environment, it is not recommended to deploy all replicas on one node, which may decrease the cluster availability.
 KubeBlocks implements a `Cluster` CRD to define a cluster. Here is an example of creating a Replication Cluster.
```yaml
cat <-postgresql`. For example, if your cluster name is `mycluster`, the value would be `mycluster-postgresql`. Replace `mycluster` with your actual cluster name as needed. |
+ | `spec.componentSpecs.replicas` | It specifies the number of replicas of the component. |
+ | `spec.componentSpecs.resources` | It specifies the resources required by the Component. |
+ | `spec.componentSpecs.volumeClaimTemplates` | It specifies a list of PersistentVolumeClaim templates that define the storage requirements for the Component. |
+ | `spec.componentSpecs.volumeClaimTemplates.name` | It refers to the name of a volumeMount defined in `componentDefinition.spec.runtime.containers[*].volumeMounts`. |
+ | `spec.componentSpecs.volumeClaimTemplates.spec.storageClassName` | It is the name of the StorageClass required by the claim. If not specified, the StorageClass annotated with `storageclass.kubernetes.io/is-default-class=true` will be used by default. |
+ | `spec.componentSpecs.volumeClaimTemplates.spec.resources.storage` | You can set the storage size as needed. |
+
+ For more API fields and descriptions, refer to the [API Reference](https://kubeblocks.io/docs/preview/developer_docs/api-reference/cluster).
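
   Putting several of these fields together, a hypothetical `componentSpecs` entry for a cluster named `mycluster` might look like the sketch below; the component name, volume name, and resource sizes are illustrative only.

   ```yaml
   spec:
     componentSpecs:
     - name: postgresql                 # illustrative component name
       labels:
         # Required by Patroni; the value must follow <cluster-name>-postgresql. Do not remove.
         apps.kubeblocks.postgres.patroni/scope: mycluster-postgresql
       disableExporter: false           # keep false if you plan to scrape metrics
       replicas: 2
       resources:
         requests:
           cpu: "0.5"
           memory: 0.5Gi
         limits:
           cpu: "0.5"
           memory: 0.5Gi
       volumeClaimTemplates:
       - name: data                     # must match a volumeMount in the ComponentDefinition
         spec:
           # storageClassName is omitted, so the default StorageClass is used
           accessModes:
           - ReadWriteOnce
           resources:
             requests:
               storage: 20Gi
   ```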
KubeBlocks operator watches for the `Cluster` CRD and creates the cluster and all dependent resources. You can get all the resources created by the cluster with `kubectl get all,secret,rolebinding,serviceaccount -l app.kubernetes.io/instance=mycluster -n demo`.
@@ -209,16 +205,15 @@ KubeBlocks supports creating two types of PostgreSQL clusters: Standalone and Re
kbcli cluster create postgresql -h
```
- For example, you can create a Replication Cluster with the `--replicas` flag.
-
- ```bash
- kbcli cluster create postgresql mycluster --replicas=2 -n demo
- ```
-
- If you only have one node for deploying a Replication Cluster, set the `--topology-keys` as `null` when creating a Replication Cluster. But you should note that for a production environment, it is not recommended to deploy all replicas on one node, which may decrease the cluster availability.
+   If you only have one node for deploying a Replication Cluster, you can configure the cluster affinity by setting `--pod-anti-affinity`, `--tolerations`, and `--topology-keys` when creating it. But note that for a production environment, it is not recommended to deploy all replicas on one node, as this may decrease the cluster availability. For example,
```bash
- kbcli cluster create postgresql mycluster --replicas=2 --availability-policy='none' -n demo
+ kbcli cluster create postgresql mycluster \
+ --mode='replication' \
+ --pod-anti-affinity='Preferred' \
+ --tolerations='node-role.kubeblocks.io/data-plane:NoSchedule' \
+ --topology-keys='null' \
+ --namespace demo
```
2. Verify whether this cluster is created successfully.
diff --git a/docs/user_docs/kubeblocks-for-postgresql/cluster-management/delete-a-postgresql-cluster.md b/docs/user_docs/kubeblocks-for-postgresql/cluster-management/delete-a-postgresql-cluster.md
index fc852418627..cbf896fb860 100644
--- a/docs/user_docs/kubeblocks-for-postgresql/cluster-management/delete-a-postgresql-cluster.md
+++ b/docs/user_docs/kubeblocks-for-postgresql/cluster-management/delete-a-postgresql-cluster.md
@@ -21,10 +21,9 @@ The termination policy determines how a cluster is deleted.
| **terminationPolicy** | **Deleting Operation** |
|:----------------------|:-------------------------------------------------|
-| `DoNotTerminate` | `DoNotTerminate` blocks delete operation. |
-| `Halt` | `Halt` deletes Cluster resources like Pods and Services but retains Persistent Volume Claims (PVCs), allowing for data preservation while stopping other operations. Halt policy is deprecated in v0.9.1 and will have same meaning as DoNotTerminate. |
-| `Delete` | `Delete` extends the Halt policy by also removing PVCs, leading to a thorough cleanup while removing all persistent data. |
-| `WipeOut` | `WipeOut` deletes all Cluster resources, including volume snapshots and backups in external storage. This results in complete data removal and should be used cautiously, especially in non-production environments, to avoid irreversible data loss. |
+| `DoNotTerminate` | `DoNotTerminate` prevents deletion of the Cluster. This policy ensures that all resources remain intact. |
+| `Delete` | `Delete` deletes Cluster resources like Pods, Services, and Persistent Volume Claims (PVCs), leading to a thorough cleanup while removing all persistent data. |
+| `WipeOut` | `WipeOut` is an aggressive policy that deletes all Cluster resources, including volume snapshots and backups in external storage. This results in complete data removal and should be used cautiously, primarily in non-production environments to avoid irreversible data loss. |
To check the termination policy, execute the following command.
diff --git a/docs/user_docs/kubeblocks-for-postgresql/kubeblocks-for-postgresql.md b/docs/user_docs/kubeblocks-for-postgresql/kubeblocks-for-postgresql.md
index e96e3a02960..4030a15dce3 100644
--- a/docs/user_docs/kubeblocks-for-postgresql/kubeblocks-for-postgresql.md
+++ b/docs/user_docs/kubeblocks-for-postgresql/kubeblocks-for-postgresql.md
@@ -7,7 +7,7 @@ sidebar_position: 1
# KubeBlocks for PostgreSQL
-This tutorial illustrates how to create and manage a PostgreSQL cluster by `kbcli`, `kubectl` or a YAML file. You can find the YAML examples in [the GitHub repository](https://github.com/apecloud/kubeblocks-addons/tree/release-0.9/examples/postgresql).
+This tutorial illustrates how to create and manage a PostgreSQL cluster by `kbcli`, `kubectl` or a YAML file. You can find the YAML examples in [the GitHub repository](https://github.com/apecloud/kubeblocks-addons/tree/main/examples/postgresql).
* [Introduction](./introduction/introduction.md)
* [Cluster Management](./cluster-management/create-and-connect-a-postgresql-cluster.md)
diff --git a/docs/user_docs/kubeblocks-for-pulsar/cluster-management/create-pulsar-cluster-on-kubeblocks.md b/docs/user_docs/kubeblocks-for-pulsar/cluster-management/create-pulsar-cluster-on-kubeblocks.md
index d69106df377..fd6ea875021 100644
--- a/docs/user_docs/kubeblocks-for-pulsar/cluster-management/create-pulsar-cluster-on-kubeblocks.md
+++ b/docs/user_docs/kubeblocks-for-pulsar/cluster-management/create-pulsar-cluster-on-kubeblocks.md
@@ -78,107 +78,123 @@ Refer to the [Pulsar official document](https://pulsar.apache.org/docs/3.1.x/) f
## Create Pulsar cluster
-1. Create the Pulsar cluster template file `values-production.yaml` for `helm` locally.
-
- Copy the following information to the local file `values-production.yaml`.
-
- ```bash
- ## Bookies configuration
- bookies:
- resources:
- limits:
- memory: 8Gi
- requests:
- cpu: 2
- memory: 8Gi
-
- persistence:
- data:
- storageClassName: kb-default-sc
- size: 128Gi
- log:
- storageClassName: kb-default-sc
- size: 64Gi
-
- ## Zookeeper configuration
- zookeeper:
- resources:
- limits:
- memory: 2Gi
- requests:
- cpu: 1
- memory: 2Gi
-
- persistence:
- data:
- storageClassName: kb-default-sc
- size: 20Gi
- log:
- storageClassName: kb-default-sc
- size: 20Gi
-
- broker:
- replicaCount: 3
- resources:
- limits:
- memory: 8Gi
- requests:
- cpu: 2
- memory: 8Gi
+1. Create a Pulsar cluster in basic mode. For other cluster modes, check out the examples provided in [the GitHub repository](https://github.com/apecloud/kubeblocks-addons/tree/main/examples/pulsar). If you only have one node for deploying a Pulsar Cluster, configure the cluster affinity by setting `spec.schedulingPolicy` or `spec.componentSpecs.schedulingPolicy`. For details, you can refer to the [API docs](https://kubeblocks.io/docs/preview/developer_docs/api-reference/cluster#apps.kubeblocks.io/v1.SchedulingPolicy). But for a production environment, it is not recommended to deploy all replicas on one node, which may decrease the cluster availability.
+
+ ```yaml
+ cat <,bookies.persistence.log.storageClassName=,zookeeper.persistence.data.storageClassName=,zookeeper.persistence.log.storageClassName= --namespace=demo
- ```
-
- You can specify the storage name ``.
-
-3. Verify the cluster created.
+ | Field | Definition |
+ |---------------------------------------|--------------------------------------|
+ | `spec.terminationPolicy` | It is the policy of cluster termination. Valid values are `DoNotTerminate`, `Delete`, `WipeOut`. For the detailed definition, you can refer to [Termination Policy](./delete-a-pulsar-cluster.md#termination-policy). |
+ | `spec.clusterDef` | It specifies the name of the ClusterDefinition to use when creating a Cluster. **Note: DO NOT UPDATE THIS FIELD**. The value must be `pulsar` to create a Pulsar Cluster. |
+ | `spec.topology` | It specifies the name of the ClusterTopology to be used when creating the Cluster. |
+ | `spec.services` | It defines a list of additional Services that are exposed by a Cluster. |
+ | `spec.componentSpecs` | It is the list of ClusterComponentSpec objects that define the individual Components that make up a Cluster. This field allows customized configuration of each component within a cluster. |
+ | `spec.componentSpecs.serviceVersion` | It specifies the version of the Service expected to be provisioned by this Component. Valid options are [2.11.2,3.0.2]. |
+ | `spec.componentSpecs.disableExporter` | It determines whether metrics exporter information is annotated on the Component's headless Service. Valid options are [true, false]. |
+   | `spec.componentSpecs.replicas`        | It specifies the number of replicas of the component. |
+ | `spec.componentSpecs.resources` | It specifies the resources required by the Component. |
+ | `spec.componentSpecs.volumeClaimTemplates` | It specifies a list of PersistentVolumeClaim templates that define the storage requirements for the Component. |
+ | `spec.componentSpecs.volumeClaimTemplates.name` | It refers to the name of a volumeMount defined in `componentDefinition.spec.runtime.containers[*].volumeMounts`. |
+ | `spec.componentSpecs.volumeClaimTemplates.spec.storageClassName` | It is the name of the StorageClass required by the claim. If not specified, the StorageClass annotated with `storageclass.kubernetes.io/is-default-class=true` will be used by default. |
+ | `spec.componentSpecs.volumeClaimTemplates.spec.resources.storage` | You can set the storage size as needed. |
+
+ For more API fields and descriptions, refer to the [API Reference](https://kubeblocks.io/docs/preview/developer_docs/api-reference/cluster).
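
   As a rough sketch of how the top-level fields from this table fit together (the topology name shown is hypothetical; list the topologies defined by the `pulsar` ClusterDefinition to find the exact values in your environment):

   ```yaml
   apiVersion: apps.kubeblocks.io/v1
   kind: Cluster
   metadata:
     name: mycluster
     namespace: demo
   spec:
     terminationPolicy: Delete
     clusterDef: pulsar                # do not update this field
     topology: pulsar-basic-cluster    # hypothetical topology name; verify with:
                                       # kubectl get clusterdefinition pulsar -o jsonpath='{.spec.topologies[*].name}'
     componentSpecs:
     - name: broker                    # illustrative component
       serviceVersion: 3.0.2           # valid options per the table: 2.11.2, 3.0.2
       disableExporter: false
       replicas: 3
   ```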
+
+2. Verify the cluster created.
```bash
kubectl get cluster mycluster -n demo
diff --git a/docs/user_docs/kubeblocks-for-pulsar/cluster-management/delete-a-pulsar-cluster.md b/docs/user_docs/kubeblocks-for-pulsar/cluster-management/delete-a-pulsar-cluster.md
index da885db967d..51e4df68fdf 100644
--- a/docs/user_docs/kubeblocks-for-pulsar/cluster-management/delete-a-pulsar-cluster.md
+++ b/docs/user_docs/kubeblocks-for-pulsar/cluster-management/delete-a-pulsar-cluster.md
@@ -21,10 +21,9 @@ The termination policy determines how a cluster is deleted.
| **terminationPolicy** | **Deleting Operation** |
|:----------------------|:-------------------------------------------------|
-| `DoNotTerminate` | `DoNotTerminate` blocks delete operation. |
-| `Halt` | `Halt` deletes Cluster resources like Pods and Services but retains Persistent Volume Claims (PVCs), allowing for data preservation while stopping other operations. Halt policy is deprecated in v0.9.1 and will have same meaning as DoNotTerminate. |
-| `Delete` | `Delete` extends the Halt policy by also removing PVCs, leading to a thorough cleanup while removing all persistent data. |
-| `WipeOut` | `WipeOut` deletes all Cluster resources, including volume snapshots and backups in external storage. This results in complete data removal and should be used cautiously, especially in non-production environments, to avoid irreversible data loss. |
+| `DoNotTerminate` | `DoNotTerminate` prevents deletion of the Cluster. This policy ensures that all resources remain intact. |
+| `Delete` | `Delete` deletes Cluster resources like Pods, Services, and Persistent Volume Claims (PVCs), leading to a thorough cleanup while removing all persistent data. |
+| `WipeOut` | `WipeOut` is an aggressive policy that deletes all Cluster resources, including volume snapshots and backups in external storage. This results in complete data removal and should be used cautiously, primarily in non-production environments to avoid irreversible data loss. |
To check the termination policy, execute the following command.
diff --git a/docs/user_docs/kubeblocks-for-pulsar/kubeblocks-for-pulsar.md b/docs/user_docs/kubeblocks-for-pulsar/kubeblocks-for-pulsar.md
index cb19cd2ec36..8822b4abac8 100644
--- a/docs/user_docs/kubeblocks-for-pulsar/kubeblocks-for-pulsar.md
+++ b/docs/user_docs/kubeblocks-for-pulsar/kubeblocks-for-pulsar.md
@@ -7,7 +7,7 @@ sidebar_position: 1
# KubeBlocks for Pulsar
-This tutorial illustrates how to create and manage a Pulsar cluster by `kbcli`, `kubectl` or a YAML file. You can find the YAML examples in [the GitHub repository](https://github.com/apecloud/kubeblocks-addons/tree/release-0.9/examples/pulsar).
+This tutorial illustrates how to create and manage a Pulsar cluster by `kbcli`, `kubectl` or a YAML file. You can find the YAML examples in [the GitHub repository](https://github.com/apecloud/kubeblocks-addons/tree/main/examples/pulsar).
* [Cluster Management](./cluster-management/create-pulsar-cluster-on-kubeblocks.md)
* [Configuration](./configuration/configuration.md)
diff --git a/docs/user_docs/kubeblocks-for-qdrant/manage-qdrant.md b/docs/user_docs/kubeblocks-for-qdrant/manage-qdrant.md
index 11b76d2ce83..d4b03b14647 100644
--- a/docs/user_docs/kubeblocks-for-qdrant/manage-qdrant.md
+++ b/docs/user_docs/kubeblocks-for-qdrant/manage-qdrant.md
@@ -13,7 +13,7 @@ import TabItem from '@theme/TabItem';
The popularity of generative AI has drawn widespread attention and ignited the vector database market. Qdrant (read: quadrant) is a vector similarity search engine and vector database. It provides a production-ready service with a convenient API to store, search, and manage points—vectors with an additional payload. Qdrant is tailored to extended filtering support, which makes it useful for all sorts of neural-network or semantic-based matching, faceted search, and other applications.
-KubeBlocks supports the management of Qdrant. This tutorial illustrates how to create and manage a Qdrant cluster by `kbcli`, `kubectl` or a YAML file. You can find the YAML examples in [the GitHub repository](https://github.com/apecloud/kubeblocks-addons/tree/release-0.9/examples/qdrant).
+KubeBlocks supports the management of Qdrant. This tutorial illustrates how to create and manage a Qdrant cluster by `kbcli`, `kubectl` or a YAML file. You can find the YAML examples in [the GitHub repository](https://github.com/apecloud/kubeblocks-addons/tree/main/examples/qdrant).
## Before you start
@@ -34,67 +34,58 @@ KubeBlocks supports the management of Qdrant. This tutorial illustrates how to c
-KubeBlocks implements a `Cluster` CRD to define a cluster. Here is an example of creating a Qdrant Replication cluster. Primary and Secondary are distributed on different nodes by default. But if you only have one node for deploying a Replication Cluster, set `spec.affinity.topologyKeys` as `null`.
+KubeBlocks implements a `Cluster` CRD to define a cluster. Here is an example of creating a Qdrant Replication cluster. Primary and Secondary are distributed on different nodes by default. But if you only have one node for deploying a Replication Cluster, configure the cluster affinity by setting `spec.schedulingPolicy` or `spec.componentSpecs.schedulingPolicy`. For details, you can refer to the [API docs](https://kubeblocks.io/docs/preview/developer_docs/api-reference/cluster#apps.kubeblocks.io/v1.SchedulingPolicy). But for a production environment, it is not recommended to deploy all replicas on one node, which may decrease the cluster availability.
```yaml
cat <
-KubeBlocks implements a `Cluster` CRD to define a cluster. Here is an example of creating a Standalone.
+KubeBlocks implements a `Cluster` CRD to define a cluster. Here is an example of creating a Replication cluster. KubeBlocks also supports creating a Redis cluster in other modes. You can refer to the examples provided in the [GitHub repository](https://github.com/apecloud/kubeblocks-addons/tree/main/examples/redis).
+
+If you only have one node for deploying a Replication Cluster, configure the cluster affinity by setting `spec.schedulingPolicy` or `spec.componentSpecs.schedulingPolicy`. For details, you can refer to the [API docs](https://kubeblocks.io/docs/preview/developer_docs/api-reference/cluster#apps.kubeblocks.io/v1.SchedulingPolicy). But for a production environment, it is not recommended to deploy all replicas on one node, which may decrease the cluster availability.
```yaml
cat <
-
-
-
-KubeBlocks implements a `Cluster` CRD to define a cluster. Here is an example of creating a StarRocks cluster.
+KubeBlocks implements a `Cluster` CRD to define a cluster. Here is an example of creating a StarRocks cluster. If you only have one node for deploying a cluster with multiple replicas, configure the cluster affinity by setting `spec.schedulingPolicy` or `spec.componentSpecs.schedulingPolicy`. For details, you can refer to the [API docs](https://kubeblocks.io/docs/preview/developer_docs/api-reference/cluster#apps.kubeblocks.io/v1.SchedulingPolicy). But for a production environment, it is not recommended to deploy all replicas on one node, which may decrease the cluster availability.
```yaml
cat <
-
-
-
-***Steps***
-
-1. Execute the following command to create a StarRocks cluster.
-
- ```bash
- kbcli cluster create mycluster --cluster-definition=starrocks -n demo
- ```
-
- You can also create a cluster with specified CPU, memory and storage values.
-
- ```bash
- kbcli cluster create mycluster --cluster-definition=starrocks --set cpu=1,memory=2Gi,storage=10Gi -n demo
- ```
-
-:::note
-
-If you want to customize your cluster specifications, `kbcli` provides various options, such as setting cluster version, termination policy, CPU, and memory. You can view these options by adding `--help` or `-h` flag.
-
-```bash
-kbcli cluster create --help
-kbcli cluster create -h
-```
-
-:::
-
-2. Check whether the cluster is created successfully.
-
- ```bash
- kbcli cluster list -n demo
- >
- NAME NAMESPACE CLUSTER-DEFINITION VERSION TERMINATION-POLICY STATUS CREATED-TIME
- mycluster demo starrocks starrocks-3.1.1 Delete Running Jul 17,2024 19:06 UTC+0800
- ```
-
-3. Check the cluster information.
-
- ```bash
- kbcli cluster describe mycluster -n demo
- >
- Name: mycluster Created Time: Jul 17,2024 19:06 UTC+0800
- NAMESPACE CLUSTER-DEFINITION VERSION STATUS TERMINATION-POLICY
- demo starrocks starrocks-3.1.1 Running Delete
-
- Endpoints:
- COMPONENT MODE INTERNAL EXTERNAL
- fe ReadWrite mycluster-fe.default.svc.cluster.local:9030
-
- Topology:
- COMPONENT INSTANCE ROLE STATUS AZ NODE CREATED-TIME
- be mycluster-be-0 Running minikube/192.168.49.2 Jul 17,2024 19:06 UTC+0800
- fe mycluster-fe-0 Running minikube/192.168.49.2 Jul 17,2024 19:06 UTC+0800
-
- Resources Allocation:
- COMPONENT DEDICATED CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE-SIZE STORAGE-CLASS
- fe false 1 / 1 1Gi / 1Gi data:20Gi standard
- be false 1 / 1 1Gi / 1Gi data:20Gi standard
-
- Images:
- COMPONENT TYPE IMAGE
- fe fe apecloud-registry.cn-zhangjiakou.cr.aliyuncs.com/apecloud/fe-ubuntu:2.5.4
- be be apecloud-registry.cn-zhangjiakou.cr.aliyuncs.com/apecloud/fe-ubuntu:2.5.4
-
- Show cluster events: kbcli cluster list-events -n demo mycluster
- ```
-
-
-
-
-
## Scale
### Scale vertically
@@ -946,10 +856,9 @@ The termination policy determines how a cluster is deleted.
| **terminationPolicy** | **Deleting Operation** |
|:----------------------|:-------------------------------------------------|
-| `DoNotTerminate` | `DoNotTerminate` blocks delete operation. |
-| `Halt` | `Halt` deletes Cluster resources like Pods and Services but retains Persistent Volume Claims (PVCs), allowing for data preservation while stopping other operations. Halt policy is deprecated in v0.9.1 and will have same meaning as DoNotTerminate. |
-| `Delete` | `Delete` extends the Halt policy by also removing PVCs, leading to a thorough cleanup while removing all persistent data. |
-| `WipeOut` | `WipeOut` deletes all Cluster resources, including volume snapshots and backups in external storage. This results in complete data removal and should be used cautiously, especially in non-production environments, to avoid irreversible data loss. |
+| `DoNotTerminate` | `DoNotTerminate` prevents deletion of the Cluster. This policy ensures that all resources remain intact. |
+| `Delete` | `Delete` deletes Cluster resources like Pods, Services, and Persistent Volume Claims (PVCs), leading to a thorough cleanup while removing all persistent data. |
+| `WipeOut` | `WipeOut` is an aggressive policy that deletes all Cluster resources, including volume snapshots and backups in external storage. This results in complete data removal and should be used cautiously, primarily in non-production environments to avoid irreversible data loss. |
To check the termination policy, execute the following command.
diff --git a/docs/user_docs/observability/monitor-database.md b/docs/user_docs/observability/monitor-database.md
index b587ef05809..22c21eae63c 100644
--- a/docs/user_docs/observability/monitor-database.md
+++ b/docs/user_docs/observability/monitor-database.md
@@ -113,47 +113,40 @@ Make sure `spec.componentSpecs.disableExporter` is set to `false` when creating
```yaml
cat <
+
+
+
+ You can also find the latest example YAML file in the [KubeBlocks Addons repo](https://github.com/apecloud/kubeblocks-addons/blob/main/examples/apecloud-mysql/pod-monitor.yaml).
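
   As a rough sketch of what such a PodMonitor typically contains — assuming the exporter exposes a metrics port named `http-metrics` at `/metrics`, and that the cluster is named `mycluster` in the `demo` namespace — the linked example remains the authoritative reference:

   ```yaml
   apiVersion: monitoring.coreos.com/v1
   kind: PodMonitor
   metadata:
     name: mycluster-pod-monitor        # placeholder name
     namespace: monitoring              # a namespace watched by your Prometheus instance
   spec:
     jobLabel: app.kubernetes.io/instance
     podMetricsEndpoints:
     - port: http-metrics               # assumed exporter port name; check the linked example
       path: /metrics                   # assumed metrics path
       scheme: http
     namespaceSelector:
       matchNames:
       - demo                           # namespace where the cluster runs
     selector:
       matchLabels:
         app.kubernetes.io/instance: mycluster
   ```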
```yaml
kubectl apply -f - <
+
+
+
+ You can also find the latest example YAML file in the [KubeBlocks Addons repo](https://github.com/apecloud/kubeblocks-addons/blob/main/examples/mysql/pod-monitor.yaml).
+
+ ```yaml
+ kubectl apply -f - <
+
+
+
+ You can also find the latest example YAML file in the [KubeBlocks Addons repo](https://github.com/apecloud/kubeblocks-addons/blob/main/examples/postgresql/pod-monitor.yaml).
+
+ ```yaml
+ kubectl apply -f - <
+
+
+
+ You can also find the latest example YAML file in the [KubeBlocks Addons repo](https://github.com/apecloud/kubeblocks-addons/blob/main/examples/redis/pod-monitor.yaml).
+
+ ```yaml
+ kubectl apply -f - <
+
+
+
3. Access the Grafana dashboard.
Log in to the Grafana dashboard and import the dashboard.
diff --git a/i18n/zh-cn/user-docs/kubeblocks-for-apecloud-mysql/cluster-management/create-and-connect-an-apecloud-mysql-cluster.md b/i18n/zh-cn/user-docs/kubeblocks-for-apecloud-mysql/cluster-management/create-and-connect-an-apecloud-mysql-cluster.md
index 6878db39579..d1d36fd1f69 100644
--- a/i18n/zh-cn/user-docs/kubeblocks-for-apecloud-mysql/cluster-management/create-and-connect-an-apecloud-mysql-cluster.md
+++ b/i18n/zh-cn/user-docs/kubeblocks-for-apecloud-mysql/cluster-management/create-and-connect-an-apecloud-mysql-cluster.md
@@ -105,71 +105,59 @@ KubeBlocks 支持创建两种类型的 ApeCloud MySQL 集群:单机版(Stand
KubeBlocks 通过 `Cluster` 定义集群。以下是创建 ApeCloud MySQL 集群版的示例。
- 如果您只有一个节点可用于部署集群版,可将 `spec.affinity.topologyKeys` 设置为 `null`。但生产环境中,不建议将所有副本部署在同一个节点上,因为这可能会降低集群的可用性。
+ 如果您只有一个节点可用于部署集群版,可设置 `spec.schedulingPolicy` 或 `spec.componentSpecs.schedulingPolicy`,具体可参考 [API 文档](https://kubeblocks.io/docs/preview/developer_docs/api-reference/cluster#apps.kubeblocks.io/v1.SchedulingPolicy)。但生产环境中,不建议将所有副本部署在同一个节点上,因为这可能会降低集群的可用性。
```yaml
cat <
-KubeBlocks 通过 `Cluster` 定义集群。以下是创建 Elasticsearch 集群的示例。Pod 默认分布在不同节点。但如果您只有一个节点可用于部署集群,可将 `spec.affinity.topologyKeys` 设置为 `null`。
-
-:::note
-
-生产环境中,不建议将所有副本部署在同一个节点上,因为这可能会降低集群的可用性。
-
-:::
+KubeBlocks 通过 `Cluster` 定义集群。以下是创建 Elasticsearch 集群的示例。Pod 默认分布在不同节点。如果您只有一个节点可用于部署集群，可设置 `spec.schedulingPolicy` 或 `spec.componentSpecs.schedulingPolicy`，具体可参考 [API 文档](https://kubeblocks.io/docs/preview/developer_docs/api-reference/cluster#apps.kubeblocks.io/v1.SchedulingPolicy)。但生产环境中，不建议将所有副本部署在同一个节点上，因为这可能会降低集群的可用性。
```yaml
cat <
-
-
-
-KubeBlocks 通过 `Cluster` 定义集群。以下是创建 Milvus 集群的示例。Pod 默认分布在不同节点。但如果您只有一个节点可用于部署集群,可将 `spec.affinity.topologyKeys` 设置为 `null`。
-
-:::note
-
-生产环境中,不建议将所有副本部署在同一个节点上,因为这可能会降低集群的可用性。
-
-:::
+KubeBlocks 通过 `Cluster` 定义集群。以下是创建 Milvus 集群的示例。Pod 默认分布在不同节点。如果您只有一个节点可用于部署多副本集群,可设置 `spec.schedulingPolicy` 或 `spec.componentSpecs.schedulingPolicy`,具体可参考 [API 文档](https://kubeblocks.io/docs/preview/developer_docs/api-reference/cluster#apps.kubeblocks.io/v1.SchedulingPolicy)。但生产环境中,不建议将所有副本部署在同一个节点上,因为这可能会降低集群的可用性。
```yaml
cat <
-
-
-
-1. 创建一个 Milvus 集群。
-
- ```bash
- kbcli cluster create mycluster --cluster-definition=milvus-2.3.2 -n demo
- ```
-
- 如果您需要自定义集群规格,kbcli 也提供了诸多参数,如支持设置引擎版本、终止策略、CPU、内存规格。您可通过在命令结尾添加 `--help` 或 `-h` 来查看具体说明。比如,
-
- ```bash
- kbcli cluster create milvus --help
-
- kbcli cluster create milvus -h
- ```
-
-2. 检查集群是否已创建。
-
- ```bash
- kbcli cluster list -n demo
- >
- NAME NAMESPACE CLUSTER-DEFINITION VERSION TERMINATION-POLICY STATUS CREATED-TIME
- mycluster demo milvus-2.3.2 Delete Running Jul 05,2024 17:35 UTC+0800
- ```
-
-3. 查看集群信息。
-
- ```bash
- kbcli cluster describe mycluster -n demo
- >
- Name: milvus Created Time: Jul 05,2024 17:35 UTC+0800
- NAMESPACE CLUSTER-DEFINITION VERSION STATUS TERMINATION-POLICY
- demo milvus-2.3.2 Running Delete
-
- Endpoints:
- COMPONENT MODE INTERNAL EXTERNAL
- milvus ReadWrite milvus-milvus.default.svc.cluster.local:19530
- minio ReadWrite milvus-minio.default.svc.cluster.local:9000
- proxy ReadWrite milvus-proxy.default.svc.cluster.local:19530
- milvus-proxy.default.svc.cluster.local:9091
-
- Topology:
- COMPONENT INSTANCE ROLE STATUS AZ NODE CREATED-TIME
- etcd milvus-etcd-0 Running Jul 05,2024 17:35 UTC+0800
- minio milvus-minio-0 Running Jul 05,2024 17:35 UTC+0800
- milvus milvus-milvus-0 Running Jul 05,2024 17:35 UTC+0800
- indexnode milvus-indexnode-0 Running Jul 05,2024 17:35 UTC+0800
- mixcoord milvus-mixcoord-0 Running Jul 05,2024 17:35 UTC+0800
- querynode milvus-querynode-0 Running Jul 05,2024 17:35 UTC+0800
- datanode milvus-datanode-0 Running Jul 05,2024 17:35 UTC+0800
- proxy milvus-proxy-0 Running Jul 05,2024 17:35 UTC+0800
-
- Resources Allocation:
- COMPONENT DEDICATED CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE-SIZE STORAGE-CLASS
- milvus false 1 / 1 1Gi / 1Gi data:20Gi csi-hostpath-sc
- etcd false 1 / 1 1Gi / 1Gi data:20Gi csi-hostpath-sc
- minio false 1 / 1 1Gi / 1Gi data:20Gi csi-hostpath-sc
- proxy false 1 / 1 1Gi / 1Gi data:20Gi csi-hostpath-sc
- mixcoord false 1 / 1 1Gi / 1Gi data:20Gi csi-hostpath-sc
- datanode false 1 / 1 1Gi / 1Gi data:20Gi csi-hostpath-sc
- indexnode false 1 / 1 1Gi / 1Gi data:20Gi csi-hostpath-sc
- querynode false 1 / 1 1Gi / 1Gi data:20Gi csi-hostpath-sc
-
- Images:
- COMPONENT TYPE IMAGE
- milvus milvus milvusdb/milvus:v2.3.2
- etcd etcd docker.io/milvusdb/etcd:3.5.5-r2
- minio minio docker.io/minio/minio:RELEASE.2022-03-17T06-34-49Z
- proxy proxy milvusdb/milvus:v2.3.2
- mixcoord mixcoord milvusdb/milvus:v2.3.2
- datanode datanode milvusdb/milvus:v2.3.2
- indexnode indexnode milvusdb/milvus:v2.3.2
- querynode querynode milvusdb/milvus:v2.3.2
-
- Show cluster events: kbcli cluster list-events -n demo milvus
- ```
-
-
-
-
-
## 扩缩容
当前,KubeBlocks 支持垂直扩缩容 Milvus 集群。
@@ -911,8 +859,7 @@ mycluster demo milvus-2.3.2 Delete
| **终止策略** | **删除操作** |
|:----------------------|:-------------------------------------------------------------------------------------------|
| `DoNotTerminate` | `DoNotTerminate` 禁止删除操作。 |
-| `Halt` | `Halt` 删除集群资源(如 Pods、Services 等),但保留 PVC。停止其他运维操作的同时,保留了数据。但 `Halt` 策略在 v0.9.1 中已删除,设置为 `Halt` 的效果与 `DoNotTerminate` 相同。 |
-| `Delete` | `Delete` 在 `Halt` 的基础上,删除 PVC 及所有持久数据。 |
+| `Delete` | `Delete` 删除 Pod、服务、PVC 等集群资源,删除所有持久数据。 |
| `WipeOut` | `WipeOut` 删除所有集群资源,包括外部存储中的卷快照和备份。使用该策略将会删除全部数据,特别是在非生产环境,该策略将会带来不可逆的数据丢失。请谨慎使用。 |
执行以下命令查看终止策略。
diff --git a/i18n/zh-cn/user-docs/kubeblocks-for-mongodb/cluster-management/create-and-connect-to-a-mongodb-cluster.md b/i18n/zh-cn/user-docs/kubeblocks-for-mongodb/cluster-management/create-and-connect-to-a-mongodb-cluster.md
index 6a12e758d21..ca08104dbd0 100644
--- a/i18n/zh-cn/user-docs/kubeblocks-for-mongodb/cluster-management/create-and-connect-to-a-mongodb-cluster.md
+++ b/i18n/zh-cn/user-docs/kubeblocks-for-mongodb/cluster-management/create-and-connect-to-a-mongodb-cluster.md
@@ -90,7 +90,7 @@ import TabItem from '@theme/TabItem';
### 创建集群
-KubeBlocks 支持创建两种 MongoDB 集群:单机版(Standalone)和主备版(ReplicaSet)。MongoDB 单机版仅支持一个副本,适用于对可用性要求较低的场景。对于高可用性要求较高的场景,建议创建主备版集群,以支持自动故障切换。为了确保高可用性,所有的副本都默认分布在不同的节点上。
+KubeBlocks 支持创建两种 MongoDB 集群:单机版(Standalone)和主备版(ReplicaSet)。MongoDB 单机版仅支持一个副本,适用于对可用性要求较低的场景。对于高可用性要求较高的场景,建议创建主备版集群,以支持自动故障切换。为了确保高可用性,所有的副本都默认分布在不同的节点上。如果您只有一个节点可用于部署主备版集群,可设置 `spec.schedulingPolicy` 或 `spec.componentSpecs.schedulingPolicy`,具体可参考 [API 文档](https://kubeblocks.io/docs/preview/developer_docs/api-reference/cluster#apps.kubeblocks.io/v1.SchedulingPolicy)。但生产环境中,不建议将所有副本部署在同一个节点上,因为这可能会降低集群的可用性。
@@ -102,53 +102,54 @@ KubeBlocks 支持创建两种 MongoDB 集群:单机版(Standalone)和主
```yaml
cat <
1. 创建 MySQL 集群。
-
+
KubeBlocks 通过 `Cluster` 定义集群。以下是创建 MySQL 主备版的示例。
- 如果您只有一个节点可用于部署集群版,可将 `spec.affinity.topologyKeys` 设置为 `null`。但生产环境中,不建议将所有副本部署在同一个节点上,因为这可能会降低集群的可用性。
+ 如果您只有一个节点可用于部署主备版集群,可设置 `spec.schedulingPolicy` 或 `spec.componentSpecs.schedulingPolicy`,具体可参考 [API 文档](https://kubeblocks.io/docs/preview/developer_docs/api-reference/cluster#apps.kubeblocks.io/v1.SchedulingPolicy)。但生产环境中,不建议将所有副本部署在同一个节点上,因为这可能会降低集群的可用性。
```yaml
cat < - `DoNotTerminate` 会阻止删除操作。
- `Halt` 会删除工作负载资源,如 statefulset 和 deployment 等,但是保留了 PVC 。
- `Delete` 在 `Halt` 的基础上进一步删除了 PVC。
- `WipeOut` 在 `Delete` 的基础上从备份存储的位置完全删除所有卷快照和快照数据。
|
- | `spec.affinity` | 为集群的 Pods 定义了一组节点亲和性调度规则。该字段可控制 Pods 在集群中节点上的分布。 |
- | `spec.affinity.podAntiAffinity` | 定义了不在同一 component 中的 Pods 的反亲和性水平。该字段决定了 Pods 以何种方式跨节点分布,以提升可用性和性能。 |
- | `spec.affinity.topologyKeys` | 用于定义 Pod 反亲和性和 Pod 分布约束的拓扑域的节点标签值。 |
- | `spec.tolerations` | 该字段为数组,用于定义集群中 Pods 的容忍,确保 Pod 可被调度到具有匹配污点的节点上。 |
- | `spec.componentSpecs` | 集群 components 列表,定义了集群 components。该字段允许对集群中的每个 component 进行自定义配置。 |
- | `spec.componentSpecs.componentDefRef` | 表示 cluster definition 中定义的 component definition 的名称,可通过执行 `kubectl get clusterdefinition postgresql -o json \| jq '.spec.componentDefs[].name'` 命令获取 component definition 名称。 |
- | `spec.componentSpecs.name` | 定义了 component 的名称。 |
- | `spec.componentSpecs.disableExporter` | 定义了是否开启监控功能。 |
+ | `spec.terminationPolicy` | 集群终止策略,有效值为 `DoNotTerminate`、`Delete` 和 `WipeOut`。具体定义可参考 [终止策略](./delete-a-postgresql-cluster.md#终止策略)。 |
+ | `spec.clusterDef` | 指定了创建集群时要使用的 ClusterDefinition 的名称。**注意**:**请勿更新此字段**。创建 PostgreSQL 集群时,该值必须为 `postgresql`。 |
+ | `spec.topology` | 指定了在创建集群时要使用的 ClusterTopology 的名称。建议值为 [replication]。 |
+ | `spec.componentSpecs` | 集群 component 列表,定义了集群 components。该字段支持自定义配置集群中每个 component。 |
+ | `spec.componentSpecs.serviceVersion` | 定义了 component 部署的服务版本。有效值为 [12.14.0,12.14.1,12.15.0,14.7.2,14.8.0,15.7.0,16.4.0] |
+ | `spec.componentSpecs.disableExporter` | 定义了是否在 component 无头服务(headless service)上标注指标 exporter 信息,是否开启监控 exporter。有效值为 [true, false]。 |
+ | `spec.componentSpecs.labels` | 指定了要覆盖或添加的标签,这些标签将应用于 component 所拥有的底层 Pod、PVC、账号和 TLS 密钥以及服务。 |
+ | `spec.componentSpecs.labels.apps.kubeblocks.postgres.patroni/scope` | PostgreSQL 的 `ComponentDefinition` 指定了环境变量 `KUBERNETES_SCOPE_LABEL=apps.kubeblocks.postgres.patroni/scope`。该变量定义了 Patroni 用于标记 Kubernetes 资源的标签键,帮助 Patroni 识别哪些资源属于指定的范围(或集群)。**注意**:**不要删除此标签**。该值必须遵循 `-postgresql` 格式。例如,如果你的集群名称是 `mycluster`,则该值应为 `mycluster-postgresql`。可按需将 `mycluster` 替换为你的实际集群名称。 |
| `spec.componentSpecs.replicas` | 定义了 component 中 replicas 的数量。 |
| `spec.componentSpecs.resources` | 定义了 component 的资源要求。 |
+ | `spec.componentSpecs.volumeClaimTemplates` | PersistentVolumeClaim 模板列表,定义 component 的存储需求。 |
+ | `spec.componentSpecs.volumeClaimTemplates.name` | 引用了在 `componentDefinition.spec.runtime.containers[*].volumeMounts` 中定义的 volumeMount 名称。 |
+ | `spec.componentSpecs.volumeClaimTemplates.spec.storageClassName` | 定义了 StorageClass 的名称。如果未指定,系统将默认使用带有 `storageclass.kubernetes.io/is-default-class=true` 注释的 StorageClass。 |
+ | `spec.componentSpecs.volumeClaimTemplates.spec.resources.storage` | 可按需配置存储容量。 |
+
+ 您可参考 [API 文档](https://kubeblocks.io/docs/preview/developer_docs/api-reference/cluster),查看更多 API 字段及说明。
KubeBlocks operator 监控 `Cluster` CRD 并创建集群和全部依赖资源。您可执行以下命令获取集群创建的所有资源信息。
@@ -218,10 +214,15 @@ KubeBlocks 支持创建两种 PostgreSQL 集群:单机版(Standalone)和
kbcli cluster create postgresql mycluster --replicas=2 -n demo
```
- 如果您只有一个节点用于部署三节点集群,可在创建集群时将 `topology-keys` 设为 `null`。但需要注意的是,生产环境中,不建议将所有副本部署在同一个节点上,因为这可能会降低集群的可用性。
+   如果您只有一个节点用于部署主备版集群，可在创建集群时通过 `--pod-anti-affinity`、`--tolerations` 和 `--topology-keys` 配置集群亲和性。但需要注意的是，生产环境中，不建议将所有副本部署在同一个节点上，因为这可能会降低集群的可用性。例如，
```bash
- kbcli cluster create postgresql mycluster --replicas=2 --availability-policy='none' -n demo
+ kbcli cluster create postgresql mycluster \
+ --mode='replication' \
+ --pod-anti-affinity='Preferred' \
+ --tolerations='node-role.kubeblocks.io/data-plane:NoSchedule' \
+ --topology-keys='null' \
+ --namespace demo
```
2. 验证集群是否创建成功。
diff --git a/i18n/zh-cn/user-docs/kubeblocks-for-postgresql/cluster-management/delete-a-postgresql-cluster.md b/i18n/zh-cn/user-docs/kubeblocks-for-postgresql/cluster-management/delete-a-postgresql-cluster.md
index 0df42bf62d6..b5fc09cc540 100644
--- a/i18n/zh-cn/user-docs/kubeblocks-for-postgresql/cluster-management/delete-a-postgresql-cluster.md
+++ b/i18n/zh-cn/user-docs/kubeblocks-for-postgresql/cluster-management/delete-a-postgresql-cluster.md
@@ -22,8 +22,7 @@ import TabItem from '@theme/TabItem';
| **终止策略** | **删除操作** |
|:----------------------|:-------------------------------------------------------------------------------------------|
| `DoNotTerminate` | `DoNotTerminate` 禁止删除操作。 |
-| `Halt` | `Halt` 删除集群资源(如 Pods、Services 等),但保留 PVC。停止其他运维操作的同时,保留了数据。但 `Halt` 策略在 v0.9.1 中已启用,设置为 `Halt` 的效果与 `DoNotTerminate` 相同。 |
-| `Delete` | `Delete` 在 `Halt` 的基础上,删除 PVC 及所有持久数据。 |
+| `Delete` | `Delete` 删除 Pod、服务、PVC 等集群资源,删除所有持久数据。 |
| `WipeOut` | `WipeOut` 删除所有集群资源,包括外部存储中的卷快照和备份。使用该策略将会删除全部数据,特别是在非生产环境,该策略将会带来不可逆的数据丢失。请谨慎使用。 |
执行以下命令查看终止策略。
diff --git a/i18n/zh-cn/user-docs/kubeblocks-for-postgresql/kubeblocks-for-postgresql.md b/i18n/zh-cn/user-docs/kubeblocks-for-postgresql/kubeblocks-for-postgresql.md
index 4044866e0ab..8051a640ad6 100644
--- a/i18n/zh-cn/user-docs/kubeblocks-for-postgresql/kubeblocks-for-postgresql.md
+++ b/i18n/zh-cn/user-docs/kubeblocks-for-postgresql/kubeblocks-for-postgresql.md
@@ -7,7 +7,7 @@ sidebar_position: 1
# 用 KubeBlocks 管理 PostgreSQL
-本文档展示了如何通过 kbcli、kubectl 或 YAML 文件等当时创建和管理 PostgreSQL 集群。您可以在 [GitHub 仓库](https://github.com/apecloud/kubeblocks-addons/tree/release-0.9/examples/postgresql)查看 YAML 示例。
+本文档展示了如何通过 kbcli、kubectl 或 YAML 文件等方式创建和管理 PostgreSQL 集群。您可以在 [GitHub 仓库](https://github.com/apecloud/kubeblocks-addons/tree/main/examples/postgresql)查看 YAML 示例。
* [简介](./introduction/introduction.md)
* [集群管理](./cluster-management/create-and-connect-a-postgresql-cluster.md)
diff --git a/i18n/zh-cn/user-docs/kubeblocks-for-pulsar/cluster-management/create-pulsar-cluster-on-kb.md b/i18n/zh-cn/user-docs/kubeblocks-for-pulsar/cluster-management/create-pulsar-cluster-on-kb.md
index 42869116574..00ce785f594 100644
--- a/i18n/zh-cn/user-docs/kubeblocks-for-pulsar/cluster-management/create-pulsar-cluster-on-kb.md
+++ b/i18n/zh-cn/user-docs/kubeblocks-for-pulsar/cluster-management/create-pulsar-cluster-on-kb.md
@@ -79,103 +79,123 @@ KubeBlocks 可以通过良好的抽象快速集成新引擎,并支持 Pulsar
## 创建 Pulsar 集群
-1. 在本地创建 `helm` 使用的 Pulsar 集群模板文件 `values-production.yaml`。
-
- 将以下信息复制到本地文件 `values-production.yaml` 中。
-
- ```bash
- ## 配置 Bookies
- bookies:
- resources:
- limits:
- memory: 8Gi
- requests:
- cpu: 2
- memory: 8Gi
-
- persistence:
- data:
- storageClassName: kb-default-sc
- size: 128Gi
- log:
- storageClassName: kb-default-sc
- size: 64Gi
-
- ## 配置 Zookeeper
- zookeeper:
- resources:
- limits:
- memory: 2Gi
- requests:
- cpu: 1
- memory: 2Gi
-
- persistence:
- data:
- storageClassName: kb-default-sc
- size: 20Gi
- log:
- storageClassName: kb-default-sc
- size: 20Gi
-
- broker:
- replicaCount: 3
- resources:
- limits:
- memory: 8Gi
- requests:
- cpu: 2
- memory: 8Gi
+1. 创建基础模式的 Pulsar 集群。如需创建其他集群模式,您可查看 [GitHub 仓库中的示例](https://github.com/apecloud/kubeblocks-addons/tree/main/examples/pulsar)。
+
+ ```yaml
+ cat <,bookies.persistence.log.storageClassName=,zookeeper.persistence.data.storageClassName=,zookeeper.persistence.log.storageClassName= --namespace demo
- ```
-
- 您可以指定存储名称 ``。
-
-3. 验证已创建的集群。
+ | 字段 | 定义 |
+ |---------------------------------------|--------------------------------------|
+ | `spec.terminationPolicy` | 集群终止策略,有效值为 `DoNotTerminate`、`Delete` 和 `WipeOut`。具体定义可参考 [终止策略](./delete-pulsar-cluster.md#终止策略)。 |
+ | `spec.clusterDef` | 指定了创建集群时要使用的 ClusterDefinition 的名称。**注意**:**请勿更新此字段**。创建 Pulsar 集群时,该值必须为 `pulsar`。 |
+ | `spec.topology` | 指定了在创建集群时要使用的 ClusterTopology 的名称。 |
+ | `spec.services` | 定义了集群暴露的额外服务列表。 |
+ | `spec.componentSpecs` | 集群 component 列表,定义了集群 components。该字段支持自定义配置集群中每个 component。 |
+ | `spec.componentSpecs.serviceVersion` | 定义了 component 部署的服务版本。有效值为 [2.11.2,3.0.2]。 |
+ | `spec.componentSpecs.disableExporter` | 定义了是否在 component 无头服务(headless service)上标注指标 exporter 信息,是否开启监控 exporter。有效值为 [true, false]。 |
+ | `spec.componentSpecs.replicas` | 定义了 component 中 replicas 的数量。 |
+ | `spec.componentSpecs.resources` | 定义了 component 的资源要求。 |
+ | `spec.componentSpecs.volumeClaimTemplates` | PersistentVolumeClaim 模板列表,定义 component 的存储需求。 |
+ | `spec.componentSpecs.volumeClaimTemplates.name` | 引用了在 `componentDefinition.spec.runtime.containers[*].volumeMounts` 中定义的 volumeMount 名称。 |
+ | `spec.componentSpecs.volumeClaimTemplates.spec.storageClassName` | 定义了 StorageClass 的名称。如果未指定,系统将默认使用带有 `storageclass.kubernetes.io/is-default-class=true` 注释的 StorageClass。 |
+ | `spec.componentSpecs.volumeClaimTemplates.spec.resources.storage` | 可按需配置存储容量。 |
+
+ 您可参考 [API 文档](https://kubeblocks.io/docs/preview/developer_docs/api-reference/cluster),查看更多 API 字段及说明。
+
+2. 验证已创建的集群。
```bash
kubectl get cluster mycluster -n demo
diff --git a/i18n/zh-cn/user-docs/kubeblocks-for-pulsar/cluster-management/delete-pulsar-cluster.md b/i18n/zh-cn/user-docs/kubeblocks-for-pulsar/cluster-management/delete-pulsar-cluster.md
index ff2fc29cc34..9c7369aac3a 100644
--- a/i18n/zh-cn/user-docs/kubeblocks-for-pulsar/cluster-management/delete-pulsar-cluster.md
+++ b/i18n/zh-cn/user-docs/kubeblocks-for-pulsar/cluster-management/delete-pulsar-cluster.md
@@ -22,8 +22,7 @@ import TabItem from '@theme/TabItem';
| **终止策略** | **删除操作** |
|:----------------------|:-------------------------------------------------------------------------------------------|
| `DoNotTerminate` | `DoNotTerminate` 禁止删除操作。 |
-| `Halt` | `Halt` 删除集群资源(如 Pods、Services 等),但保留 PVC。停止其他运维操作的同时,保留了数据。但 `Halt` 策略在 v0.9.1 中已删除,设置为 `Halt` 的效果与 `DoNotTerminate` 相同。 |
-| `Delete` | `Delete` 在 `Halt` 的基础上,删除 PVC 及所有持久数据。 |
+| `Delete` | `Delete` 删除 Pod、服务、PVC 等集群资源,删除所有持久数据。 |
| `WipeOut` | `WipeOut` 删除所有集群资源,包括外部存储中的卷快照和备份。使用该策略将会删除全部数据,特别是在非生产环境,该策略将会带来不可逆的数据丢失。请谨慎使用。 |
执行以下命令查看终止策略。
diff --git a/i18n/zh-cn/user-docs/kubeblocks-for-pulsar/kubeblocks-for-pulsar.md b/i18n/zh-cn/user-docs/kubeblocks-for-pulsar/kubeblocks-for-pulsar.md
index d0c9862f207..d30f0f523ae 100644
--- a/i18n/zh-cn/user-docs/kubeblocks-for-pulsar/kubeblocks-for-pulsar.md
+++ b/i18n/zh-cn/user-docs/kubeblocks-for-pulsar/kubeblocks-for-pulsar.md
@@ -7,7 +7,7 @@ sidebar_position: 1
# 用 KubeBlocks 管理 Pulsar
-本文档展示了如何通过 kbcli、kubectl 或 YAML 文件等当时创建和管理 Pulsar 集群。您可以在 [GitHub 仓库](https://github.com/apecloud/kubeblocks-addons/tree/release-0.9/examples/pulsar)查看 YAML 示例。
+本文档展示了如何通过 kbcli、kubectl 或 YAML 文件等方式创建和管理 Pulsar 集群。您可以在 [GitHub 仓库](https://github.com/apecloud/kubeblocks-addons/tree/main/examples/pulsar)查看 YAML 示例。
* [集群管理](./cluster-management/create-pulsar-cluster-on-kb.md)
* [配置](./configuration/configuration.md)
diff --git a/i18n/zh-cn/user-docs/kubeblocks-for-qdrant/manage-qdrant.md b/i18n/zh-cn/user-docs/kubeblocks-for-qdrant/manage-qdrant.md
index bcfce4fec33..7967361c017 100644
--- a/i18n/zh-cn/user-docs/kubeblocks-for-qdrant/manage-qdrant.md
+++ b/i18n/zh-cn/user-docs/kubeblocks-for-qdrant/manage-qdrant.md
@@ -15,7 +15,7 @@ import TabItem from '@theme/TabItem';
Qdrant(读作:quadrant)是向量相似性搜索引擎和向量数据库。它提供了生产可用的服务和便捷的 API,用于存储、搜索和管理点(即带有额外负载的向量)。Qdrant 专门针对扩展过滤功能进行了优化,使其在各种神经网络或基于语义的匹配、分面搜索以及其他应用中充分发挥作用。
-目前,KubeBlocks 支持 Qdrant 的管理和运维。本文档展示了如何通过 kbcli、kubectl 或 YAML 文件等当时创建和管理 Qdrant 集群。您可以在 [GitHub 仓库](https://github.com/apecloud/kubeblocks-addons/tree/release-0.9/examples/qdrant)查看 YAML 示例。
+目前，KubeBlocks 支持 Qdrant 的管理和运维。本文档展示了如何通过 kbcli、kubectl 或 YAML 文件等方式创建和管理 Qdrant 集群。您可以在 [GitHub 仓库](https://github.com/apecloud/kubeblocks-addons/tree/main/examples/qdrant)查看 YAML 示例。
## 开始之前
@@ -38,73 +38,58 @@ Qdrant(读作:quadrant)是向量相似性搜索引擎和向量数据库。
-KubeBlocks 通过 `Cluster` 定义集群。以下是创建 Qdrant 集群的示例。Pod 默认分布在不同节点。但如果您只有一个节点可用于部署集群,可将 `spec.affinity.topologyKeys` 设置为 `null`。
-
-:::note
-
-生产环境中,不建议将所有副本部署在同一个节点上,因为这可能会降低集群的可用性。
-
-:::
+KubeBlocks 通过 `Cluster` 定义集群。以下是创建 Qdrant 集群的示例。Pod 默认分布在不同节点。如果您只有一个节点可用于部署多副本集群,可设置 `spec.schedulingPolicy` 或 `spec.componentSpecs.schedulingPolicy`,具体可参考 [API 文档](https://kubeblocks.io/docs/preview/developer_docs/api-reference/cluster#apps.kubeblocks.io/v1.SchedulingPolicy)。但生产环境中,不建议将所有副本部署在同一个节点上,因为这可能会降低集群的可用性。
```yaml
cat <
@@ -113,65 +113,73 @@ KubeBlocks 支持创建两种 Redis 集群:单机版(Standalone)和主备
```yaml
cat <
-
-
-
-KubeBlocks 通过 `Cluster` 定义集群。以下是创建 StarRocks 集群的示例。Pod 默认分布在不同节点。但如果您只有一个节点可用于部署集群,可将 `spec.affinity.topologyKeys` 设置为 `null`。
-
-:::note
-
-生产环境中,不建议将所有副本部署在同一个节点上,因为这可能会降低集群的可用性。
-
-:::
+KubeBlocks 通过 `Cluster` 定义集群。以下是创建 StarRocks 集群的示例。Pod 默认分布在不同节点。如果您只有一个节点可用于部署多副本集群,可设置 `spec.schedulingPolicy` 或 `spec.componentSpecs.schedulingPolicy`,具体可参考 [API 文档](https://kubeblocks.io/docs/preview/developer_docs/api-reference/cluster#apps.kubeblocks.io/v1.SchedulingPolicy)。但生产环境中,不建议将所有副本部署在同一个节点上,因为这可能会降低集群的可用性。
```yaml
cat <
-
-
-
-1. 执行以下命令,创建 StarRocks 集群。
-
- ```bash
- kbcli cluster create mycluster --cluster-definition=starrocks -n demo
- ```
-
- 如果您需要自定义集群规格,kbcli 也提供了诸多参数,如支持设置引擎版本、终止策略、CPU、内存规格。您可通过在命令结尾添加 --help 或 -h 来查看具体说明。比如,
-
- ```bash
- kbcli cluster create --help
- kbcli cluster create -h
- ```
-
-2. 验证集群是否创建成功。
-
- ```bash
- kbcli cluster list -n demo
- >
- NAME NAMESPACE CLUSTER-DEFINITION VERSION TERMINATION-POLICY STATUS CREATED-TIME
- mycluster demo starrocks starrocks-3.1.1 Delete Running Jul 17,2024 19:06 UTC+0800
- ```
-
-3. 查看集群信息。
-
- ```bash
- kbcli cluster describe mycluster -n demo
- >
- Name: mycluster Created Time: Jul 17,2024 19:06 UTC+0800
- NAMESPACE CLUSTER-DEFINITION VERSION STATUS TERMINATION-POLICY
- demo starrocks starrocks-3.1.1 Running Delete
-
- Endpoints:
- COMPONENT MODE INTERNAL EXTERNAL
- fe ReadWrite mycluster-fe.default.svc.cluster.local:9030
-
- Topology:
- COMPONENT INSTANCE ROLE STATUS AZ NODE CREATED-TIME
- be mycluster-be-0 Running minikube/192.168.49.2 Jul 17,2024 19:06 UTC+0800
- fe mycluster-fe-0 Running minikube/192.168.49.2 Jul 17,2024 19:06 UTC+0800
-
- Resources Allocation:
- COMPONENT DEDICATED CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE-SIZE STORAGE-CLASS
- fe false 1 / 1 1Gi / 1Gi data:20Gi standard
- be false 1 / 1 1Gi / 1Gi data:20Gi standard
-
- Images:
- COMPONENT TYPE IMAGE
- fe fe docker.io/starrocks/fe-ubuntu:2.5.4
- be be docker.io/starrocks/be-ubuntu:2.5.4
-
- Show cluster events: kbcli cluster list-events -n demo mycluster
- ```
-
-
-
-
-
## 扩缩容
### 垂直扩缩容
@@ -945,8 +860,7 @@ KubeBlocks 支持重启集群中的所有 Pod。当数据库出现异常时,
| **终止策略** | **删除操作** |
|:----------------------|:-------------------------------------------------------------------------------------------|
| `DoNotTerminate` | `DoNotTerminate` 禁止删除操作。 |
-| `Halt` | `Halt` 删除集群资源(如 Pods、Services 等),但保留 PVC。停止其他运维操作的同时,保留了数据。但 `Halt` 策略在 v0.9.1 中已删除,设置为 `Halt` 的效果与 `DoNotTerminate` 相同。 |
-| `Delete` | `Delete` 在 `Halt` 的基础上,删除 PVC 及所有持久数据。 |
+| `Delete` | `Delete` 删除 Pod、服务、PVC 等集群资源,删除所有持久数据。 |
| `WipeOut` | `WipeOut` 删除所有集群资源,包括外部存储中的卷快照和备份。使用该策略将会删除全部数据,特别是在非生产环境,该策略将会带来不可逆的数据丢失。请谨慎使用。 |
执行以下命令查看终止策略。
diff --git a/i18n/zh-cn/user-docs/observability/monitor-database.md b/i18n/zh-cn/user-docs/observability/monitor-database.md
index b1db27cc57b..c1058cb9e82 100644
--- a/i18n/zh-cn/user-docs/observability/monitor-database.md
+++ b/i18n/zh-cn/user-docs/observability/monitor-database.md
@@ -113,47 +113,40 @@ import TabItem from '@theme/TabItem';
```yaml
cat <
+
+
+
+ 您也可以在 [KubeBlocks Addons 仓库](https://github.com/apecloud/kubeblocks-addons/blob/main/examples/apecloud-mysql/pod-monitor.yaml)中查看最新版本示例 YAML 文件。
```yaml
kubectl apply -f - <
+
+
+
+ 您也可以在 [KubeBlocks Addons 仓库](https://github.com/apecloud/kubeblocks-addons/blob/main/examples/mysql/pod-monitor.yaml)中查看最新版本示例 YAML 文件。
+
+ ```yaml
+ kubectl apply -f - <
+
+
+
+ 您也可以在 [KubeBlocks Addons 仓库](https://github.com/apecloud/kubeblocks-addons/blob/main/examples/postgresql/pod-monitor.yml)中查看最新版本示例 YAML 文件。
+
+ ```yaml
+ kubectl apply -f - <
+
+
+
+ 您也可以在 [KubeBlocks Addons 仓库](https://github.com/apecloud/kubeblocks-addons/blob/main/examples/redis/pod-monitor.yaml)中查看最新版本示例 YAML 文件。
+
+ ```yaml
+ kubectl apply -f - <
+
+
+
3. 连接 Grafana 大盘.
登录 Grafana 大盘,并导入大盘。