Merge remote-tracking branch 'upstream/master'
yang1666204 committed May 11, 2024
2 parents dd9ef6e + 5628f47 commit 78359d0
Showing 27 changed files with 1,297 additions and 287 deletions.
43 changes: 30 additions & 13 deletions README-CN.md
ob-operator depends on [cert-manager](https://cert-manager.io/docs/) for certificate management. cert-manager can be installed with the following command:

```shell
kubectl apply -f https://raw.githubusercontent.com/oceanbase/ob-operator/2.2.0_release/deploy/cert-manager.yaml
```

Storage for the OceanBase cluster in this example is provided by [local-path-provisioner](https://github.com/rancher/local-path-provisioner), which must be installed in advance; make sure its storage destination has enough disk space. If you plan to deploy in a production environment, other storage solutions are recommended; the storage solutions we have tested are listed in the Storage Compatibility section below.

### Deploy ob-operator

#### Using YAML configuration files

Deploy ob-operator in the Kubernetes cluster with the following command:

- Stable

```shell
kubectl apply -f https://raw.githubusercontent.com/oceanbase/ob-operator/2.2.0_release/deploy/operator.yaml
```

- Development

```shell
kubectl apply -f https://raw.githubusercontent.com/oceanbase/ob-operator/master/deploy/operator.yaml
```

#### Using Helm

ob-operator can also be installed with Helm:

```shell
helm install ob-operator ob-operator/ob-operator --namespace=oceanbase-system --
```
#### Using terraform

The configuration files required for deployment are located in the `deploy/terraform` directory of the repository.

1. Generate configuration variables:
   Before deploying, run the following commands to generate a `terraform.tfvars` file that records the configuration of your current Kubernetes cluster.

```shell
cd deploy/terraform
./generate_k8s_cluster_tfvars.sh
```

2. Initialize Terraform:
   This step ensures that Terraform obtains the plugins and modules needed to manage the configured resources. Initialize with the following command.

```
terraform init
```

3. Apply the configuration:
   Run the following command to deploy ob-operator.

```
terraform apply
```
```shell
kubectl get obclusters.oceanbase.oceanbase.com test

# desired output
NAME STATUS AGE
test running 6m2s
```
Deploy OceanBase Dashboard with the following commands:

```shell
helm repo add ob-operator https://oceanbase.github.io/ob-operator/
helm repo update ob-operator
helm install oceanbase-dashboard ob-operator/oceanbase-dashboard
```
![oceanbase-dashboard-install](./docsite/static/img/oceanbase-dashboard-install.jpg)

After OceanBase Dashboard is installed successfully, an admin user is created automatically with a random password. Check the password with the following command.

```
echo $(kubectl get -n default secret oceanbase-dashboard-user-credentials -o jsonpath='{.data.admin}' | base64 -d)
```

A Service of type NodePort is created by default. Check the service address with the following command, then open it in a browser.

```
kubectl get svc oceanbase-dashboard-oceanbase-dashboard
```

![oceanbase-dashboard-service](./docsite/static/img/oceanbase-dashboard-service.jpg)

Log in with the admin account and the password obtained above.
## Features

ob-operator provides management of OceanBase clusters, tenants, backup and recovery, and fault recovery. Specifically, it supports the following features:
- [x] Backup and recovery: periodically back up data to OSS or NFS destinations, and restore data from OSS or NFS
- [x] Physical standby: restore a standby tenant from backup, create an empty standby tenant, activate a standby tenant to primary, and perform primary-standby switchover
- [x] Fault recovery: single-node fault recovery, and cluster fault recovery when IP addresses are preserved
- [x] Dashboard (GUI): a graphical OceanBase cluster management tool based on ob-operator

## Storage Compatibility

We have tested the following storage solutions; the compatibility results are shown in the table below:

| Storage Solution       | Tested Version | Compatible | Notes                                               |
| ---------------------- | -------------- | ---------- | --------------------------------------------------- |
| local-path-provisioner | 0.0.23         |            | Recommended for development and testing             |
| Rook CephFS            | v1.6.7         |            | CephFS does not support the `fallocate` system call |
| Rook RBD (Block)       | v1.6.7         |            |                                                     |
| OpenEBS (cStor)        | v3.6.0         |            |                                                     |
| GlusterFS              | v1.2.0         |            | Requires a kernel version of at least 5.14          |
| Longhorn               | v1.6.0         |            |                                                     |
| JuiceFS                | v1.1.2         |            |                                                     |
| NFS                    | v5.5.0         |            | The cluster can bootstrap with NFS protocol >= 4.2, but tenant resources cannot be recycled |

## Supported OceanBase Versions

ob-operator supports OceanBase v4.x. Some features require a specific OceanBase version.

ob-operator is built with the [kubebuilder](https://book.kubebuilder.io/introduction) project, so its development and runtime environments are similar to kubebuilder's.

- Building ob-operator requires Go 1.20 or later.
- Running ob-operator requires a Kubernetes cluster and kubectl version 1.18 or later. We have verified that ob-operator works as expected on Kubernetes clusters of versions 1.23 through 1.28.
- If Docker is used as the cluster's container runtime, Docker 17.03 or later is required; our build and runtime environments use Docker 18.

## Documentation

51 changes: 35 additions & 16 deletions README.md
If you have trouble accessing the `quay.io` image registry, our mirrored cert-manager manifest can be applied instead:

```shell
kubectl apply -f https://raw.githubusercontent.com/oceanbase/ob-operator/2.2.0_release/deploy/cert-manager.yaml
```

Storage for the OceanBase cluster in this example relies on [local-path-provisioner](https://github.com/rancher/local-path-provisioner), which should be installed beforehand; confirm that the storage destination of local-path-provisioner has enough disk space. If you plan to deploy an OceanBase cluster in a production environment, other storage solutions are recommended. The storage solutions we have tested are listed in the [Storage Compatibility](#storage-compatibility) section.
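local-path-provisioner exposes its storage through a StorageClass, named `local-path` in its default installation. As a quick sanity check that the provisioner works before creating an OceanBase cluster, you can create a small PersistentVolumeClaim against it — the manifest below is an illustrative sketch, and the claim name and size are arbitrary:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-path-smoke-test
spec:
  accessModes:
    - ReadWriteOnce               # local-path volumes are node-local
  storageClassName: local-path    # default class installed by local-path-provisioner
  resources:
    requests:
      storage: 1Gi
```

local-path provisions volumes lazily, so the claim stays `Pending` until a pod mounts it; once a pod uses it and the PVC reaches `Bound`, the provisioner is working and the claim can be deleted.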

### Deploy ob-operator

#### Using YAML configuration file

You can deploy ob-operator in a Kubernetes cluster by executing the following command:

- Stable

```shell
kubectl apply -f https://raw.githubusercontent.com/oceanbase/ob-operator/2.2.0_release/deploy/operator.yaml
```

- Development

```shell
kubectl apply -f https://raw.githubusercontent.com/oceanbase/ob-operator/master/deploy/operator.yaml
```

#### Using Helm

ob-operator can also be installed with Helm:

```shell
helm install ob-operator ob-operator/ob-operator --namespace=oceanbase-system --
```
#### Using terraform

The required configuration files are located in the `deploy/terraform` directory of the repository.

1. Generate configuration variables:
   To begin, generate a `terraform.tfvars` file, which will hold the configuration specifics of your Kubernetes cluster. Use the following commands to create this file.

```shell
cd deploy/terraform
./generate_k8s_cluster_tfvars.sh
```

2. Initialize Terraform:
   This step ensures that Terraform has all the necessary plugins and modules to manage the resources. Use the following command to initialize the Terraform environment.

```
terraform init
```

3. Apply the configuration:
   The final step is to deploy ob-operator. Execute the following command and Terraform will begin the deployment process.

```
terraform apply
```
After deployment/installation is complete, you can use the following command to verify that ob-operator is running:
```shell
kubectl get pod -n oceanbase-system

# desired output
NAME READY STATUS RESTARTS AGE
oceanbase-controller-manager-86cfc8f7bf-4hfnj 2/2 Running 0 1m
```
It generally takes around 2 minutes to bootstrap a cluster. Execute the following command to check the cluster status:
```shell
kubectl get obclusters.oceanbase.oceanbase.com test

# desired output
NAME STATUS AGE
test running 6m2s
```
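The `test` cluster above is described by an OBCluster custom resource (the `obclusters.oceanbase.oceanbase.com` CRD queried above). A minimal manifest looks roughly like the sketch below — the field names and values are assumptions based on the project's published examples and may differ between ob-operator versions, so consult the example manifest shipped in the repository's `deploy` directory for the authoritative schema:

```yaml
# Sketch of an OBCluster resource; illustrative, not authoritative.
apiVersion: oceanbase.oceanbase.com/v1alpha1
kind: OBCluster
metadata:
  name: test
  namespace: default
spec:
  clusterName: obcluster
  clusterId: 1
  userSecrets:
    root: root-password          # name of a Secret holding the root password (assumed)
  topology:
    - zone: zone1
      replica: 1
  observer:
    image: oceanbase/oceanbase-cloud-native:4.2.1.1-101010012023111012
    resource:
      cpu: 2
      memory: 10Gi
    storages:
      dataStorage:
        storageClass: local-path # see the Storage Compatibility section
        size: 50Gi
      redoLogStorage:
        storageClass: local-path
        size: 50Gi
      logStorage:
        storageClass: local-path
        size: 20Gi
```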
You can connect to the deployed OceanBase cluster with a MySQL client:

```shell
mysql -h{POD_IP} -P2881 -uroot -proot_password oceanbase -A -c
```

### OceanBase Dashboard

We are excited to unveil our innovative OceanBase Kubernetes Dashboard, a pioneering tool designed to enhance your experience with managing and monitoring OceanBase clusters on Kubernetes. We are proud to offer this amazing tool to our users and will actively work on new features and enhancements for future updates.

Deploying OceanBase Dashboard is simple: just run the following commands.

```
helm repo add ob-operator https://oceanbase.github.io/ob-operator/
helm repo update ob-operator
helm install oceanbase-dashboard ob-operator/oceanbase-dashboard
```
![oceanbase-dashboard-install](./docsite/static/img/oceanbase-dashboard-install.jpg)

After OceanBase Dashboard is installed successfully, a default user `admin` is created with a random password. You can check the password using the command printed after installation:

```
echo $(kubectl get -n default secret oceanbase-dashboard-user-credentials -o jsonpath='{.data.admin}' | base64 -d)
```
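The pipeline above works because Kubernetes stores Secret values base64-encoded: `jsonpath` extracts the encoded `admin` field and `base64 -d` decodes it. The round trip can be illustrated without a cluster — the password value below is made up:

```shell
# A hypothetical password as it would appear in the Secret's .data.admin field
password='s3cr3t-Adm1n-Pw'
encoded=$(printf '%s' "$password" | base64)   # what the Kubernetes API returns
decoded=$(printf '%s' "$encoded" | base64 -d) # what the kubectl | base64 -d pipeline prints
echo "$decoded"                               # prints the original password
```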

A Service of type NodePort is created by default. You can check the address and port with the following command, then open it in a browser:

```
kubectl get svc oceanbase-dashboard-oceanbase-dashboard
```
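The port to open in the browser is the service's `nodePort`. If you prefer to extract it programmatically, the structure returned by `kubectl get svc ... -o json` can be parsed as below — the JSON here is a trimmed, hypothetical example, so the real port number will differ:

```python
import json

# Trimmed, hypothetical output of:
#   kubectl get svc oceanbase-dashboard-oceanbase-dashboard -o json
svc_json = """
{
  "kind": "Service",
  "spec": {
    "type": "NodePort",
    "ports": [
      {"name": "http", "port": 80, "targetPort": 8080, "nodePort": 30080}
    ]
  }
}
"""

svc = json.loads(svc_json)
# The dashboard is reachable at http://<any-node-ip>:<nodePort>
node_port = svc["spec"]["ports"][0]["nodePort"]
print(node_port)  # → 30080
```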

![oceanbase-dashboard-service](./docsite/static/img/oceanbase-dashboard-service.jpg)

Log in with the `admin` user and the password obtained above.
Expand All @@ -150,13 +159,12 @@ Login with admin user and password

## Project Architecture

ob-operator is built on top of kubebuilder and provides control and management of OceanBase clusters and related applications through a unified resource manager interface, a global task manager instance, and a task flow mechanism for handling long-running tasks. The architecture diagram is approximately as follows:

![ob-operator Architecture](./docsite/static/img/ob-operator-arch.png)

For more detailed information about the architecture, please refer to the [Architecture Document](https://oceanbase.github.io/ob-operator/docs/developer/arch).


## Features

It provides various functionalities for managing OceanBase clusters, tenants, backup and recovery, and fault recovery. Specifically, ob-operator supports the following features:
- [x] Backup and Recovery: Periodically backup data to OSS or NFS destinations, restore data from OSS or NFS.
- [x] Physical Standby: Restore standby tenant from backup, create empty standby tenant, activate standby tenant to primary, primary-standby switchover.
- [x] Fault Recovery: Single node fault recovery, cluster-wide fault recovery with IP preservation.
- [x] Dashboard(GUI): A web-based graphical management tool for OceanBase clusters based on ob-operator.

## Storage Compatibility

We have tested ob-operator with the following storage solutions:

| Storage Solution | Tested Version | Compatibility | Notes |
| ---------------------- | -------------- | ------------- | -------------------------------------------- |
| local-path-provisioner | 0.0.23 || Recommended for development and testing |
| Rook CephFS            | v1.6.7         |               | CephFS does not support the `fallocate` system call |
| Rook RBD (Block) | v1.6.7 || |
| OpenEBS (cStor) | v3.6.0 || |
| GlusterFS | v1.2.0 || Requires kernel version >= 5.14 |
| Longhorn | v1.6.0 || |
| JuiceFS | v1.1.2 || |
| NFS                    | v5.5.0         |               | The cluster can bootstrap with NFS protocol >= 4.2, but tenant resources cannot be recycled. |

## Supported OceanBase Versions

OceanBase v3.x versions are currently not supported by ob-operator.

ob-operator is built using the [kubebuilder](https://book.kubebuilder.io/introduction) project, so the development and runtime environment are similar to it.

- To build ob-operator: Go version 1.20 or higher is required.
- To run ob-operator: a Kubernetes cluster and kubectl version 1.18 or higher are required. We have verified ob-operator's functionality on Kubernetes clusters of versions 1.23 through 1.28.
- If using Docker as the container runtime for the cluster, Docker version 17.03 or higher is required. We tested building and running ob-operator with Docker 18.

## Documentation

