Add installation guide for the Octopus release #3

Open · wants to merge 44 commits into base: master

Commits (44):
- 1718740 Update README.md (luk3k0, Mar 12, 2020)
- 20b3dde Create off-on-cluster.md (luk3k0, Mar 12, 2020)
- a48ec80 Update off-on-cluster.md (luk3k0, Mar 12, 2020)
- b4a13cd Update off-on-cluster.md (luk3k0, Mar 13, 2020)
- 49697e2 Update ceph-nautilus.md (luk3k0, Mar 13, 2020)
- ef44bdf Update off-on-cluster.md (luk3k0, Mar 14, 2020)
- fbf8bad Update off-on-cluster.md (luk3k0, Mar 14, 2020)
- e9dbb2a Update off-on-cluster.md (luk3k0, Mar 14, 2020)
- ffc8e0d Update ceph-nautilus.md (luk3k0, Mar 14, 2020)
- 957e128 Rename ceph-nautilus.md to ceph-nautilus-centos.md (luk3k0, Mar 14, 2020)
- d554a36 Create ceph-nautilus.md (luk3k0, Mar 14, 2020)
- 9edff22 Update ceph-nautilus.md (luk3k0, Mar 14, 2020)
- 8c4e6f2 Create ceph-nautilus-ubuntu.md (luk3k0, Mar 14, 2020)
- fbb1c1d Update README.md (luk3k0, Mar 15, 2020)
- 839ddcf Create add-osd.md (luk3k0, Mar 15, 2020)
- 8b531ba Update add-osd.md (luk3k0, Mar 15, 2020)
- 3eaa15d Update README.md (luk3k0, Mar 15, 2020)
- 1a86674 Create del-osd.md (luk3k0, Mar 15, 2020)
- 0db9618 Update del-osd.md (luk3k0, Mar 15, 2020)
- aff1fcf Update ceph-nautilus-ubuntu.md (luk3k0, Mar 16, 2020)
- 5d942c5 Update ceph-nautilus-ubuntu.md (luk3k0, Mar 16, 2020)
- 75c2942 Create enable-rgw.md (luk3k0, Mar 16, 2020)
- 27ad6fc Update enable-rgw.md (luk3k0, Mar 16, 2020)
- b687524 Create bucket-err.md (luk3k0, Mar 16, 2020)
- 74fe17d Update README.md (luk3k0, Mar 16, 2020)
- 1a6f51a Update note.md (luk3k0, Mar 16, 2020)
- d1ff492 Update ceph-nautilus-ubuntu.md (luk3k0, Mar 17, 2020)
- aacaba0 Update ceph-nautilus-ubuntu.md (luk3k0, Mar 17, 2020)
- fdb95a4 Update ceph-nautilus-ubuntu.md (luk3k0, Mar 17, 2020)
- 37b9d15 Update README.md (luk3k0, Mar 17, 2020)
- d0238d7 Create enable-rgw.md (luk3k0, Mar 17, 2020)
- 4c5d272 Update ceph-nautilus-ubuntu.md (luk3k0, Mar 19, 2020)
- 66fb4da Update ceph-nautilus-ubuntu.md (luk3k0, Mar 31, 2020)
- 4fce6f2 Create ceph-octopus.md (luk3k0, Mar 31, 2020)
- ca03e1a Update ceph-octopus.md (luk3k0, Apr 1, 2020)
- 0f99c79 Update ceph-octopus.md (luk3k0, Apr 1, 2020)
- 9c62ccd Add files via upload (luk3k0, Apr 1, 2020)
- 4630bab Update ceph-octopus.md (luk3k0, Apr 1, 2020)
- 9b3523a Update README.md (luk3k0, Apr 1, 2020)
- 7cacd09 Update ceph-octopus.md (luk3k0, Apr 1, 2020)
- c4798c4 Update enable-rgw.md (luk3k0, Apr 1, 2020)
- 9721a17 Update enable-rgw.md (luk3k0, Apr 1, 2020)
- 81ed95a Update enable-rgw.md (luk3k0, Apr 1, 2020)
- 9f2926b Update off-on-cluster.md (luk3k0, Apr 1, 2020)
25 changes: 24 additions & 1 deletion README.md
@@ -34,6 +34,8 @@

[Install Ceph Nautilus](docs/setup/ceph-nautilus.md)

[Install Ceph Octopus](docs/setup/ceph-octopus.md)

[Install Ceph-RadosGW HA, Nautilus release](docs/setup/ceph-radosgw.md)

# Integration documents
@@ -50,6 +52,25 @@

# Operations documents


## Add an OSD

[Add an OSD](docs/operating/add-osd.md)

## Update an OSD

## Remove an OSD

[Remove an OSD](docs/operating/del-osd.md)

## Enable RGW

[Enable RGW](docs/operating/enable-rgw.md)

## Shut down/start up the cluster

[Shut down/start up the cluster](docs/operating/off-on-cluster.md)

## Operations cheat sheet

[Ceph Cheat sheet](docs/operating/ceph-cheat-sheet.md)
@@ -60,6 +81,8 @@

# Benchmark & Troubleshooting

- [Bucket creation error](docs/operating/bucket-err.md)

- [Failed Ceph node](docs/operating/ceph-hardware-crash.md)

- [Operational case notes](docs/operating/note.md)
63 changes: 63 additions & 0 deletions docs/operating/add-osd.md
@@ -0,0 +1,63 @@
## Add a new OSD to an existing Ceph cluster on Ubuntu 18.04

## Create the "cephuser" user on ceph4
```sh
sudo useradd -m -s /bin/bash cephuser
sudo passwd cephuser
echo "cephuser ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/cephuser
sudo chmod 0440 /etc/sudoers.d/cephuser
sudo sed -i 's/Defaults requiretty/#Defaults requiretty/g' /etc/sudoers
```
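
A quick sanity check (a minimal sketch, run on ceph4) that passwordless sudo works for the new user:

```sh
# Should print "ok" without prompting for cephuser's password
sudo -u cephuser sudo -n true && echo "ok"
```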
## All commands below are run as "cephuser", on node ceph1
- Add the new node (ceph4) to the hosts file:
```
/etc/hosts
```
- The clustershell (clush) group file (example entries for both files are sketched after this list):
```
/etc/clustershell/groups
```
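
A hedged sketch of what the entries might look like (the IP address 10.0.0.14 and the group name "all" are assumptions for illustration; adjust to your environment):

```sh
# /etc/hosts: map the new node's hostname to its IP
echo "10.0.0.14 ceph4" | sudo tee -a /etc/hosts

# /etc/clustershell/groups: append ceph4 to the group clush targets
sudo sed -i 's/^all:.*/& ceph4/' /etc/clustershell/groups
```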
## Copy the SSH key from ceph1 to ceph4
```sh
ssh-copy-id ceph4
```
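
This assumes "cephuser" on ceph1 already has an SSH key pair from the initial cluster install; if not, a minimal sketch to create and push one:

```sh
# Generate a passphrase-less key, then install it on ceph4
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
ssh-copy-id cephuser@ceph4
```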
## Install python and ntp
```sh
ssh ceph4 "sudo apt install python ntp -y; timedatectl"
```
## Change the hostname (if needed)
```sh
ssh ceph4 "hostnamectl set-hostname ceph4"
```
## Install cmdlog
```sh
ssh ceph4 "curl -Lso- https://raw.githubusercontent.com/nhanhoadocs/scripts/master/Utilities/cmdlog.sh | sudo bash"
```

## Add the repository key and package source
```sh
ssh ceph4 "wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -"
ssh ceph4 "echo deb https://download.ceph.com/debian-nautilus/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list"
ssh ceph4 "sudo apt update"
```

## Install Ceph on node ceph4
```sh
ceph-deploy install --release nautilus ceph4
```
## Check and compare versions across nodes
```sh
clush -a "ceph --version"
```

## Zap the disks and create OSDs on ceph4
```sh
ceph-deploy disk zap ceph4 /dev/vdb
ceph-deploy disk zap ceph4 /dev/vdc
ceph-deploy osd create --data /dev/vdb ceph4
ceph-deploy osd create --data /dev/vdc ceph4
```

## Verify
```sh
sudo ceph --status
sudo ceph osd tree
```
20 changes: 20 additions & 0 deletions docs/operating/bucket-err.md
@@ -0,0 +1,20 @@
```
ceph version 14.2.8 (2d095e947a02261ce61424021bb43bd3022d35cb) nautilus (stable)
```
## Error shown when creating a new bucket


```
500 - Internal Server Error
RGW REST API failed request with status code 416 '{"Code":"InvalidRange","BucketName":"1111111111","RequestId":"tx000000000000000000126-005e6f7c7e-fc18-default","HostId":"fc18-default-default"}'
```
## Checking "/var/log/ceph/ceph-client.rgw.ceph1.log" shows:

```
rgw_init_ioctx ERROR: librados::Rados::pool_create returned (34) Numerical result out of range (this can be due to a pool or placement group misconfiguration, e.g. pg_num < pgp_num or mon_max_pg_per_osd exceeded)
```
- Cause: most likely too many PGs per OSD.
- Fix: adjust the PG counts and delete unneeded pools; a sketch of the relevant commands follows.
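
A hedged sketch of how to inspect and clean up (the pool name "old-pool" is an assumption for illustration; deleting a pool is destructive, so double-check first):

```sh
# How many PGs does each OSD currently hold?
sudo ceph osd df tree

# Which pools exist, and with what pg_num?
sudo ceph osd pool ls detail

# Remove a pool that is no longer needed
# (requires mon_allow_pool_delete=true; see note.md)
sudo ceph osd pool delete old-pool old-pool --yes-i-really-really-mean-it
```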

Reference:
https://ceph-users.ceph.narkive.com/zwPBOjFr/luminous-rgw-errors-at-start
21 changes: 21 additions & 0 deletions docs/operating/del-osd.md
@@ -0,0 +1,21 @@
## Check the cluster status

```sh
ceph osd tree
```
## Remove the OSD (example: the OSD with id 6, on ceph4)
```sh
sudo ceph osd out osd.6
ssh ceph4 "sudo systemctl stop ceph-osd@6"
ssh ceph4 "sudo umount /var/lib/ceph/osd/ceph-6"
sudo ceph osd crush remove osd.6
sudo ceph auth del osd.6
sudo ceph osd rm osd.6
ceph-deploy purge ceph4
```
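
Between `ceph osd out osd.6` and stopping the daemon, it is safer to let the cluster drain data off the OSD first; a minimal sketch of that wait:

```sh
# Watch until all PGs report active+clean before stopping the OSD daemon
watch -n 10 "sudo ceph -s"
```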



## References
https://www.virtualtothecore.com/adventures-with-ceph-storage-part-7-add-a-node-and-expand-the-cluster-storage/
https://medium.com/@george.shuklin/how-to-remove-osd-from-ceph-cluster-b4c37cc0ec87
43 changes: 43 additions & 0 deletions docs/operating/enable-rgw.md
@@ -0,0 +1,43 @@
## Enable Object Gateway Management (OGM)

To use OGM we need to supply the credentials of an account that has the "system" flag.

Syntax for creating a user with the "system" flag:

```sh
sudo radosgw-admin user create --uid=<user_id> --display-name=<display_name> --system
```
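
For example (the uid "dashboard-admin" and the display name are assumptions for illustration):

```sh
sudo radosgw-admin user create --uid=dashboard-admin --display-name="Dashboard Admin" --system
```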
Save the access_key and secret_key from the output.

Command to use when you do not remember a user's access_key and secret_key:

```sh
sudo radosgw-admin user info --uid=<user_id>
```

Commands to grant the user dashboard access:

```sh
sudo ceph dashboard set-rgw-api-access-key <access_key>
sudo ceph dashboard set-rgw-api-secret-key <secret_key>
```

If you use a self-signed certificate you may run into certificate errors, so disable SSL verification for the RGW API:

```sh
sudo ceph dashboard set-rgw-api-ssl-verify False
```
Disable and re-enable the dashboard:

```sh
sudo ceph mgr module disable dashboard
sudo ceph mgr module enable dashboard
```
Deploy RGW on all three Ceph nodes:

```sh
cd ceph-deploy
ceph-deploy install --rgw ceph01 ceph02 ceph03
ceph-deploy rgw create ceph01 ceph02 ceph03
```
Ceph Object Gateway runs on Civetweb (embedded in the ceph-radosgw daemon) instead of Apache and FastCGI. RGW's Civetweb listens on port 7480 by default.
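
A quick check that RGW is answering (a minimal sketch; an anonymous request to the Civetweb port should return an S3 "ListAllMyBucketsResult" XML document):

```sh
curl -s http://ceph01:7480
```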
9 changes: 8 additions & 1 deletion docs/operating/note.md
@@ -1,3 +1,10 @@
# Notes on Ceph operational cases

[Using cached WAL & DB on an SSD for OSDs](bluestore-blockwall.md)

# Enable/disable mon_allow_pool_delete
```sh
ceph config set mon mon_allow_pool_delete true
```
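Remember to turn the option back off once the pool has been deleted; on releases that predate `ceph config`, injectargs is the runtime alternative:

```sh
ceph config set mon mon_allow_pool_delete false

# Older alternative, applied at runtime to all monitors
ceph tell mon.* injectargs --mon_allow_pool_delete=true
```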
Reference:
https://stackoverflow.com/a/58750208/4435643
49 changes: 49 additions & 0 deletions docs/operating/off-on-cluster.md
@@ -0,0 +1,49 @@
## How to shut down and start up the cluster


# Shutdown (do not proceed to the next step unless the previous steps have completed successfully)
1. Stop all client use of RBD images and the Rados Gateway.
2. Make sure the cluster is in a "healthy" state.
3. Set the noout, norecover, norebalance, nobackfill, nodown, and pause flags.

```sh
# Run on ceph01 (management node)

ceph osd set noout
ceph osd set norecover
ceph osd set norebalance
ceph osd set nobackfill
ceph osd set nodown
ceph osd set pause
```
> The status will then show:
```
OSDMAP_FLAGS: pauserd,pausewr,nodown,noout,nobackfill,norebalance,norecover flag(s) set
```
4. Shut down the OSD nodes one at a time.
5. Shut down the monitor nodes one at a time.
6. Shut down the admin node.


# Startup

1. Power on the admin node.
2. Power on the monitor nodes.
3. Power on the OSD nodes.
4. Wait until all nodes are up, then verify the nodes can reach one another.
5. Unset the noout, norecover, norebalance, nobackfill, nodown, and pause flags.

```sh
# Run on ceph1 (management node)
ceph osd unset noout
ceph osd unset norecover
ceph osd unset norebalance
ceph osd unset nobackfill
ceph osd unset nodown
ceph osd unset pause
```
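
Since the same six flags are set and unset every time, a compact loop variant (a sketch equivalent to the commands above; swap `unset` for `set` during shutdown):

```sh
for flag in noout norecover norebalance nobackfill nodown pause; do
    ceph osd unset "$flag"
done
```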

6. Check the cluster status and reconnect the clients.

# References:
https://ceph.io/planet/how-to-do-a-ceph-cluster-maintenance-shutdown/