- Disable passive node:
pcs cluster standby node2
- Disable serviced pcs resource:
pcs resource disable serviced
On bootup, you will need to confirm the pcs resources started fine:
pcs status
Once confirmed, manually start serviced:
systemctl start serviced && journalctl -fu serviced
Note: Ensure you stop serviced,
systemctl stop serviced
before stopping any pcs-configured resource or before any server reboot or shutdown.
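A minimal sketch of that safe shutdown ordering, assuming the serviced-group resource group name used later in this document:
systemctl stop serviced                # stop serviced first
pcs resource disable serviced-group    # then stop the pcs-managed resources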
- Stop serviced:
- stop resmgr:
serviced service stop Zenoss.resmgr
- resource hosts:
systemctl stop serviced
- master:
pcs resource disable serviced
- zk ensemble hosts:
systemctl stop serviced
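A quick hedged check, before proceeding, that serviced is actually down on each host (systemctl is-active is standard systemd):
systemctl is-active serviced   # should print "inactive" (or "failed"), not "active", on every host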
- Note VirtualIP:
pcs resource show VirtualIP
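A small sketch for keeping a copy of that output for the later re-bind step; /root/virtualip-config.txt is an arbitrary scratch location:
pcs resource show VirtualIP | tee /root/virtualip-config.txt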
- Note disk layout, paying specific attention to the serviced thinpool & isvcs devices:
lsblk
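The bare lsblk works; a slightly richer sketch using the same flags this document uses later, plus mountpoints:
lsblk -p --output=NAME,SIZE,TYPE,MOUNTPOINT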
- Stop/Disable HA services:
pcs resource disable serviced-group
# Wait for pcs status to show everything as stopped
systemctl stop pcsd; systemctl stop corosync; systemctl stop pacemaker
systemctl disable pcsd; systemctl disable corosync; systemctl disable pacemaker
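One hedged way to watch for everything stopping, using standard watch:
watch -n5 pcs status   # Ctrl-C once all resources in the group show Stopped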
- Disable drbd:
- Move the isvcs data to a temporary location. The copy must include the hidden /opt/serviced/var/isvcs/.keys directory (cp -r of the parent directory does pick it up):
cp -r /opt/serviced/var/isvcs /root/isvcs-tmp
umount /opt/serviced/var/isvcs
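A hedged verification sketch, to run between the copy and the umount:
diff -r /opt/serviced/var/isvcs /root/isvcs-tmp   # no output means the copy (including .keys) is complete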
- Proceed with disabling drbd:
drbdadm down all
mv /etc/drbd.d/serviced-dfs.res ~/
vi /etc/lvm/lvm.conf
# comment out: filter = ["r|/dev/sdd|"]
# originally that line was a commented example of: filter = [ "a|.*/|" ]
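For reference, a sketch of what the relevant /etc/lvm/lvm.conf stanza should end up looking like (the rejected device will match whatever drbd used on this host):
devices {
    # DRBD-era reject filter, now commented out:
    # filter = ["r|/dev/sdd|"]
    # i.e. back to the stock commented-out example:
    # filter = [ "a|.*/|" ]
}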
- Recreate Storage:
- serviced thinpool. NOTE: replace /dev/sde with the appropriate device noted above:
wipefs -a /dev/sde
serviced-storage create-thin-pool serviced /dev/sde
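A hedged sanity check after the pool is created (standard lvs; a thin pool carries a "t" in its attributes):
lvs -o lv_name,lv_size,lv_attr serviced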
- /opt/serviced/var/isvcs volume. NOTE: replace /dev/sdd with the appropriate device noted above:
wipefs -a /dev/sdd
mkfs.xfs /dev/sdd
vi /etc/fstab
# make changes to the isvcs entry; UUID not the same, etc.
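A sketch of fetching the new UUID and what the updated fstab line might look like (mount options are an assumption; keep whatever the existing entry uses):
blkid /dev/sdd   # note the new UUID
# /etc/fstab, e.g.:
# UUID=<uuid-from-blkid>  /opt/serviced/var/isvcs  xfs  defaults  0 0
mount -a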
- Re-Bind virtual IP:
- Temporary, not persistent through boot:
ip address add 10.60.61.62/24 dev eth0
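One common persistent approach, sketched under the assumptions that NetworkManager manages the interface and the connection profile is also named eth0; adapt to the client's tooling:
nmcli connection modify eth0 +ipv4.addresses 10.60.61.62/24   # add as an additional address
nmcli connection up eth0                                      # re-apply the profile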
- Whatever long-term methodology is used by the client's supporting infra team. Internal ticket?
- Start/enable:
- zk ensemble hosts:
systemctl start serviced && journalctl -fu serviced
- master:
systemctl start serviced && journalctl -fu serviced
WAIT: for serviced to start on the master; verify it is working well on the master & ZK ensemble nodes.
- master:
systemctl enable serviced
- resource hosts:
systemctl start serviced && journalctl -fu serviced
WAIT: verify the nodes are working and show as up in the CC web UI.
- Start ResourceManager:
serviced service start Zenoss.resmgr
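A hedged post-start health check using the serviced CLI (a sketch; both subcommands are standard Control Center):
serviced host list                      # every host should be listed
serviced service status Zenoss.resmgr   # child services should converge to Started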
- Restore backup.
Paul Fielding: Here's what I would do to get off of DRBD storage (I've done this before when completely rebuilding someone's DRBD storage, but the same would apply to getting off of it). I'd still take a backup first, just to be safe, but it should be pretty safe if you're careful.
- Follow the same procedure outlined in Option #2, except Step 6, "Recreate Storage".
- Migrate Storage:
- serviced volume:
- Get a temporary swing volume (or the new volume the data will be migrated to), at least as big as the original serviced volume; it can be reclaimed afterwards.
- Add the swing volume to the serviced volume group:
- Identify the HA/DRBD volume:
$ lsblk -p --output=NAME,SIZE,TYPE
NAME                                                         SIZE TYPE
/dev/sde                                                      90G disk
└─/dev/drbd2                                                  90G disk
  ├─/dev/mapper/serviced-serviced--pool_tmeta                 96M lvm
  │ └─/dev/mapper/serviced-serviced--pool                   80.9G lvm
  │   └─/dev/mapper/docker-147:1-67-2Op2dvqhGfA6gb6El3QxV9    45G dm
  └─/dev/mapper/serviced-serviced--pool_tdata               80.9G lvm
    └─/dev/mapper/serviced-serviced--pool                   80.9G lvm
      └─/dev/mapper/docker-147:1-67-2Op2dvqhGfA6gb6El3QxV9    45G dm
- Identify the new volume:
$ lsblk -p --output=NAME,SIZE,TYPE
NAME      SIZE TYPE
/dev/sdf   95G disk
- Create the LVM physical volume:
pvcreate /dev/sdf
- Extend the serviced LVM volume group:
vgextend serviced /dev/sdf
- pvmove the extents to the swing volume. This takes time & is very IO intensive:
pvmove /dev/drbd2 /dev/sdf
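pvmove reports progress periodically; a hedged sketch of making the interval explicit and watching the source PV drain (standard LVM options):
pvmove -i 10 /dev/drbd2 /dev/sdf   # report progress every 10 seconds
pvs -o pv_name,pv_size,pv_free     # /dev/drbd2 should trend toward fully free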
- Remove the drbd volume from the volume group:
vgreduce serviced /dev/drbd2
- Remove that drbd volume from the drbd config: /etc/drbd.d/serviced-dfs.res
------------
DONE if only migrating data to a new volume. You can disconnect the old storage from the server.
NOTE: rebooting the server may change the device names of the storage.
------------
- Rebuild the volume without drbd:
wipefs -a /dev/sde
- Add it back into the serviced volume group (pvmove requires the target PV to be in the same volume group, so vgextend is needed here too):
pvcreate /dev/sde
vgextend serviced /dev/sde
- pvmove back to the original device:
pvmove /dev/sdf /dev/sde
- Remove the swing volume from the volume group (at this point the swing volume is /dev/sdf, not /dev/sde):
vgreduce serviced /dev/sdf
- Give back the swing volume: disconnect the storage.
- isvcs volume. A short outage will be needed:
- Stop Zenoss.resmgr & serviced.
- tar out /opt/serviced/var/isvcs:
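A sketch of the tar-out, assuming /root has room for the archive (path and filename are arbitrary):
tar -czf /root/isvcs-backup.tgz -C /opt/serviced/var isvcs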
- Blow away and recreate the filesystem without drbd. NOTE: replace /dev/sdd with the appropriate device:
wipefs -a /dev/sdd
mkfs.xfs /dev/sdd
vi /etc/fstab
# make changes to the isvcs entry; UUID not the same, etc.
mount -a
- tar back in isvcs and var/volumes:
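The matching restore, assuming the archive created above:
tar -xzf /root/isvcs-backup.tgz -C /opt/serviced/var   # restores the isvcs tree, including .keys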
- Bring serviced back up.
- Reference:
- Zenoss -> Resource Manager -> System Administration & Configuration