Some options for removing HA from a Control Center (serviced) deployment:


1. Just disable PCS from starting serviced

  1. Disable passive node

    pcs cluster standby node2
    
  2. Disable serviced pcs resource

    pcs resource disable serviced
    

    On boot, confirm that the pcs resources started cleanly: pcs status. Once confirmed, manually start serviced: systemctl start serviced && journalctl -fu serviced.

    Note: Ensure you stop serviced (systemctl stop serviced) before stopping any pcs-configured resource and before any server reboot or shutdown. See the consolidated sketch below.
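
    A minimal consolidated sketch of the resulting routine (commands and unit names as used above):

      # before any reboot or shutdown of this node
      systemctl stop serviced

      # after boot: confirm the cluster resources came up, then start serviced by hand
      pcs status
      systemctl start serviced && journalctl -fu serviced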


2. Remove HA: recreate storage, restore from backup

  1. Stop serviced

    1. stop resmgr: serviced service stop Zenoss.resmgr
    2. resource hosts: systemctl stop serviced
    3. master: pcs resource disable serviced
    4. zk ensemble hosts: systemctl stop serviced
  2. Note VirtualIP: pcs resource show VirtualIP

  3. Note the disk layout, paying specific attention to the serviced thinpool & isvcs devices: lsblk (example below)
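
     For example, to capture device names, sizes and mount points in one view (the same lsblk columns used in Option 3 below, plus MOUNTPOINT):

      lsblk -p --output=NAME,SIZE,TYPE,MOUNTPOINT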

  4. Stop/Disable HA services:

     pcs resource disable serviced-group
     # Wait for pcs status to show everything as stopped
     systemctl stop pcsd; systemctl stop corosync; systemctl stop pacemaker
     systemctl disable pcsd; systemctl disable corosync; systemctl disable pacemaker
    
  5. Disable drbd

    1. Move the isvcs data to a temporary location (does the /opt/serviced/var/isvcs/.keys directory need special handling?)

      cp -r /opt/serviced/var/isvcs /root/isvcs-tmp
      umount /opt/serviced/var/isvcs
      
    2. proceed with disabling drbd

      drbdadm down all
      mv /etc/drbd.d/serviced-dfs.res ~/
      vi /etc/lvm/lvm.conf
      # comment out the filter line that was added, e.g.: filter = ["r|/dev/sdd|"]
      # (in a stock lvm.conf that line is only a commented-out example: filter = [ "a|.*/|" ])
      
  6. Recreate Storage:

    1. serviced thinpool: NOTE: replace /dev/sde with the appropriate device noted above

      wipefs -a /dev/sde
      serviced-storage create-thin-pool serviced /dev/sde
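      lvs serviced   # sanity check: the new thin pool LV should be listed in the serviced VG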
      
    2. /opt/serviced/var/isvcs volume NOTE: replace /dev/sdd with the appropriate device noted above.

      wipefs -a /dev/sdd
      mkfs.xfs /dev/sdd
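      blkid /dev/sdd   # note the new filesystem UUID for the fstab entry below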
      vi /etc/fstab  # update the isvcs entry; the filesystem UUID has changed
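      mount /opt/serviced/var/isvcs
      # if the isvcs data saved in step 5 is still needed (rather than relying on the backup
      # restore in step 9), copy it back, e.g.: cp -a /root/isvcs-tmp/. /opt/serviced/var/isvcs/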
      
  7. Re-Bind virtual IP:

    1. Temporary, not persistent through boot

      ip address add 10.60.61.62/24 dev eth0
      
    2. For a persistent binding, use whatever long-term methodology the client's supporting infra team prefers (internal ticket?). One possible approach is sketched below.
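
     A minimal sketch of one persistent option, assuming the interface is managed by NetworkManager and the connection profile is named eth0 (adjust the connection name and address to the values noted in step 2):

      nmcli connection modify eth0 +ipv4.addresses 10.60.61.62/24
      nmcli connection up eth0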

  8. Start & enable serviced:

    1. zk ensemble hosts: systemctl start serviced && journalctl -fu serviced
    2. master: systemctl start serviced && journalctl -fu serviced. WAIT for serviced to start on the master and verify it is working well on the master & ZK ensemble nodes
    3. master: systemctl enable serviced
    4. resource hosts: systemctl start serviced && journalctl -fu serviced. WAIT and verify the nodes are working and show as up in the CC web UI
    5. Start ResourceManager: serviced service start Zenoss.resmgr
  9. Restore backup.
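
     For example (the backup filename is a placeholder; use the backup taken before this procedure, by default stored under /opt/serviced/var/backups):

      serviced restore /opt/serviced/var/backups/<backup-file>.tgz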


3. Migrate storage to non-DRBD volumes

Paul Fielding: Here's what I would do to get off of DRBD storage (I've done this before when completely rebuilding someone's DRBD storage, but the same approach applies to getting off of it entirely). I'd still take a backup first, just to be safe, but this should be pretty safe if you're careful.

  1. Follow the same procedure outlined in Option #2, except skip Step 6, "Recreate Storage"

  2. Migrate Storage:

    1. serviced volume

      • get a temporary swing volume (or the new volume the data will be migrated to), at least as big as the original serviced volume; it can be reclaimed afterward

      • add swing volume to serviced volume group

        1. Identify HA/DRBD volume

          $ lsblk -p --output=NAME,SIZE,TYPE
          NAME                                                        SIZE TYPE
          /dev/sde                                                     90G disk
          └─/dev/drbd2                                                 90G disk
            ├─/dev/mapper/serviced-serviced--pool_tmeta                96M lvm
            │ └─/dev/mapper/serviced-serviced--pool                  80.9G lvm
            │   └─/dev/mapper/docker-147:1-67-2Op2dvqhGfA6gb6El3QxV9   45G dm
            └─/dev/mapper/serviced-serviced--pool_tdata              80.9G lvm
              └─/dev/mapper/serviced-serviced--pool                  80.9G lvm
                └─/dev/mapper/docker-147:1-67-2Op2dvqhGfA6gb6El3QxV9   45G dm
          
        2. Identify new volume

          $ lsblk -p --output=NAME,SIZE,TYPE
          NAME     SIZE TYPE
          /dev/sdf  95G disk
          
        3. Create LVM physical Volume; pvcreate /dev/sdf

        4. Extend the serviced lvm volume group; vgextend serviced /dev/sdf

      • pvmove the extents to the swing volume; this takes time & is very IO intensive

        pvmove /dev/drbd2 /dev/sdf
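        pvs -o pv_name,pv_size,pv_used   # sanity check after the move: no used extents should remain on /dev/drbd2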
        
      • remove drbd volume from volume group; vgreduce serviced /dev/drbd2

      • remove that drbd volume from drbd config: /etc/drbd.d/serviced-dfs.res

      ------------ DONE if only migrating data to a new volume. You can disconnect the old storage from the server. NOTE: rebooting the server may change the device names of the storage. ------------

      • rebuild the original volume without drbd - ? - wipefs -a /dev/sde
      • add it back into the serviced volume group - ? - pvcreate /dev/sde && vgextend serviced /dev/sde
      • pvmove back to the original device - ? - pvmove /dev/sdf /dev/sde
      • remove the swing volume from the volume group - ? - vgreduce serviced /dev/sdf
      • give back the swing volume - ? - disconnect storage
    2. isvcs volume. A short outage will be needed:

      • stop Zenoss.resmgr & serviced

      • tar out /opt/serviced/var/isvcs (see the sketch after this list)

      • blow away and recreate the isvcs filesystem without drbd. NOTE: replace /dev/sdd with the appropriate device

        wipefs -a /dev/sdd
        mkfs.xfs /dev/sdd
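        blkid /dev/sdd   # note the new filesystem UUID for the fstab entry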
        vi /etc/fstab  # update the isvcs entry; the filesystem UUID has changed
        mount -a
        
      • tar back in isvcs and var/volumes

      • bring serviced back up.
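
      A minimal sketch of the tar out / tar in step, assuming /root/isvcs-migrate.tgz as the temporary archive (the archive path is an assumption; adjust as needed):

        # before wiping the device
        serviced service stop Zenoss.resmgr
        systemctl stop serviced
        tar -C /opt/serviced/var -czf /root/isvcs-migrate.tgz isvcs
        umount /opt/serviced/var/isvcs

        # after the recreated filesystem is mounted (mount -a above)
        tar -C /opt/serviced/var -xzf /root/isvcs-migrate.tgz
        systemctl start serviced && journalctl -fu serviced
        serviced service start Zenoss.resmgr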
