Wednesday, March 22, 2017

Learning to Set Up High-Availability Services on CentOS7/RHEL7 (Part 4)

Learning objectives:
  • On clusterX, manage the fence platform and control how the nodes operate!
  • In practice, we use fence-virtd to control the VMs on the hypervisor!
Fence Device Server installation procedure:
  1. On the physical host, install the fence device server packages:
    [root@fence ~]# yum -y install fence-virtd fence-virtd-libvirt fence-virtd-multicast
    
  2. On the physical host, generate fence_xvm.key:
    [root@fence ~]# mkdir /etc/cluster
    [root@fence ~]# cd /etc/cluster
    [root@fence ~]# dd if=/dev/urandom of=/etc/cluster/fence_xvm.key bs=4k count=1
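
The dd command above produces a 4096-byte shared key (bs=4k, count=1). A quick sanity check is to compare the file size; the sketch below does the same generation into a temporary file so it can run anywhere:

```shell
# Sanity-check sketch: generate a key the same way as above (into a
# temporary file so this runs anywhere) and confirm it is 4096 bytes.
key=$(mktemp)
dd if=/dev/urandom of="$key" bs=4k count=1 2>/dev/null
stat -c %s "$key"    # prints 4096 (4k * 1 block)
rm -f "$key"
```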
    
  3. Copy fence_xvm.key to every VM:
    [root@fence ~]# ssh nodea
    [root@nodea ~]# mkdir /etc/cluster ; exit
    [root@fence ~]# scp /etc/cluster/fence_xvm.key nodea:/etc/cluster
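
With several nodes, the mkdir/scp pair above is usually repeated per node in a loop. A sketch of that loop is below; the node names nodea..noded are assumptions, and local temporary directories stand in for each node's /etc/cluster so the sketch runs without ssh (in practice you would use the ssh/scp commands shown above):

```shell
# Sketch: distribute fence_xvm.key to every node in a loop.
# Node names (nodea..noded) are hypothetical; local temp directories
# stand in for each node's /etc/cluster so this runs without ssh.
work=$(mktemp -d)
dd if=/dev/urandom of="$work/fence_xvm.key" bs=4k count=1 2>/dev/null
for node in nodea nodeb nodec noded; do
    mkdir -p "$work/$node/etc/cluster"                   # ssh $node mkdir /etc/cluster
    cp "$work/fence_xvm.key" "$work/$node/etc/cluster/"  # scp ... $node:/etc/cluster
done
ls "$work/nodea/etc/cluster"    # fence_xvm.key
rm -rf "$work"
```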
    
  4. Use the interactive tool to build the fence_virtd configuration file /etc/fence_virt.conf:
    [root@fence ~]# fence_virtd -c
    (Pay attention to the settings below; keep the defaults for everything else:)
    interface: [br0] (the network interface used to communicate with the VMs)
    key path: [/etc/cluster/fence_xvm.key]
    
  5. Check the contents of /etc/fence_virt.conf:
    [root@fence ~]# vim /etc/fence_virt.conf
    backends {
            libvirt {
                    uri = "qemu:///system";
            }
    
    }
    
    listeners {
            multicast {
                    port = "1229";
                    family = "ipv4";
                    interface = "br0";
                    address = "225.0.0.12";
                    key_file = "/etc/cluster/fence_xvm.key";
            }
    
    }
    
    fence_virtd {
            module_path = "/usr/lib64/fence-virt";
            backend = "libvirt";
            listener = "multicast";
    }
    
    
  6. Enable and start the fence_virtd service:
    [root@fence ~]# systemctl enable fence_virtd
    [root@fence ~]# systemctl start fence_virtd
    
  7. Configure the firewall and SELinux:
    [root@fence ~]# firewall-cmd --permanent --add-port=1229/tcp
    [root@fence ~]# firewall-cmd --permanent --add-port=1229/udp
    [root@fence ~]# firewall-cmd --reload
    [root@fence ~]# setsebool -P fenced_can_network_connect on
    [root@fence ~]# setsebool -P fenced_can_ssh on
    
Installation procedure for each node (VM node):
  1. Install the fence-virt packages:
    [root@nodea ~]# yum -y install fence-virt fence-agents-all
    
  2. Configure the firewall and SELinux:
    [root@nodea ~]# firewall-cmd --permanent --add-port=1229/tcp
    [root@nodea ~]# firewall-cmd --permanent --add-port=1229/udp
    [root@nodea ~]# firewall-cmd --reload
    [root@nodea ~]# setsebool -P fenced_can_network_connect on
    [root@nodea ~]# setsebool -P fenced_can_ssh on
    
  3. Test that the status of other nodes can be queried (the other nodes must complete the same steps as well):
    [root@nodea ~]# fence_xvm -o list
    [root@nodea ~]# fence_xvm -a 225.0.0.12 -k /etc/cluster/fence_xvm.key -H guest2 -o status
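
fence_xvm -o list prints one line per domain; the column layout assumed here (domain name, UUID, power state) is typical but should be checked against your own output. A small filter that shows only powered-on domains, run against a canned sample so it works offline (the guest names and UUIDs below are hypothetical):

```shell
# Filter sketch: list only powered-on domains from 'fence_xvm -o list'
# output. Sample lines are hypothetical; the assumed column layout is:
# domain name, UUID, power state.
printf '%s\n' \
  'guest1 11111111-2222-3333-4444-555555555555 on' \
  'guest2 66666666-7777-8888-9999-aaaaaaaaaaaa off' |
awk '$3 == "on" {print $1}'    # prints: guest1
```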
    
  4. Test that another node can be controlled:
    [root@nodea ~]# fence_xvm -a 225.0.0.12 -k /etc/cluster/fence_xvm.key -H guest2 -o reboot
    
Practice: fence device resource management for the nodes (VM nodes):
  1. Create a fence resource that can manage the nodes:
    [root@nodea ~]# pcs stonith create fence_nodea fence_xvm \
    > port="guest1" \
    > pcmk_host_check="static-list" \
    > pcmk_host_list="nodea.example.com nodeb.example.com nodec.example.com noded.example.com"
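
Note that the device above fences only the single domain port="guest1" even though pcmk_host_list names four nodes. When each cluster node corresponds to a different libvirt domain, pacemaker's pcmk_host_map option maps node names to domain names instead. A sketch of that variant (the device name fence_cluster and the node:domain pairs are assumptions; the leading echo makes this a dry run, so remove it to actually create the resource):

```shell
# Dry-run sketch: one fence_xvm device mapping cluster node names to
# libvirt domain names via pcmk_host_map. All names are hypothetical;
# remove the leading 'echo' to actually create the resource.
echo pcs stonith create fence_cluster fence_xvm \
    pcmk_host_map="nodea.example.com:guest1;nodeb.example.com:guest2" \
    pcmk_host_check="static-list"
```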
    
  2. Test the behavior:
    [root@nodea ~]# pcs stonith fence noded.example.com
    
  3. Update the resource settings:
    [root@nodea ~]# pcs stonith update fence_nodea \
    > pcmk_host_check="none" \
    > pcmk_host_list=""
    
  4. Test the behavior:
    [root@nodea ~]# pcs stonith fence noded.example.com
    [root@nodea ~]# corosync-quorumtool -m