Thursday, December 10, 2015

OpenStack on RHEL (Part 7) -- Managing the Cinder Block Storage Service

The Cinder Block Storage Service

  1. The OpenStack block storage service consists of four sub-services:
    • API service (openstack-cinder-api)
    • Scheduler service (openstack-cinder-scheduler)
    • Volume service (openstack-cinder-volume)
    • Backup service (openstack-cinder-backup)

  2. Deploying the block storage service involves three stages:
    • Pre-installation preparation
    • General installation and configuration
    • Volume service backend configuration

  3. Storage backends supported by the block storage service:
    • LVM/iSCSI
    • ThinLVM
    • NFS
    • NetApp NFS
    • Red Hat Storage (Gluster)
    • Dell EqualLogic

  4. openstack-cinder-volume: attaches volumes carved from a volume group directly to running servers.

  5. Once the Cinder services are running, volumes can be managed with the cinder command.

Exercise 1:
  1. # grep CINDER /root/answers.txt
    CONFIG_CINDER_INSTALL=y
    CONFIG_CINDER_DB_PW=xxxx
    CONFIG_CINDER_KS_PW=xxxx
    CONFIG_CINDER_BACKEND=lvm
    CONFIG_CINDER_VOLUMES_CREATE=n
    CONFIG_CINDER_VOLUMES_SIZE=20G
    ...
  2. # ps axf | grep cinder
  3. # crudini --get /etc/cinder/cinder.conf DEFAULT osapi_volume_workers
  4. # grep listen_port /etc/cinder/cinder.conf
  5. # ss -nlp | egrep 8776
  6. # ls /etc/cinder/
  7. # vim /etc/cinder/cinder.conf
    rabbit_host=10.1.1.1
    rabbit_port=5672
    rabbit_hosts=10.1.1.1:5672
    rabbit_use_ssl=False
    ====== Settings for Glance integration ========
    glance_host=10.1.1.1
    ====== Enable the LVM storage backend =======
    [lvm]
    iscsi_helper=lioadm
    volume_group=cinder-volumes
    iscsi_ip_address=10.1.1.1
    volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
    volume_backend_name=lvm
  8. # vim /etc/cinder/api-paste.ini
    [filter:authtoken]
    paste.filter_factory = keystonemiddleware.auth_token:filter_factory
    admin_tenant_name=services
    service_port=5000
    auth_uri=http://10.1.1.1:5000/
    service_host=10.1.1.1
    service_protocol=http
    admin_user=cinder
    identity_uri=http://10.1.1.1:35357
    admin_password=xxxxx
  9. # source /root/keystonerc_admin
  10. # keystone service-get cinder
  11. # keystone catalog --service volume
  12. # keystone user-get cinder
  13. # cinder list
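Step 3 above reads a single key out of cinder.conf with crudini. When crudini is not installed, the same lookup can be approximated with awk. The sketch below is illustrative only: the sample file and the `ini_get` helper name are assumptions, not part of the exercise.

```shell
# Illustrative sample file standing in for /etc/cinder/cinder.conf
cat > /tmp/sample-cinder.conf <<'EOF'
[DEFAULT]
osapi_volume_workers=4
[lvm]
volume_group=cinder-volumes
EOF

# ini_get FILE SECTION KEY — approximates `crudini --get FILE SECTION KEY`
ini_get() {
  awk -F= -v s="[$2]" -v k="$3" '
    $0 == s         { in_s = 1; next }   # entered the wanted section
    /^\[/           { in_s = 0 }         # any other section header ends it
    in_s && $1 == k { print $2; exit }   # first matching key wins
  ' "$1"
}

ini_get /tmp/sample-cinder.conf DEFAULT osapi_volume_workers   # -> 4
ini_get /tmp/sample-cinder.conf lvm volume_group               # -> cinder-volumes
```

Unlike crudini, this sketch does no quoting or whitespace handling; it only shows the section/key lookup idea.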
Managing Cinder Volumes
  1. Volume management tasks in Cinder:
    • Backup and migration
    • Creating encryption settings for volume types
    • Finding the endpoints registered with the Identity service
    • Resizing existing volumes

  2. Commands needed:
    • # cinder --help
    • # vgdisplay
    • # lvdisplay
Exercise 2:
  1. # openstack-status
  2. # source /root/keystonerc_myuser
  3. # cinder create --display-name vol1 2
  4. # cinder list
  5. # vgs
  6. # cinder show vol1
  7. # grep backup_driver /etc/cinder/cinder.conf
    backup_driver=cinder.backup.drivers.swift
  8. # systemctl start openstack-cinder-backup.service
  9. # systemctl enable openstack-cinder-backup.service
  10. Before creating backups as myuser, the SwiftOperator role must be granted first:
    • # source /root/keystonerc_admin
    • # keystone user-role-add --role SwiftOperator --user myuser --tenant myopenstack
    • # source /root/keystonerc_myuser
  11. # cinder backup-create vol1 --display-name vol1-backup
  12. # cinder backup-list
  13. # swift list
  14. # swift list volumebackups
  15. To delete:
    • cinder delete vol1
    • cinder backup-delete ID (the backup ID, not the volume ID)
Using Ceph as the Cinder Backend

Cinder's default backend is LVM; it can be switched to another storage service, such as Ceph.

Exercise 3:
  1. Log in to the Ceph host
  2. # ceph osd pool create volumes 128
  3. # ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes'
  4. Log in to the Cinder host
  5. # yum -y install ceph-common
  6. # vim /etc/ceph/ceph.client.cinder.keyring  ==> paste in the keyring generated in step 3
  7. # chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
  8. Copy /etc/ceph/ceph.conf from the Ceph host to /etc/ceph/ceph.conf on the Cinder host
  9. # vim /root/client.cinder.key ==> paste in only the "key" value generated in step 3
    If it was lost, run # ceph auth get-key client.cinder on the Ceph host
  10. # uuidgen
  11. # vim /root/secret.xml
    <secret ephemeral='no' private='no'>
      <uuid>UUID generated in the previous step</uuid>
      <usage type='ceph'>
        <name>client.cinder secret</name>
      </usage>
    </secret>
  12. # virsh secret-define --file /root/secret.xml
  13. # virsh secret-set-value --secret <UUID from the previous step> --base64 $(cat /root/client.cinder.key)
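Steps 10 through 12 above can be sketched as one short script. This is an illustrative sketch only: it writes to /tmp instead of /root so it can run anywhere, and the virsh calls (which need libvirt) are left as comments.

```shell
# Generate a UUID (fall back to the kernel's UUID source if uuidgen is absent)
UUID=$(uuidgen 2>/dev/null || cat /proc/sys/kernel/random/uuid)

# Build secret.xml around the generated UUID
cat > /tmp/secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>${UUID}</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF

grep -c "$UUID" /tmp/secret.xml    # -> 1 (the UUID landed in the file)

# On the real Cinder host the secret would then be registered with libvirt:
# virsh secret-define --file /tmp/secret.xml
# virsh secret-set-value --secret "$UUID" --base64 "$(cat /root/client.cinder.key)"
```

The same UUID must later be reused as rbd_secret_uuid in cinder.conf and nova.conf, which is why generating it once and keeping it in a variable helps.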
  14. # cp /etc/cinder/cinder.conf /etc/cinder/cinder.conf.orig
  15. # vim /etc/cinder/cinder.conf  ==> add the following:
    enabled_backends = rbd, lvm
    [rbd]
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    rbd_pool = volumes
    rbd_ceph_conf = /etc/ceph/ceph.conf
    rbd_flatten_volume_from_snapshot = false
    rbd_max_clone_depth = 5
    rbd_store_chunk_size = 4
    rados_connect_timeout = -1
    glance_api_version = 2
    rbd_user = cinder
    rbd_secret_uuid = <UUID from the previous step>
  16. vim /etc/nova/nova.conf
    rbd_user=cinder
    rbd_secret_uuid=<UUID from the previous step>
  17. # openstack-service restart nova
  18. # openstack-service restart cinder
  19. # source /root/keystonerc_admin
  20. # cinder create --display-name cephvol 1
  21. # cinder list
  22. On the Ceph host, verify the volume contents: # rados -p volumes ls
  23. To delete:  # cinder delete cephvol

Using Gluster as the Cinder Backend
GlusterFS is the file system behind Red Hat Storage and can serve as a Cinder backend.
To enable it, /etc/cinder/cinder.conf must be adjusted:
  1. # yum install -y glusterfs-fuse
  2. # vim /etc/cinder/cinder.conf
    [DEFAULT]
    enabled_backends=lvm1,glusterfs1
    [lvm1]
    volume_group=cinder-volumes
    volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
    volume_backend_name=lvm
    [glusterfs1]
    volume_driver=cinder.volume.drivers.glusterfs.GlusterfsDriver
    glusterfs_shares_config=/etc/cinder/shares.conf
    volume_backend_name=rhs
    [nfs1]
    volume_driver=cinder.volume.drivers.nfs.NfsDriver
    nfs_shares_config=/etc/cinder/nfsshares.conf
    volume_backend_name=nfs
  3. # vim /etc/cinder/shares.conf ==> create manually
    storage-server:/volumeX
    storage-server:/volumeY
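The shares.conf entries above follow a `host:/volume` pattern. A quick sanity check before restarting Cinder can catch malformed lines; the snippet below is an illustrative sketch (the /tmp path and the hostnames are placeholders taken from the text).

```shell
# Recreate the example shares.conf (hostnames are placeholders)
cat > /tmp/shares.conf <<'EOF'
storage-server:/volumeX
storage-server:/volumeY
EOF

# Count lines that do NOT match the host:/volume form; 0 means all entries look valid
grep -vcE '^[A-Za-z0-9._-]+:/[^ ]+$' /tmp/shares.conf    # -> 0
```

A malformed entry here only shows up later as a mount failure in /var/log/cinder/volume.log, so checking the file format first saves a restart cycle.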
  4. # source /root/keystonerc_admin
  5. # cinder type-create lvm  ==> with multiple backends, matching volume types must be declared
  6. # cinder type-key lvm set volume_backend_name=lvm
  7. # cinder type-create glusterfs
  8. # cinder type-key glusterfs set volume_backend_name=rhs
  9. # cinder type-create nfs
  10. # cinder type-key nfs set volume_backend_name=nfs
  11. # openstack-service restart cinder
  12. # tail -f /var/log/cinder/volume.log
  13. # cinder create --display-name glustertest --volume-type glusterfs 1
  14. # cinder list
Exercise 4:

  1. # yum install -y glusterfs-fuse
  2. # cp /etc/cinder/cinder.conf /etc/cinder/cinder.conf.orig2
  3. # ping -c 3 rhs
  4. # showmount -e rhs
  5. # glusterfs -s rhs --volfile-id=/volume0 /mnt/
  6. # df
  7. # umount /mnt
  8. # crudini --set /etc/cinder/cinder.conf DEFAULT enabled_backends glusterfs,lvm
  9. # crudini --set /etc/cinder/cinder.conf lvm volume_group cinder-volumes
  10. # crudini --set /etc/cinder/cinder.conf lvm volume_driver cinder.volume.drivers.lvm.LVMISCSIDriver
  11. # crudini --set /etc/cinder/cinder.conf lvm volume_backend_name LVM
  12. # crudini --set /etc/cinder/cinder.conf glusterfs volume_driver cinder.volume.drivers.glusterfs.GlusterfsDriver
  13. # crudini --set /etc/cinder/cinder.conf glusterfs glusterfs_shares_config /etc/cinder/shares.conf
  14. # crudini --set /etc/cinder/cinder.conf glusterfs glusterfs_sparsed_volumes false
  15. # crudini --set /etc/cinder/cinder.conf glusterfs volume_backend_name RHS
  16. # vim /etc/cinder/shares.conf
    rhs:/volume0
  17. # openstack-service restart cinder
  18. # tail /var/log/cinder/volume.log
  19. # df
  20. # source /root/keystonerc_admin
  21. # cinder type-create lvm
  22. # cinder type-key lvm set volume_backend_name=LVM
  23. # cinder type-create glusterfs
  24. # cinder type-key glusterfs set volume_backend_name=RHS
  25. # cinder type-list
  26. # cinder create --volume-type lvm --display-name vol2 1
  27. # cinder list
  28. # cinder create --volume-type glusterfs --display-name vol3 1
  29. # cinder list
  30. # df
  31. # cinder create --volume-type lvm --display-name vol4 1
  32. # cinder list
  33. Cleanup:
    # cinder delete vol2
    # cinder delete vol3
    # cinder delete vol4
  34. # cinder type-list
  35. Cleanup:
    # cinder type-list
    # cinder type-delete lvm
    # cinder type-delete glusterfs
    # cp /etc/cinder/cinder.conf.orig2 /etc/cinder/cinder.conf
    # chown cinder:cinder /etc/cinder/cinder.conf
  36. # openstack-service restart cinder

Debugging
Exercise 5: