- The Block Storage service in OpenStack is made up of four sub-services:
- API service (openstack-cinder-api)
- Scheduler service (openstack-cinder-scheduler)
- Volume service (openstack-cinder-volume)
- Backup service (openstack-cinder-backup)
- Deploying the Block Storage service consists of three stages:
- Pre-installation preparation
- General installation and configuration
- Volume service specification settings
- Storage backends supported by the Block Storage service:
- LVM/iSCSI
- ThinLVM
- NFS
- NetApp NFS
- Red Hat Storage (Gluster)
- Dell EqualLogic
- openstack-cinder-volume: can attach volumes from the volume group directly to a running server.
- Once the Cinder services are running, Cinder volumes can be managed with the cinder command.
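As a quick sanity check (assuming the RDO-style systemd unit names used elsewhere in these notes), the four sub-services can be inspected with systemctl:
- # systemctl status openstack-cinder-api openstack-cinder-scheduler openstack-cinder-volume openstack-cinder-backup ==> all four units should be active (the backup service may still be disabled at this point)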
Exercise 1:
- # grep CINDER /root/answers.txt
CONFIG_CINDER_INSTALL=y
CONFIG_CINDER_DB_PW=xxxx
CONFIG_CINDER_KS_PW=xxxx
CONFIG_CINDER_BACKEND=lvm
CONFIG_CINDER_VOLUMES_CREATE=n
CONFIG_CINDER_VOLUMES_SIZE=20G
...
- # ps axf | grep cinder
- # crudini --get /etc/cinder/cinder.conf DEFAULT osapi_volume_workers
- # grep listen_port /etc/cinder/cinder.conf
- # ss -nlp | egrep 8776
- # ls /etc/cinder/
- # vim /etc/cinder/cinder.conf
rabbit_host=10.1.1.1
rabbit_port=5672
rabbit_hosts=10.1.1.1:5672
rabbit_use_ssl=False
====== Settings needed to work with Glance ========
glance_host=10.1.1.1
====== Enable the LVM storage backend =======
[lvm]
iscsi_helper=lioadm
volume_group=cinder-volumes
iscsi_ip_address=10.1.1.1
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
volume_backend_name=lvm
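The same [lvm] settings can also be written non-interactively with crudini (used elsewhere in these notes); a minimal sketch:
- # crudini --set /etc/cinder/cinder.conf lvm volume_driver cinder.volume.drivers.lvm.LVMISCSIDriver
- # crudini --set /etc/cinder/cinder.conf lvm volume_group cinder-volumes
- # crudini --set /etc/cinder/cinder.conf lvm iscsi_helper lioadm
- # crudini --set /etc/cinder/cinder.conf lvm volume_backend_name lvm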
- # vim /etc/cinder/api-paste.ini
[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
admin_tenant_name=services
service_port=5000
auth_uri=http://10.1.1.1:5000/
service_host=10.1.1.1
service_protocol=http
admin_user=cinder
identity_uri=http://10.1.1.1:35357
admin_password=xxxxx
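Changes to cinder.conf and api-paste.ini only take effect after the Cinder services are restarted; a quick sketch using the helper that appears later in these notes:
- # openstack-service restart cinder ==> restarts every openstack-cinder-* unit
- # systemctl status openstack-cinder-api ==> confirm the API service came back up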
- # source /root/keystonerc_admin
- # keystone service-get cinder
- # keystone catalog --service volume
- # keystone user-get cinder
- # cinder list
- Cinder volume management tasks:
- Backup and migration
- Create encryption settings for a volume type
- Find the endpoints registered with the Identity service
- Resize existing volumes (see the cinder extend sketch below)
- Commands used:
- # cinder --help
- # vgdisplay
- # lvdisplay
- # openstack-status
- # source /root/keystonerc_myuser
- # cinder create --display-name vol1 2
- # cinder list
- # vgs
- # cinder show vol1
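Resizing an existing volume (one of the tasks listed above) can be sketched as follows; vol1 and the target size of 4 GB are just an example built on the 2 GB volume created above:
- # cinder extend vol1 4 ==> grow vol1 from 2 GB to 4 GB (the volume must be detached and available)
- # cinder show vol1 ==> the size field should now read 4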
- # grep backup_driver /etc/cinder/cinder.conf
backup_driver=cinder.backup.drivers.swift
- # systemctl start openstack-cinder-backup.service
- # systemctl enable openstack-cinder-backup.service
- Before creating a backup as myuser, the SwiftOperator role must be granted to that user:
- # source /root/keystonerc_admin
- # keystone user-role-add --role SwiftOperator --user myuser --tenant myopenstack
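The assignment can be verified before switching back to the user; a quick check with the keystone v2 CLI:
- # keystone user-role-list --user myuser --tenant myopenstack ==> SwiftOperator should appear in the list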
- # source /root/keystonerc_myuser
- # cinder backup-create vol1 --display-name vol1-backup
- # cinder backup-list
- # swift list
- # swift list volumebackups
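A backup can also be restored into a new volume; a minimal sketch, using the backup ID shown by cinder backup-list:
- # cinder backup-restore <backup-ID> ==> creates a new volume from the backup
- # cinder list ==> the restored volume appears once the backup service finishes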
- Cleanup:
- cinder delete vol1
- cinder backup-delete ID (the backup ID, not the volume ID)
The default Cinder backend is LVM; it can be switched to another storage service, for example Ceph.
Exercise 3:
- Log in to the Ceph host
- # ceph osd pool create volumes 128
- # ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes'
- Log in to the Cinder host
- # yum -y install ceph-common
- # vim /etc/ceph/ceph.client.cinder.keyring ==> paste in the content produced by step 3
- # chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
- Copy /etc/ceph/ceph.conf from the Ceph host to /etc/ceph/ceph.conf on the Cinder host
- # vim /root/client.cinder.key ==> paste in only the "key" value produced in step 3
If it was lost, run # ceph auth get-key client.cinder on the Ceph host
- # uuidgen
- # vim /root/secret.xml
<secret ephemeral='no' private='no'>
<uuid>the UUID generated in the previous step</uuid>
<usage type='ceph'>
<name>client.cinder secret</name>
</usage>
</secret>
- # virsh secret-define --file /root/secret.xml
- # virsh secret-set-value --secret <UUID from the previous step> --base64 $(cat /root/client.cinder.key)
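To confirm that libvirt stored the secret, it can be listed; a quick check:
- # virsh secret-list ==> the UUID defined above should be shown with a ceph usage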
- # cp /etc/cinder/cinder.conf /etc/cinder/cinder.conf.orig
- # vim /etc/cinder/cinder.conf ==> add the following:
enabled_backends = rbd, lvm
[rbd]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = cinder
rbd_secret_uuid = <UUID from the previous step>
- vim /etc/nova/nova.conf
rbd_user=cinder
rbd_secret_uuid=<UUID from the previous step>
- # openstack-service restart nova
- # openstack-service restart cinder
- # source /root/keystonerc_admin
- # cinder create --display-name cephvol 1
- # cinder list
- On the Ceph host, check the volume contents: # rados -p volumes ls
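The RBD image backing the volume can also be inspected directly on the Ceph host; a sketch assuming the rbd CLI is available (the RBD driver normally names images volume-<UUID>, treat that naming as an assumption here):
- # rbd ls volumes ==> lists the images stored in the volumes pool
- # rbd info volumes/volume-<UUID> ==> shows the 1 GB image created above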
- Cleanup:
# cinder delete cephvol
Using Gluster as a Cinder backend
GlusterFS is the file system from Red Hat Storage and can be used as a data backend for Cinder.
To enable this, the contents of /etc/cinder/cinder.conf must be adjusted.
- # yum install -y glusterfs-fuse
- # vim /etc/cinder/cinder.conf
[DEFAULT]
enabled_backends=lvm1,glusterfs1
[lvm1]
volume_group=cinder-volumes
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
volume_backend_name=lvm
[glusterfs1]
volume_driver=cinder.volume.drivers.glusterfs.GlusterfsDriver
glusterfs_shares_config=/etc/cinder/shares.conf
volume_backend_name=rhs
[nfs1]
volume_driver=cinder.volume.drivers.nfs.NfsDriver
nfs_shares_config=/etc/cinder/nfsshares.conf
volume_backend_name=nfs
- # vim /etc/cinder/shares.conf ==> create this file manually
storage-server:/volumeX
storage-server:/volumeY
- # source /root/keystonerc_admin
- # cinder type-create lvm ==> with multiple backends, a matching volume type must be declared and created for each one
- # cinder type-key lvm set volume_backend_name=lvm
- # cinder type-create glusterfs
- # cinder type-key glusterfs set volume_backend_name=rhs
- # cinder type-create nfs
- # cinder type-key nfs set volume_backend_name=nfs
- # openstack-service restart cinder
- # tail -f /var/log/cinder/volume.log
- # cinder create --display-name glustertest --volume-type glusterfs 1
- # cinder list
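As admin, cinder show also reveals which backend actually served the request; a quick check on the volume just created:
- # cinder show glustertest | grep host ==> the os-vol-host-attr:host field should point at the glusterfs backend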
- # yum install -y glusterfs-fuse
- # cp /etc/cinder/cinder.conf /etc/cinder/cinder.conf.orig2
- # ping -c 3 rhs
- # showmount -e rhs
- # glusterfs -s rhs --volfile-id=/volume0 /mnt/
- # df
- # umount /mnt
- # crudini --set /etc/cinder/cinder.conf DEFAULT enabled_backends glusterfs,lvm
- # crudini --set /etc/cinder/cinder.conf lvm volume_group cinder-volumes
- # crudini --set /etc/cinder/cinder.conf lvm volume_driver cinder.volume.drivers.lvm.LVMISCSIDriver
- # crudini --set /etc/cinder/cinder.conf lvm volume_backend_name LVM
- # crudini --set /etc/cinder/cinder.conf glusterfs volume_driver cinder.volume.drivers.glusterfs.GlusterfsDriver
- # crudini --set /etc/cinder/cinder.conf glusterfs glusterfs_shares_config /etc/cinder/shares.conf
- # crudini --set /etc/cinder/cinder.conf glusterfs glusterfs_sparsed_volumes false
- # crudini --set /etc/cinder/cinder.conf glusterfs volume_backend_name RHS
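The values written with crudini can be read back before restarting; a quick sketch:
- # crudini --get /etc/cinder/cinder.conf DEFAULT enabled_backends
- # crudini --get /etc/cinder/cinder.conf glusterfs volume_backend_name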
- # vim /etc/cinder/shares.conf
rhs:/volume0
- # openstack-service restart cinder
- # tail /var/log/cinder/volume.log
- # df
- # source /root/keystonerc_admin
- # cinder type-create lvm
- # cinder type-key lvm set volume_backend_name=LVM
- # cinder type-create glusterfs
- # cinder type-key glusterfs set volume_backend_name=RHS
- # cinder type-list
- # cinder create --volume-type lvm --display-name vol2 1
- # cinder list
- # cinder create --volume-type glusterfs --display-name vol3 1
- # cinder list
- # df
- # cinder create --volume-type lvm --display-name vol4 1
- # cinder list
- Cleanup (volumes):
# cinder delete vol2
# cinder delete vol3
# cinder delete vol4
- # cinder type-list
- Cleanup (volume types and configuration):
# cinder type-list
# cinder type-delete lvm
# cinder type-delete glusterfs
# cp /etc/cinder/cinder.conf.orig2 /etc/cinder/cinder.conf
# chown cinder:cinder /etc/cinder/cinder.conf
- # openstack-service restart cinder
Debugging
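Typical starting points when a volume gets stuck in the creating or error state; a minimal sketch:
- # cinder service-list ==> shows whether each cinder-volume backend host is up (admin credentials required)
- # tail -f /var/log/cinder/volume.log ==> driver errors usually show up here
- # grep -i error /var/log/cinder/*.log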
Exercise 4: