
RBD, short for RADOS Block Device, is the block device service that Ceph exposes to clients.

RADOS is short for Reliable Autonomic Distributed Object Store.

1.Verify the Linux Kernel Version

##
#  Verify that you have an appropriate version of the Linux kernel. 
#  See OS Recommendations for details.

yum install -y redhat-lsb

lsb_release -a
uname -r
[root@ec-k8s-m1 ceph-rbd]# lsb_release -a
LSB Version:	:core-4.1-amd64:core-4.1-noarch:cxx-4.1-amd64:cxx-4.1-noarch:desktop-4.1-amd64:desktop-4.1-noarch:languages-4.1-amd64:languages-4.1-noarch:printing-4.1-amd64:printing-4.1-noarch
Distributor ID:	CentOS
Description:	CentOS Linux release 7.5.1804 (Core) 
Release:	7.5.1804
Codename:	Core

[root@ec-k8s-m1 ceph-rbd]# uname -r
3.10.0-862.6.3.el7.x86_64
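
If you plan to use the kernel RBD client, it can also help to confirm that the rbd kernel module is actually available on this kernel. This check is not part of the official quick start; a minimal sketch on a stock CentOS 7 kernel might be:

##
#  Optional: confirm that the rbd kernel module exists and can be loaded.

modinfo rbd
modprobe rbd && lsmod | grep rbd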

2.Install ceph-client

The ceph-client node should not run on the same physical host as the Ceph storage cluster nodes (unless you are using VMs).

##
#  On the admin node, add a host entry for the ceph-client node.
#
[root@ec-k8s-n1 my-cluster]# vim /etc/hosts
# client-ip
172.16.0.52  ceph-client
##
#  On the admin node, use ceph-deploy to install Ceph on your ceph-client node.

ceph-deploy install ceph-client
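
A quick way to confirm the client install succeeded (my own sanity check, not part of the quick start) is to ask the freshly installed packages for their version on the ceph-client node:

##
#  Optional: verify the Ceph packages installed on ceph-client.

ceph --version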

3.Create a Pool

##
#  ceph osd pool create {pool-name} {pg-num}
#  recommend the name ‘rbd’
#  On the admin node

[root@ec-k8s-n1 my-cluster]# ceph osd pool create rbd 32
pool 'rbd' created

[root@ec-k8s-n1 my-cluster]# ceph osd lspools 
1 .rgw.root
2 default.rgw.control
3 default.rgw.meta
4 default.rgw.log
5 rbd
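
If you want to double-check the placement-group count and replica size the new pool ended up with, they can be queried from the admin node; a small sketch:

##
#  Optional: inspect the pool's pg_num and replication size.

ceph osd pool get rbd pg_num
ceph osd pool get rbd size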

4.Initialize the Pool

##
#  On the admin node, use the rbd tool to initialize the pool for use by RBD:

[root@ec-k8s-n1 my-cluster]# rbd pool init rbd
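
rbd pool init also tags the pool with the rbd application. On a Luminous or later cluster (an assumption about the release in use here) the tag can be confirmed like this:

##
#  Optional: confirm the pool is tagged for the rbd application.

ceph osd pool application get rbd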

5.Configure a Block Device

##
#  On the admin node, copy the configuration file and admin key to ceph-client node

[root@ec-k8s-n1 ceph]# ceph-deploy admin ec-k8s-m2
##
#  On the ceph-client node, create a block device image
#  rbd create foo --size 4096 --image-feature layering [-m {mon-IP}] [-k /path/to/ceph.client.admin.keyring] [-p {pool-name}]
#  rbd create foo --size 4G --image-feature layering

[root@ec-k8s-m2 ceph]# rbd create foo --size 4096 --image-feature layering -m ec-k8s-n1 -k /etc/ceph/ceph.client.admin.keyring -p rbd

[root@ec-k8s-m2 ceph]# rbd ls
foo
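
To see the size, object layout and feature flags the image was created with, rbd info can be run on the ceph-client node, for example:

##
#  Optional: show the image's size, object size and enabled features.

rbd info foo
rbd info rbd/foo    # same image, with the pool name spelled out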

6.Map the Image

##
#  On the ceph-client node, map the image to a block device.
#   rbd map foo --name client.admin [-m {mon-IP}] [-k /path/to/ceph.client.admin.keyring] [-p {pool-name}]

[root@ec-k8s-m2 ceph]# rbd map foo --name client.admin
/dev/rbd0

##
#  Check that the mapped image now shows up as a local block device.
[root@ec-k8s-m2 ceph]# lsblk 
NAME                      MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                         8:0    0   58G  0 disk 
├─sda1                      8:1    0    1G  0 part /boot
└─sda2                      8:2    0   57G  0 part 
  ├─centos-root           253:0    0   37G  0 lvm  /
  ├─centos-swap           253:1    0    2G  0 lvm  
  └─centos-home           253:2    0   18G  0 lvm  /home
sdb                         8:16   0   28G  0 disk 
└─sdb1                      8:17   0   28G  0 part 
sr0                        11:0    1 1024M  0 rom  
rbd0                      252:0    0    4G  0 disk 
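
The kernel client keeps a list of which images are mapped to which /dev/rbdX devices; rbd showmapped prints it, which is useful once several images are mapped on the same node:

##
#  Optional: list all RBD images currently mapped on this node.

rbd showmapped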

7.Creating a File System

##
#  Use the block device by creating a file system on the ceph-client node.
#  mkfs.ext4 -m0 /dev/rbd0

[root@ec-k8s-m2 ceph]# mkfs.ext4 -m0 /dev/rbd/rbd/foo
mke2fs 1.42.9 (28-Dec-2013)
Discarding device blocks: done                            
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=1024 blocks, Stripe width=1024 blocks
262144 inodes, 1048576 blocks
0 blocks (0.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=1073741824
32 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks: 
	32768, 98304, 163840, 229376, 294912, 819200, 884736

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done 
##
#  Mount the file system on the ceph-client node.

[root@ec-k8s-m2 ceph]# mkdir /mnt/ceph-block-device

# mount /dev/rbd0 /mnt/ceph-block-device
[root@ec-k8s-m2 ceph]# mount /dev/rbd/rbd/foo /mnt/ceph-block-device
##
#  Verify the mount.
[root@ec-k8s-m2 ceph-block-device]# df -h
......
/dev/rbd0                3.9G   16M  3.8G   1% /mnt/ceph-block-device
##
#  Unmount the file system when finished.
[root@ec-k8s-m2 mnt]# umount /mnt/ceph-block-device/
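
Once unmounted, the image can also be unmapped, and if the mapping should survive a reboot it can be listed in /etc/ceph/rbdmap so the rbdmap service re-maps it at boot. The entry format below follows the rbdmap man page; the keyring path is an assumption, adjust it to your setup:

##
#  Optional: unmap the image when it is no longer needed.

rbd unmap /dev/rbd0

##
#  Optional: persistent mapping across reboots via the rbdmap service.
#  /etc/ceph/rbdmap entry:
#    rbd/foo    id=admin,keyring=/etc/ceph/ceph.client.admin.keyring

systemctl enable rbdmap.service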

8.Removing a Block Device Image

##
#  Remove the image directly. To remove a block device, execute the following, but replace {image-name} with the name of the image you want to remove:
rbd rm {image-name}
rbd rm {pool-name}/{image-name}

#  For example:
rbd rm foo

## 
#  Move the image to the trash. To defer delete a block device from a pool, execute the following, but replace {image-name} with the name of the image to move and replace {pool-name} with the name of the pool:

rbd trash mv {pool-name}/{image-name}

##
#  Remove the image from the trash. To remove a deferred block device from a pool, execute the following, but replace {image-id} with the id of the image to remove and replace {pool-name} with the name of the pool:

rbd trash rm {pool-name}/{image-id}

##
#  Restore the image from the trash. To restore a deferred delete block device, execute the following, but replace {image-id} with the id of the image and {pool-name} with the name of the pool:
rbd trash ls
rbd trash restore {image-id}
rbd trash restore {pool-name}/{image-id}
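
Putting these together with the image from this walkthrough, a deferred delete of foo could look like the sketch below; {image-id} is a placeholder, take the real id from the rbd trash ls output:

##
#  Example: defer-delete foo, list the trash, then either restore or purge it.

rbd trash mv rbd/foo
rbd trash ls rbd                    # note the image id printed here
rbd trash restore rbd/{image-id}    # bring it back, or:
rbd trash rm rbd/{image-id}         # delete it for good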


9.Delete a Pool

##
#  To delete a pool, execute:

ceph osd pool delete {pool-name} [{pool-name} --yes-i-really-really-mean-it]
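
Note that recent Ceph releases refuse pool deletion unless the monitors have mon_allow_pool_delete enabled. One way to allow it temporarily and then delete the rbd pool created above (an assumption about your configuration, adjust as needed):

##
#  Optional: allow pool deletion on the monitors, then delete the rbd pool.

ceph tell mon.\* injectargs '--mon-allow-pool-delete=true'
ceph osd pool delete rbd rbd --yes-i-really-really-mean-it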


References:

http://docs.ceph.com/docs/master/start/quick-rbd/
http://docs.ceph.com/docs/master/rbd/rados-rbd-cmds/#removing-a-block-device-image