Ceph is a reliable, scalable, self-rebalancing, and self-healing distributed storage system. By use case it can be split into three parts: object storage via RADOSGW (Reliable, Autonomic, Distributed Object Storage Gateway), block storage via RBD (RADOS Block Device), and a file system service via CephFS (Ceph Filesystem). Ceph can combine many servers into one very large cluster, pool the disks of those machines into a single large resource pool (at the PB scale), and then hand out capacity from that pool to applications on demand.

1. Ceph Features

  • High performance
    a. Abandons the traditional centralized metadata-addressing scheme and uses the CRUSH algorithm instead, giving an even data distribution and high parallelism.
    b. Takes failure-domain isolation into account and supports replica-placement rules for all kinds of workloads, e.g. cross-datacenter placement and rack awareness.
    c. Scales to thousands of storage nodes and supports data volumes from TB to PB.
  • High availability
    a. The number of replicas can be controlled flexibly.
    b. Supports failure-domain separation and strong data consistency.
    c. Repairs and heals itself automatically in a variety of failure scenarios.
    d. No single point of failure; management is automatic.
  • High scalability
    a. Decentralized.
    b. Flexible to expand.
    c. Grows linearly as nodes are added.
  • Rich features
    a. Supports three storage interfaces: block, file, and object storage.
    b. Supports custom interfaces and drivers for multiple languages.

2. Ceph Core Components and Concepts

  • Monitor
    A Ceph cluster needs a small quorum of several Monitors. They synchronize with each other via Paxos and store the OSDs' metadata.

  • OSD
    OSD stands for Object Storage Device; it is the daemon that answers client requests and returns the actual data. A Ceph cluster usually has many OSDs.

  • MDS
    MDS stands for Ceph Metadata Server; it is the metadata service that CephFS depends on.

  • Object
    Ceph's lowest-level storage unit is the Object; each Object contains metadata and the raw data.

  • PG
    PG stands for Placement Group. It is a logical concept: one PG maps onto multiple OSDs. The PG layer exists to make data distribution and data lookup easier.

  • RADOS
    RADOS stands for Reliable Autonomic Distributed Object Store. It is the core of a Ceph cluster and is used to implement data distribution, failover, and other cluster-wide operations.

  • Librados
    Librados is the library that RADOS provides. Because RADOS is a protocol that is hard to access directly, the upper layers (RBD, RGW, and CephFS) all go through librados. Bindings are currently available for PHP, Ruby, Java, Python, C, and C++.

  • CRUSH
    CRUSH is the data-distribution algorithm Ceph uses. Similar to consistent hashing, it places data in predictable locations (see the example after this list).

  • RBD
    RBD stands for RADOS Block Device; it is the block device service Ceph exposes.

  • RGW
    RGW stands for RADOS Gateway; it is the object storage service Ceph exposes, with an interface compatible with S3 and Swift.

  • CephFS
    CephFS stands for Ceph File System; it is the file system service Ceph exposes.
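
To see PG and CRUSH in action once the cluster built below is running, you can ask Ceph where an object name would be placed. A minimal sketch: the pool name "rbd" and the object name "myobject" are only illustrative.

##
#  Map an object name to its PG and to the OSD set chosen by CRUSH
ceph osd map rbd myobject
#  The output shows the PG id and the up/acting OSD set for that object.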

3. Preflight Checklist

4. Install the Ceph Deployment Tool

##
#  Install the EPEL repository
sudo yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
##
#  Releases: http://docs.ceph.com/docs/master/releases/
#  Note: if this file is named ceph.repo, the later "ceph-deploy install" step may fail (see section 7)
#
cat << EOM > /etc/yum.repos.d/ceph.repo
[ceph-noarch]
name=Ceph noarch packages
baseurl=https://download.ceph.com/rpm-luminous/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
EOM
##
#  Alternative: install Ceph from the Aliyun yum mirror
#  Releases: http://docs.ceph.com/docs/master/releases/
#  Note: if this file is named ceph.repo, the later "ceph-deploy install" step may fail
#
cat << EOM > /etc/yum.repos.d/ceph-deploy.repo
[Ceph-SRPMS]
name=Ceph SRPMS packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-mimic/el7/SRPMS/
enabled=1
gpgcheck=0
type=rpm-md

[Ceph-aarch64]
name=Ceph aarch64 packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-mimic/el7/aarch64/
enabled=1
gpgcheck=0
type=rpm-md

[Ceph-noarch]
name=Ceph noarch packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-mimic/el7/noarch/
enabled=1
gpgcheck=0
type=rpm-md

[Ceph-x86_64]
name=Ceph x86_64 packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-mimic/el7/x86_64/
enabled=1
gpgcheck=0
type=rpm-md

EOM
##
#  Update the package index and install ceph-deploy (2.0.1-0.noarch)
#
sudo yum update
sudo yum install ceph-deploy

5. Ceph Node Installation

##
#  The admin node must be able to reach every Ceph node over SSH without a password.
#
5.1. Install NTP
##
#  Install NTP on all Ceph nodes (especially the Ceph Monitor nodes) to avoid failures caused by clock drift. See http://docs.ceph.com/docs/master/rados/configuration/mon-config-ref#clock

sudo yum install ntp ntpdate ntp-doc
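##
#  On CentOS 7 the time service should also be enabled at boot; a minimal
#  sketch (chronyd is an equally valid choice):
sudo systemctl enable ntpd
sudo systemctl start ntpd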
5.2. Install an SSH Server
##
#  Run on all Ceph nodes
#
sudo yum install openssh-server
5.3. Create a Ceph Deployment User
##
#  Skip this step if you deploy as root
#  Replace {username} with your own user name
ssh user@ceph-server
sudo useradd -d /home/{username} -m {username}
sudo passwd {username}

##
#  Make sure the newly created user has sudo privileges on every Ceph node.
echo "{username} ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/{username}
sudo chmod 0440 /etc/sudoers.d/{username}
5.4. Passwordless SSH Login
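##
#  The admin node needs passwordless SSH to every Ceph node. A minimal sketch,
#  assuming the node names used in this article (ec-k8s-n1/n2/n3) and the
#  deployment user created in 5.3:
#
#  Generate a key on the admin node (accept the defaults, leave the passphrase empty)
ssh-keygen
#
#  Copy the key to each Ceph node
ssh-copy-id {username}@ec-k8s-n1
ssh-copy-id {username}@ec-k8s-n2
ssh-copy-id {username}@ec-k8s-n3
#
#  Optionally record the user per host in ~/.ssh/config so ceph-deploy does not
#  need --username every time
cat << EOM >> ~/.ssh/config
Host ec-k8s-n2
   Hostname ec-k8s-n2
   User {username}
Host ec-k8s-n3
   Hostname ec-k8s-n3
   User {username}
EOM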
5.5. Open the Required Ports
##
#  By default, Ceph Monitors communicate on port 6789 and OSDs communicate on ports in the 6800:7300 range.
#
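#  On CentOS 7 with firewalld, a minimal sketch (run on the relevant nodes;
#  the zone name and which ports each node actually needs are assumptions to adapt):
sudo firewall-cmd --zone=public --add-port=6789/tcp --permanent
sudo firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent
sudo firewall-cmd --reload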
5.6. Disable SELinux
##
#  To make the SELinux change permanent, edit its configuration file /etc/selinux/config.
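#  A minimal sketch: disable enforcement immediately, then make it permanent
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config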
5.7. Priorities/Preferences
##
#  Make sure your package manager has the priorities/preferences package installed and enabled. On CentOS you may need to install EPEL; on RHEL you may need to enable the optional repository.
sudo yum install yum-plugin-priorities

6. Storage Cluster

##
#  Create a new cluster directory
mkdir my-cluster
cd my-cluster
6.1. Starting Over (Purge)
##
#  If you run into trouble at any point and want to start from scratch, clear the configuration with the following commands:

ceph-deploy purge {ceph-node} [{ceph-node}]
ceph-deploy purgedata {ceph-node} [{ceph-node}]
ceph-deploy forgetkeys
rm ceph.*

##
#  The following command also removes the Ceph packages themselves:

ceph-deploy purge {ceph-node} [{ceph-node}]

## If you ran purge, you must reinstall Ceph.
6.2. Create the Cluster
##
#  On the admin node, change into the configuration directory you just created and run the following ceph-deploy steps.
#  Create the cluster:

ceph-deploy new {initial-monitor-node(s)}
##
#  First attempt fails because a Python module is missing:
[root@ec-k8s-n1 my-cluster]# ceph-deploy new ec-k8s-n1 ec-k8s-n3
Traceback (most recent call last):
  File "/usr/bin/ceph-deploy", line 18, in <module>
    from ceph_deploy.cli import main
  File "/usr/lib/python2.7/site-packages/ceph_deploy/cli.py", line 1, in <module>
    import pkg_resources
ImportError: No module named pkg_resources

##
#  The cause is a missing python-setuptools package; install it and re-run
yum install python-setuptools
##
#  Re-run ceph-deploy new after installing python-setuptools:
[root@ec-k8s-n1 my-cluster]# ceph-deploy new ec-k8s-n1 ec-k8s-n3
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy new ec-k8s-n1 ec-k8s-n3
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  func                          : <function new at 0x7fc4f0a1bd70>
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fc4f01a46c8>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  ssh_copykey                   : True
[ceph_deploy.cli][INFO  ]  mon                           : ['ec-k8s-n1', 'ec-k8s-n3']
[ceph_deploy.cli][INFO  ]  public_network                : None
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  cluster_network               : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  fsid                          : None
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds
[ec-k8s-n1][DEBUG ] connected to host: ec-k8s-n1 
[ec-k8s-n1][DEBUG ] detect platform information from remote host
[ec-k8s-n1][DEBUG ] detect machine type
[ec-k8s-n1][DEBUG ] find the location of an executable
[ec-k8s-n1][INFO  ] Running command: /usr/sbin/ip link show
[ec-k8s-n1][INFO  ] Running command: /usr/sbin/ip addr show
[ec-k8s-n1][DEBUG ] IP addresses found: [u'10.107.252.228', u'172.16.0.96', u'10.96.0.1', u'172.17.0.1', u'172.16.0.99', u'172.16.0.61', u'10.99.220.21', u'192.168.231.128', u'10.96.0.10', u'10.0.2.15', u'172.16.0.200', u'10.110.81.157', u'10.108.137.69']
[ceph_deploy.new][DEBUG ] Resolving host ec-k8s-n1
[ceph_deploy.new][DEBUG ] Monitor ec-k8s-n1 at 172.16.0.61
[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds
[ec-k8s-n3][DEBUG ] connected to host: ec-k8s-n1 
[ec-k8s-n3][INFO  ] Running command: ssh -CT -o BatchMode=yes ec-k8s-n3
[ec-k8s-n3][DEBUG ] connected to host: ec-k8s-n3 
[ec-k8s-n3][DEBUG ] detect platform information from remote host
[ec-k8s-n3][DEBUG ] detect machine type
[ec-k8s-n3][DEBUG ] find the location of an executable
[ec-k8s-n3][INFO  ] Running command: /usr/sbin/ip link show
[ec-k8s-n3][INFO  ] Running command: /usr/sbin/ip addr show
[ec-k8s-n3][DEBUG ] IP addresses found: [u'10.107.252.228', u'192.168.136.128', u'172.16.0.96', u'10.96.0.1', u'172.17.0.1', u'172.16.0.63', u'10.0.2.15', u'10.99.220.21', u'10.96.0.10', u'172.16.0.99', u'10.110.81.157', u'10.108.137.69']
[ceph_deploy.new][DEBUG ] Resolving host ec-k8s-n3
[ceph_deploy.new][DEBUG ] Monitor ec-k8s-n3 at 172.16.0.63
[ceph_deploy.new][DEBUG ] Monitor initial members are ['ec-k8s-n1', 'ec-k8s-n3']
[ceph_deploy.new][DEBUG ] Monitor addrs are ['172.16.0.61', '172.16.0.63']
[ceph_deploy.new][DEBUG ] Creating a random mon key...
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...

##
#  Check the output of ceph-deploy in the current directory with ls and cat. You should see a Ceph configuration file, a monitor keyring, and a log file.
[root@ec-k8s-n1 my-cluster]# ll
total 12
-rw-r--r-- 1 root root  198 Nov  4 16:39 ceph.conf
-rw-r--r-- 1 root root 3162 Nov  4 16:39 ceph-deploy-ceph.log
-rw------- 1 root root   73 Nov  4 16:39 ceph.mon.keyring
##
#  Change the default replica count in ceph.conf:
#  osd pool default size = 2
echo "osd_pool_default_size = 2" >> ceph.conf

##
#  Allow pools to be deleted
echo "[mon]" >> ceph.conf
echo "mon_allow_pool_delete = true" >> ceph.conf
##
#  If you have multiple network interfaces, add the public network to the [global] section of ceph.conf:
public network = {ip-address}/{netmask}
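
For example, with the monitor addresses used later in this article (172.16.0.61 and 172.16.0.63), the [global] section of ceph.conf might look like the snippet below. The /24 mask is an assumption; adjust it to your actual subnet, and note the line belongs under [global], not under the [mon] section appended above.

[global]
public network = 172.16.0.0/24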

7. Install Ceph

##
#
[root@ec-k8s-n1 my-cluster]# ceph-deploy install ec-k8s-n1 ec-k8s-n3 ec-k8s-n2
[ceph_deploy][ERROR ] RuntimeError: NoSectionError: No section: 'ceph'

##
#  Fix: rename /etc/yum.repos.d/ceph.repo to /etc/yum.repos.d/ceph-deploy.repo
#
[ec-k8s-n1][WARNIN] No data was received after 300 seconds, disconnecting...
[ec-k8s-n1][INFO  ] Running command: ceph --version
[ec-k8s-n1][ERROR ] Traceback (most recent call last):
[ec-k8s-n1][ERROR ]   File "/usr/lib/python2.7/site-packages/ceph_deploy/lib/vendor/remoto/process.py", line 119, in run
[ec-k8s-n1][ERROR ]     reporting(conn, result, timeout)
[ec-k8s-n1][ERROR ]   File "/usr/lib/python2.7/site-packages/ceph_deploy/lib/vendor/remoto/log.py", line 13, in reporting
[ec-k8s-n1][ERROR ]     received = result.receive(timeout)
[ec-k8s-n1][ERROR ]   File "/usr/lib/python2.7/site-packages/ceph_deploy/lib/vendor/remoto/lib/vendor/execnet/gateway_base.py", line 704, in receive
[ec-k8s-n1][ERROR ]     raise self._getremoteerror() or EOFError()
[ec-k8s-n1][ERROR ] RemoteError: Traceback (most recent call last):
[ec-k8s-n1][ERROR ]   File "<string>", line 1036, in executetask
[ec-k8s-n1][ERROR ]   File "<remote exec>", line 12, in _remote_run
[ec-k8s-n1][ERROR ]   File "/usr/lib64/python2.7/subprocess.py", line 711, in __init__
[ec-k8s-n1][ERROR ]     errread, errwrite)
[ec-k8s-n1][ERROR ]   File "/usr/lib64/python2.7/subprocess.py", line 1327, in _execute_child
[ec-k8s-n1][ERROR ]     raise child_exception
[ec-k8s-n1][ERROR ] OSError: [Errno 2] No such file or directory
[ec-k8s-n1][ERROR ] 
[ec-k8s-n1][ERROR ] 
[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: ceph --version

## The overseas yum mirrors are slow and the install timed out (see the 300-second disconnect above). As a workaround, install the packages locally on each node first, e.g. yum -y install ceph ceph-radosgw
7.1. Deploy the Initial Monitor(s) and Gather the Keys
##
#  After this step, the current directory should contain these keyrings:
#    {cluster-name}.client.admin.keyring
#    {cluster-name}.bootstrap-osd.keyring
#    {cluster-name}.bootstrap-mds.keyring
#    {cluster-name}.bootstrap-rgw.keyring

ceph-deploy mon create-initial

......
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.client.admin.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-mds.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-mgr.keyring
[ceph_deploy.gatherkeys][INFO  ] keyring 'ceph.mon.keyring' already exists
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-osd.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-rgw.keyring
[ceph_deploy.gatherkeys][INFO  ] Destroy temp directory /tmp/tmp0VaJSC


[root@ec-k8s-n1 my-cluster]# ll
total 476
-rw------- 1 root root    113 Nov  4 19:50 ceph.bootstrap-mds.keyring
-rw------- 1 root root    113 Nov  4 19:50 ceph.bootstrap-mgr.keyring
-rw------- 1 root root    113 Nov  4 19:50 ceph.bootstrap-osd.keyring
-rw------- 1 root root    113 Nov  4 19:50 ceph.bootstrap-rgw.keyring
-rw------- 1 root root    151 Nov  4 19:50 ceph.client.admin.keyring
-rw-r--r-- 1 root root    224 Nov  4 16:50 ceph.conf
-rw-r--r-- 1 root root 228584 Nov  4 19:50 ceph-deploy-ceph.log
-rw------- 1 root root     73 Nov  4 16:39 ceph.mon.keyring
7.2. Copy the Configuration File and Admin Key
##
#  Use ceph-deploy to copy the configuration file and admin key to your admin node and your Ceph Nodes so that you can use the ceph CLI without having to specify the monitor address and ceph.client.admin.keyring each time you execute a command.

[root@ec-k8s-n1 my-cluster]# ceph-deploy admin ec-k8s-n1 ec-k8s-n3 ec-k8s-n2

[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy admin ec-k8s-n1 ec-k8s-n3 ec-k8s-n2
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f497beca7e8>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  client                        : ['ec-k8s-n1', 'ec-k8s-n3', 'ec-k8s-n2']
[ceph_deploy.cli][INFO  ]  func                          : <function admin at 0x7f497cbdf1b8>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ec-k8s-n1
[ec-k8s-n1][DEBUG ] connected to host: ec-k8s-n1 
[ec-k8s-n1][DEBUG ] detect platform information from remote host
[ec-k8s-n1][DEBUG ] detect machine type
[ec-k8s-n1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ec-k8s-n3
[ec-k8s-n3][DEBUG ] connected to host: ec-k8s-n3 
[ec-k8s-n3][DEBUG ] detect platform information from remote host
[ec-k8s-n3][DEBUG ] detect machine type
[ec-k8s-n3][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ec-k8s-n2
[ec-k8s-n2][DEBUG ] connected to host: ec-k8s-n2 
[ec-k8s-n2][DEBUG ] detect platform information from remote host
[ec-k8s-n2][DEBUG ] detect machine type
[ec-k8s-n2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
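
##
#  The quick start also makes the admin keyring readable on each node so the
#  ceph CLI works for non-root sessions; a one-line sketch, run on every node:
sudo chmod +r /etc/ceph/ceph.client.admin.keyring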
7.3. Deploy a Manager Daemon
##
#  Deploy a manager daemon. (Required only for luminous+ >= 12.x builds):

[root@ec-k8s-n1 my-cluster]# ceph-deploy mgr create ec-k8s-n1 ec-k8s-n3

[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy mgr create ec-k8s-n1 ec-k8s-n3
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  mgr                           : [('ec-k8s-n1', 'ec-k8s-n1'), ('ec-k8s-n3', 'ec-k8s-n3')]
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7ff53f567a28>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function mgr at 0x7ff53fe490c8>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.mgr][DEBUG ] Deploying mgr, cluster ceph hosts ec-k8s-n1:ec-k8s-n1 ec-k8s-n3:ec-k8s-n3
[ec-k8s-n1][DEBUG ] connected to host: ec-k8s-n1 
[ec-k8s-n1][DEBUG ] detect platform information from remote host
[ec-k8s-n1][DEBUG ] detect machine type
[ceph_deploy.mgr][INFO  ] Distro info: CentOS Linux 7.5.1804 Core
[ceph_deploy.mgr][DEBUG ] remote host will use systemd
[ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to ec-k8s-n1
[ec-k8s-n1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ec-k8s-n1][WARNIN] mgr keyring does not exist yet, creating one
[ec-k8s-n1][DEBUG ] create a keyring file
[ec-k8s-n1][DEBUG ] create path recursively if it doesn't exist
[ec-k8s-n1][INFO  ] Running command: ceph --cluster ceph --name client.bootstrap-mgr --keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring auth get-or-create mgr.ec-k8s-n1 mon allow profile mgr osd allow * mds allow * -o /var/lib/ceph/mgr/ceph-ec-k8s-n1/keyring
[ec-k8s-n1][INFO  ] Running command: systemctl enable ceph-mgr@ec-k8s-n1
[ec-k8s-n1][INFO  ] Running command: systemctl start ceph-mgr@ec-k8s-n1
[ec-k8s-n1][INFO  ] Running command: systemctl enable ceph.target
[ec-k8s-n3][DEBUG ] connected to host: ec-k8s-n3 
[ec-k8s-n3][DEBUG ] detect platform information from remote host
[ec-k8s-n3][DEBUG ] detect machine type
[ceph_deploy.mgr][INFO  ] Distro info: CentOS Linux 7.5.1804 Core
[ceph_deploy.mgr][DEBUG ] remote host will use systemd
[ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to ec-k8s-n3
[ec-k8s-n3][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ec-k8s-n3][WARNIN] mgr keyring does not exist yet, creating one
[ec-k8s-n3][DEBUG ] create a keyring file
[ec-k8s-n3][DEBUG ] create path recursively if it doesn't exist
[ec-k8s-n3][INFO  ] Running command: ceph --cluster ceph --name client.bootstrap-mgr --keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring auth get-or-create mgr.ec-k8s-n3 mon allow profile mgr osd allow * mds allow * -o /var/lib/ceph/mgr/ceph-ec-k8s-n3/keyring
[ec-k8s-n3][INFO  ] Running command: systemctl enable ceph-mgr@ec-k8s-n3
[ec-k8s-n3][INFO  ] Running command: systemctl start ceph-mgr@ec-k8s-n3
[ec-k8s-n3][INFO  ] Running command: systemctl enable ceph.target

7.4. Add OSDs

The prerequisite is spare disk space; use gdisk to create an LVM partition:

[root@ec-k8s-n1 ~]# gdisk /dev/sdb 
GPT fdisk (gdisk) version 0.8.6

Partition table scan:
  MBR: not present
  BSD: not present
  APM: not present
  GPT: not present

Creating new GPT entries.

Command (? for help): p
Disk /dev/sdb: 58720256 sectors, 28.0 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): AE39ECE4-AD30-46FF-A367-543BA0443EDB
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 58720222
Partitions will be aligned on 2048-sector boundaries
Total free space is 58720189 sectors (28.0 GiB)

Number  Start (sector)    End (sector)  Size       Code  Name


Command (? for help): n
Partition number (1-128, default 1): 
First sector (34-58720222, default = 2048) or {+-}size{KMGTP}: 
Last sector (2048-58720222, default = 58720222) or {+-}size{KMGTP}: 
Current type is 'Linux filesystem'
Hex code or GUID (L to show codes, Enter = 8300): 8e00
Changed type of partition to 'Linux LVM'

Command (? for help): w

Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING
PARTITIONS!!

Do you want to proceed? (Y/N): y
OK; writing new GUID partition table (GPT) to /dev/sdb.
The operation has completed successfully.

[root@ec-k8s-n1 ~]# partprobe 

[root@ec-k8s-n1 ~]# lsblk 
NAME               MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                  8:0    0   58G  0 disk 
├─sda1               8:1    0    1G  0 part /boot
└─sda2               8:2    0   57G  0 part 
  ├─centos-root    253:0    0   37G  0 lvm  /
  ├─centos-swap    253:1    0    2G  0 lvm  
  └─centos-home    253:2    0   18G  0 lvm  /home
sdb                  8:16   0   28G  0 disk 
└─sdb1               8:17   0   28G  0 part
##
#  ec-k8s-n1: osd create
#
[root@ec-k8s-n1 my-cluster]# ceph-deploy osd create --data /dev/sdb1 ec-k8s-n1
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy osd create --data /dev/sdb1 ec-k8s-n1
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  bluestore                     : None
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7ff38fb3a7a0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  fs_type                       : xfs
[ceph_deploy.cli][INFO  ]  block_wal                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  journal                       : None
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  host                          : ec-k8s-n1
[ceph_deploy.cli][INFO  ]  filestore                     : None
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0x7ff38fb6e848>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  zap_disk                      : False
[ceph_deploy.cli][INFO  ]  data                          : /dev/sdb1
[ceph_deploy.cli][INFO  ]  block_db                      : None
[ceph_deploy.cli][INFO  ]  dmcrypt                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  dmcrypt_key_dir               : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  debug                         : False
[ceph_deploy.osd][DEBUG ] Creating OSD on cluster ceph with data device /dev/sdb1
[ec-k8s-n1][DEBUG ] connected to host: ec-k8s-n1 
[ec-k8s-n1][DEBUG ] detect platform information from remote host
[ec-k8s-n1][DEBUG ] detect machine type
[ec-k8s-n1][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.5.1804 Core
[ceph_deploy.osd][DEBUG ] Deploying osd to ec-k8s-n1
[ec-k8s-n1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ec-k8s-n1][DEBUG ] find the location of an executable
[ec-k8s-n1][INFO  ] Running command: /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/sdb1
[ec-k8s-n1][DEBUG ] Running command: /bin/ceph-authtool --gen-print-key
[ec-k8s-n1][DEBUG ] Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 7b48bc71-97bb-4b81-911c-32546dd34600
[ec-k8s-n1][DEBUG ] Running command: /usr/sbin/vgcreate --force --yes ceph-3a78f424-2f70-4dca-8ed8-5eba09c82ccb /dev/sdb1
[ec-k8s-n1][DEBUG ]  stdout: Physical volume "/dev/sdb1" successfully created.
[ec-k8s-n1][DEBUG ]  stdout: Volume group "ceph-3a78f424-2f70-4dca-8ed8-5eba09c82ccb" successfully created
[ec-k8s-n1][DEBUG ] Running command: /usr/sbin/lvcreate --yes -l 100%FREE -n osd-block-7b48bc71-97bb-4b81-911c-32546dd34600 ceph-3a78f424-2f70-4dca-8ed8-5eba09c82ccb
[ec-k8s-n1][DEBUG ]  stdout: Wiping xfs signature on /dev/ceph-3a78f424-2f70-4dca-8ed8-5eba09c82ccb/osd-block-7b48bc71-97bb-4b81-911c-32546dd34600.
[ec-k8s-n1][DEBUG ]  stdout: Logical volume "osd-block-7b48bc71-97bb-4b81-911c-32546dd34600" created.
[ec-k8s-n1][DEBUG ] Running command: /bin/ceph-authtool --gen-print-key
[ec-k8s-n1][DEBUG ] Running command: /bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
[ec-k8s-n1][DEBUG ] Running command: /usr/sbin/restorecon /var/lib/ceph/osd/ceph-0
[ec-k8s-n1][DEBUG ] Running command: /bin/chown -h ceph:ceph /dev/ceph-3a78f424-2f70-4dca-8ed8-5eba09c82ccb/osd-block-7b48bc71-97bb-4b81-911c-32546dd34600
[ec-k8s-n1][DEBUG ] Running command: /bin/chown -R ceph:ceph /dev/dm-10
[ec-k8s-n1][DEBUG ] Running command: /bin/ln -s /dev/ceph-3a78f424-2f70-4dca-8ed8-5eba09c82ccb/osd-block-7b48bc71-97bb-4b81-911c-32546dd34600 /var/lib/ceph/osd/ceph-0/block
[ec-k8s-n1][DEBUG ] Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
[ec-k8s-n1][DEBUG ]  stderr: got monmap epoch 1
[ec-k8s-n1][DEBUG ] Running command: /bin/ceph-authtool /var/lib/ceph/osd/ceph-0/keyring --create-keyring --name osd.0 --add-key AQAFrt9bX9vIJhAADn955M6tJL+aFsmfjSkeCw==
[ec-k8s-n1][DEBUG ]  stdout: creating /var/lib/ceph/osd/ceph-0/keyring
[ec-k8s-n1][DEBUG ] added entity osd.0 auth auth(auid = 18446744073709551615 key=AQAFrt9bX9vIJhAADn955M6tJL+aFsmfjSkeCw== with 0 caps)
[ec-k8s-n1][DEBUG ] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
[ec-k8s-n1][DEBUG ] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
[ec-k8s-n1][DEBUG ] Running command: /bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid 7b48bc71-97bb-4b81-911c-32546dd34600 --setuser ceph --setgroup ceph
[ec-k8s-n1][DEBUG ] --> ceph-volume lvm prepare successful for: /dev/sdb1
[ec-k8s-n1][DEBUG ] Running command: /bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-3a78f424-2f70-4dca-8ed8-5eba09c82ccb/osd-block-7b48bc71-97bb-4b81-911c-32546dd34600 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
[ec-k8s-n1][DEBUG ] Running command: /bin/ln -snf /dev/ceph-3a78f424-2f70-4dca-8ed8-5eba09c82ccb/osd-block-7b48bc71-97bb-4b81-911c-32546dd34600 /var/lib/ceph/osd/ceph-0/block
[ec-k8s-n1][DEBUG ] Running command: /bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
[ec-k8s-n1][DEBUG ] Running command: /bin/chown -R ceph:ceph /dev/dm-10
[ec-k8s-n1][DEBUG ] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
[ec-k8s-n1][DEBUG ] Running command: /bin/systemctl enable ceph-volume@lvm-0-7b48bc71-97bb-4b81-911c-32546dd34600
[ec-k8s-n1][DEBUG ]  stderr: Created symlink from /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-0-7b48bc71-97bb-4b81-911c-32546dd34600.service to /usr/lib/systemd/system/ceph-volume@.service.
[ec-k8s-n1][DEBUG ] Running command: /bin/systemctl enable --runtime ceph-osd@0
[ec-k8s-n1][DEBUG ]  stderr: Created symlink from /run/systemd/system/ceph-osd.target.wants/ceph-osd@0.service to /usr/lib/systemd/system/ceph-osd@.service.
[ec-k8s-n1][DEBUG ] Running command: /bin/systemctl start ceph-osd@0
[ec-k8s-n1][DEBUG ] --> ceph-volume lvm activate successful for osd ID: 0
[ec-k8s-n1][DEBUG ] --> ceph-volume lvm create successful for: /dev/sdb1
[ec-k8s-n1][INFO  ] checking OSD status...
[ec-k8s-n1][DEBUG ] find the location of an executable
[ec-k8s-n1][INFO  ] Running command: /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host ec-k8s-n1 is now ready for osd use.

##
#  ec-k8s-n2: osd create
# 
#
[root@ec-k8s-n1 my-cluster]# ceph-deploy osd create --data /dev/sdb1 ec-k8s-n2
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy osd create --data /dev/sdb1 ec-k8s-n2
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  bluestore                     : None
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f8aafe297a0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  fs_type                       : xfs
......

[ec-k8s-n2][DEBUG ]  stderr: Created symlink from /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-1-c2887505-aee5-46d3-94ac-7253250e3d0d.service to /usr/lib/systemd/system/ceph-volume@.service.
[ec-k8s-n2][DEBUG ] Running command: /bin/systemctl enable --runtime ceph-osd@1
[ec-k8s-n2][DEBUG ]  stderr: Created symlink from /run/systemd/system/ceph-osd.target.wants/ceph-osd@1.service to /usr/lib/systemd/system/ceph-osd@.service.
[ec-k8s-n2][DEBUG ] Running command: /bin/systemctl start ceph-osd@1
[ec-k8s-n2][DEBUG ] --> ceph-volume lvm activate successful for osd ID: 1
[ec-k8s-n2][DEBUG ] --> ceph-volume lvm create successful for: /dev/sdb1
[ec-k8s-n2][INFO  ] checking OSD status...
[ec-k8s-n2][DEBUG ] find the location of an executable
[ec-k8s-n2][INFO  ] Running command: /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host ec-k8s-n2 is now ready for osd use.

##
#  ec-k8s-n3: osd create
#
[root@ec-k8s-n1 my-cluster]# ceph-deploy osd create --data /dev/sdb1 ec-k8s-n3
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy osd create --data /dev/sdb1 ec-k8s-n3

......

[ec-k8s-n3][DEBUG ] --> ceph-volume lvm activate successful for osd ID: 2
[ec-k8s-n3][DEBUG ] --> ceph-volume lvm create successful for: /dev/sdb1
[ec-k8s-n3][INFO  ] checking OSD status...
[ec-k8s-n3][DEBUG ] find the location of an executable
[ec-k8s-n3][INFO  ] Running command: /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host ec-k8s-n3 is now ready for osd use.
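
##
#  All three OSDs should now be up and in; a quick check from the admin node:
sudo ceph osd tree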

7.5. Add a Metadata Server
##
#  To use CephFS, you need at least one metadata server. Execute the following to create one:

[root@ec-k8s-n1 my-cluster]# ceph-deploy mds create ec-k8s-n1 ec-k8s-n3
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy mds create ec-k8s-n1 ec-k8s-n3
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fec98d6f5f0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function mds at 0x7fec98dace60>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  mds                           : [('ec-k8s-n1', 'ec-k8s-n1'), ('ec-k8s-n3', 'ec-k8s-n3')]
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.mds][DEBUG ] Deploying mds, cluster ceph hosts ec-k8s-n1:ec-k8s-n1 ec-k8s-n3:ec-k8s-n3
[ec-k8s-n1][DEBUG ] connected to host: ec-k8s-n1 
[ec-k8s-n1][DEBUG ] detect platform information from remote host
[ec-k8s-n1][DEBUG ] detect machine type
[ceph_deploy.mds][INFO  ] Distro info: CentOS Linux 7.5.1804 Core
[ceph_deploy.mds][DEBUG ] remote host will use systemd
[ceph_deploy.mds][DEBUG ] deploying mds bootstrap to ec-k8s-n1
[ec-k8s-n1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ec-k8s-n1][WARNIN] mds keyring does not exist yet, creating one
[ec-k8s-n1][DEBUG ] create a keyring file
[ec-k8s-n1][DEBUG ] create path if it doesn't exist
[ec-k8s-n1][INFO  ] Running command: ceph --cluster ceph --name client.bootstrap-mds --keyring /var/lib/ceph/bootstrap-mds/ceph.keyring auth get-or-create mds.ec-k8s-n1 osd allow rwx mds allow mon allow profile mds -o /var/lib/ceph/mds/ceph-ec-k8s-n1/keyring
[ec-k8s-n1][INFO  ] Running command: systemctl enable ceph-mds@ec-k8s-n1
[ec-k8s-n1][INFO  ] Running command: systemctl start ceph-mds@ec-k8s-n1
[ec-k8s-n1][INFO  ] Running command: systemctl enable ceph.target
[ec-k8s-n3][DEBUG ] connected to host: ec-k8s-n3 
[ec-k8s-n3][DEBUG ] detect platform information from remote host
[ec-k8s-n3][DEBUG ] detect machine type
[ceph_deploy.mds][INFO  ] Distro info: CentOS Linux 7.5.1804 Core
[ceph_deploy.mds][DEBUG ] remote host will use systemd
[ceph_deploy.mds][DEBUG ] deploying mds bootstrap to ec-k8s-n3
[ec-k8s-n3][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ec-k8s-n3][WARNIN] mds keyring does not exist yet, creating one
[ec-k8s-n3][DEBUG ] create a keyring file
[ec-k8s-n3][DEBUG ] create path if it doesn't exist
[ec-k8s-n3][INFO  ] Running command: ceph --cluster ceph --name client.bootstrap-mds --keyring /var/lib/ceph/bootstrap-mds/ceph.keyring auth get-or-create mds.ec-k8s-n3 osd allow rwx mds allow mon allow profile mds -o /var/lib/ceph/mds/ceph-ec-k8s-n3/keyring
[ec-k8s-n3][INFO  ] Running command: systemctl enable ceph-mds@ec-k8s-n3
[ec-k8s-n3][INFO  ] Running command: systemctl start ceph-mds@ec-k8s-n3
[ec-k8s-n3][INFO  ] Running command: systemctl enable ceph.target
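
##
#  The article stops after creating the MDS daemons. To get an actual CephFS
#  filesystem you would additionally create a data pool and a metadata pool and
#  tie them together. A minimal sketch (pool names and PG counts are only
#  illustrative):
sudo ceph osd pool create cephfs_data 64
sudo ceph osd pool create cephfs_metadata 64
sudo ceph fs new cephfs cephfs_metadata cephfs_data
sudo ceph fs ls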

##
#  Check your cluster's health (on the admin node).

sudo ceph health

#  Your cluster should report HEALTH_OK. You can view a more complete cluster status with:

sudo ceph -s
##
#
[root@ec-k8s-n1 my-cluster]# ll /var/run/ceph/
总用量 0
srwxr-xr-x 1 ceph ceph 0 11月  5 10:09 ceph-mds.ec-k8s-n1.asok
srwxr-xr-x 1 ceph ceph 0 11月  5 10:09 ceph-mgr.ec-k8s-n1.asok
srwxr-xr-x 1 ceph ceph 0 11月  5 10:09 ceph-mon.ec-k8s-n1.asok
srwxr-xr-x 1 ceph ceph 0 11月  5 10:42 ceph-osd.0.asok

[root@ec-k8s-n2 ~]# ll /var/run/ceph/
总用量 0
srwxr-xr-x 1 ceph ceph 0 11月  5 11:16 ceph-osd.1.asok

[root@ec-k8s-n3 ~]# ll /var/run/ceph/
总用量 0
srwxr-xr-x 1 ceph ceph 0 11月  5 09:12 ceph-mds.ec-k8s-n3.asok
srwxr-xr-x 1 ceph ceph 0 11月  5 09:12 ceph-mgr.ec-k8s-n3.asok
srwxr-xr-x 1 ceph ceph 0 11月  5 09:12 ceph-mon.ec-k8s-n3.asok
srwxr-xr-x 1 ceph ceph 0 11月  5 11:37 ceph-osd.2.asok

8. Add an RGW Instance

##
#  To use the Ceph Object Gateway component of Ceph, you must deploy an instance of RGW. 
#  Execute the following to create a new instance of RGW:
#
[root@ec-k8s-n1 my-cluster]# ceph-deploy rgw create ec-k8s-n1
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy rgw create ec-k8s-n1
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  rgw                           : [('ec-k8s-n1', 'rgw.ec-k8s-n1')]
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f4ca0afae18>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function rgw at 0x7f4ca17c9f50>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.rgw][DEBUG ] Deploying rgw, cluster ceph hosts ec-k8s-n1:rgw.ec-k8s-n1
[ec-k8s-n1][DEBUG ] connected to host: ec-k8s-n1 
[ec-k8s-n1][DEBUG ] detect platform information from remote host
[ec-k8s-n1][DEBUG ] detect machine type
[ceph_deploy.rgw][INFO  ] Distro info: CentOS Linux 7.5.1804 Core
[ceph_deploy.rgw][DEBUG ] remote host will use systemd
[ceph_deploy.rgw][DEBUG ] deploying rgw bootstrap to ec-k8s-n1
[ec-k8s-n1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ec-k8s-n1][WARNIN] rgw keyring does not exist yet, creating one
[ec-k8s-n1][DEBUG ] create a keyring file
[ec-k8s-n1][DEBUG ] create path recursively if it doesn't exist
[ec-k8s-n1][INFO  ] Running command: ceph --cluster ceph --name client.bootstrap-rgw --keyring /var/lib/ceph/bootstrap-rgw/ceph.keyring auth get-or-create client.rgw.ec-k8s-n1 osd allow rwx mon allow rw -o /var/lib/ceph/radosgw/ceph-rgw.ec-k8s-n1/keyring
[ec-k8s-n1][INFO  ] Running command: systemctl enable ceph-radosgw@rgw.ec-k8s-n1
[ec-k8s-n1][WARNIN] Created symlink from /etc/systemd/system/ceph-radosgw.target.wants/ceph-radosgw@rgw.ec-k8s-n1.service to /usr/lib/systemd/system/ceph-radosgw@.service.
[ec-k8s-n1][INFO  ] Running command: systemctl start ceph-radosgw@rgw.ec-k8s-n1
[ec-k8s-n1][INFO  ] Running command: systemctl enable ceph.target
[ceph_deploy.rgw][INFO  ] The Ceph Object Gateway (RGW) is now running on host ec-k8s-n1 and default port 7480
##
#
#  By default, the RGW instance will listen on port 7480. This can be changed by editing ceph.conf on the node running the RGW as follows:

[client]
rgw frontends = civetweb port=80


# To use an IPv6 address, use:

[client]
rgw frontends = civetweb port=[::]:80
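
# After editing ceph.conf, restart the RGW service on that node so the new port
# takes effect; for the instance created above that would be (a sketch):

sudo systemctl restart ceph-radosgw@rgw.ec-k8s-n1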

9. Install Ceph-Client

##
#  The ceph-client node should not be on the same physical host as the Ceph storage cluster nodes (unless you are using VMs).

[root@ec-k8s-n1 my-cluster]# yum install -y redhat-lsb

[root@ec-k8s-n1 ~]# lsb_release -a
LSB Version:	:core-4.1-amd64:core-4.1-noarch:cxx-4.1-amd64:cxx-4.1-noarch:desktop-4.1-amd64:desktop-4.1-noarch:languages-4.1-amd64:languages-4.1-noarch:printing-4.1-amd64:printing-4.1-noarch
Distributor ID:	CentOS
Description:	CentOS Linux release 7.5.1804 (Core) 
Release:	7.5.1804
Codename:	Core
[root@ec-k8s-n1 ~]# uname -r
3.10.0-862.14.4.el7.x86_64

[root@ec-k8s-n1 my-cluster]# ceph-deploy install ceph-client
.....
 [ceph_deploy][ERROR ] RuntimeError: connecting to host: ceph-client resulted in errors: HostNotFound ceph-client

##
#  Solution: add an /etc/hosts entry for the client
#
[root@ec-k8s-n1 my-cluster]# vim /etc/hosts
# client-ip
172.16.0.51  ceph-client

References:

http://docs.ceph.com/docs/master/start/quick-start-preflight/
http://docs.ceph.org.cn/start/quick-start-preflight/
https://blog.csdn.net/uxiAD7442KMy1X86DtM3/article/details/81059215