#### adding following lines to the end of /etc/hosts
#### change IPs according to your servers
10.0.1.10 ceph0 admin
10.0.1.11 ceph1
10.0.1.12 ceph2
10.0.1.13 ceph3
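A quick, optional sanity check: confirm from the admin node that every name in the table above resolves before continuing (the host names are the ones assumed in /etc/hosts):

#### each name should print the IP configured in /etc/hosts
[ceph@ceph0 ~]$ getent hosts ceph0 ceph1 ceph2 ceph3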
Change the hostname (optional)
Set each node's hostname to its corresponding name in /etc/hosts:
echo ceph0 > /etc/hostname
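On systemd-based distributions such as CentOS 7, hostnamectl achieves the same result and also updates the running (transient) hostname immediately; a minimal alternative sketch:

#### run on each node, with that node's own name
sudo hostnamectl set-hostname ceph0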
Passwordless SSH login
[ceph@ceph0 ~]$ ssh-keygen -t rsa -b 2048
[ceph@ceph0 ~]$ cat .ssh/id_rsa.pub >> .ssh/authorized_keys
[ceph@ceph0 ~]$ for i in ceph1 ceph2 ceph3; do ssh-copy-id -i ~/.ssh/id_rsa.pub $i; done
[ceph@ceph0 ~]$ for i in ceph1 ceph2 ceph3; do scp ~/.ssh/id_rsa $i:~/.ssh/; done
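If the deploy user on the remote nodes differs from the local login, an entry in ~/.ssh/config lets both plain ssh and ceph-deploy pick the right user without typing it each time; a small sketch, assuming the remote user is ceph:

#### ~/.ssh/config on the admin node
Host ceph1 ceph2 ceph3
    User ceph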
Install and configure the NTP service
#### on ceph0
[ceph@ceph0 ~]$ for host in ceph{0..3}; do
> ssh ${host} "sudo yum install -y ntp ntpdate"
> ssh ${host} "sudo systemctl enable --now ntpd"
> done
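Clock skew between monitors triggers health warnings, so it is worth confirming that every node is actually syncing; a quick check against the ntpd service installed above:

#### each node should list reachable upstream time servers
[ceph@ceph0 ~]$ for host in ceph{0..3}; do ssh ${host} "ntpq -p"; done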
Disable or configure the firewall
#### disable firewalld
[ceph@ceph0 ~]$ for host in ceph{0..3}; do
> ssh ${host} "sudo systemctl disable firewalld"
> ssh ${host} "sudo systemctl stop firewalld"
> done
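An optional follow-up check that the previous step took effect everywhere:

#### expect "inactive" (or "unknown" if the unit is not installed) from each node
[ceph@ceph0 ~]$ for host in ceph{0..3}; do ssh ${host} "systemctl is-active firewalld"; done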
If you do not disable the firewall, you need to open the ports used by each Ceph service on all nodes.
#### firewalld
[ceph@ceph0 ~]$ for host in ceph{1..3}; do
> ssh ${host} "sudo firewall-cmd --permanent --add-port=3300/tcp --zone=public"
> ssh ${host} "sudo firewall-cmd --permanent --add-port=6789/tcp --zone=public"
> ssh ${host} "sudo firewall-cmd --permanent --add-port=6800-7300/tcp --zone=public"
> #### or simply use the predefined firewalld services:
> ####   firewall-cmd --permanent --add-service=ceph --zone=public
> ####   firewall-cmd --permanent --add-service=ceph-mon --zone=public
> ssh ${host} "sudo firewall-cmd --reload"
> done
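After the reload, the opened ports can be verified on any node; a hedged example checking ceph1 only:

[ceph@ceph0 ~]$ ssh ceph1 "sudo firewall-cmd --list-ports --zone=public"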
A Ceph cluster has a default cluster name, ceph. If you want to run more than one Ceph cluster, you can specify a different name at deployment time with --cluster {name}. If multiple clusters run on the same hardware, you also need to change the default port settings to avoid conflicts.
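For example, a second cluster could be created under its own name (the name "backup" below is purely illustrative); every later ceph-deploy and ceph command then needs the same --cluster flag:

#### hypothetical second cluster named "backup"
[ceph@ceph0 ~]$ ceph-deploy --cluster backup new ceph1 ceph2 ceph3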
Deploy monitors
Create three monitors:
#### hostnames ceph1/2/3 must match the actual `hostname -s` on each remote host
[ceph@ceph0 ~]$ ceph-deploy new ceph1 ceph2 ceph3
#### ceph-deploy new writes a ceph.conf and a ceph.mon.keyring to the current directory
[ceph@ceph0 ~]$ ceph-deploy mon create ceph1 ceph2 ceph3
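Between ceph-deploy new and ceph-deploy mon create, it is common to add the cluster's public network to the freshly generated ceph.conf so the monitors bind to the intended interface; a minimal sketch, assuming the 10.0.1.0/24 subnet used in /etc/hosts above:

#### appended to ./ceph.conf under the [global] section
public network = 10.0.1.0/24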
Gather and distribute keys
#### monitors keys
#### wait for a while after "ceph-deploy mon create"
[ceph@ceph0 ~]$ ceph-deploy gatherkeys ceph1 ceph2 ceph3
#### distribute keys to admin nodes in the cluster
[ceph@ceph0 ~]$ ceph-deploy admin ceph0 ceph1 ceph2 ceph3
#### change permissions
for i in ceph{1..3}; do
ssh ${i} "sudo chown -R ceph:ceph /etc/ceph"
ssh ${i} "sudo chown -R ceph:ceph /var/lib/ceph"
ssh ${i} "sudo chown -R ceph:ceph /var/log/ceph"
done
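At this point the monitors should have formed a quorum; a quick check from any node that received the admin keyring:

#### cluster status; "mon: 3 daemons, quorum ceph1,ceph2,ceph3" is expected
[ceph@ceph0 ~]$ ssh ceph1 "sudo ceph -s"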
Deploy OSDs

The options accepted by ceph-deploy osd create (from its --help output):

optional arguments:
  -h, --help            show this help message and exit
  --data DATA           The OSD data logical volume (vg/lv) or absolute path to device
  --journal JOURNAL     Logical Volume (vg/lv) or path to GPT partition
  --zap-disk            DEPRECATED - cannot zap when creating an OSD
  --fs-type FS_TYPE     filesystem to use to format DEVICE (xfs, btrfs)
  --dmcrypt             use dm-crypt on DEVICE
  --dmcrypt-key-dir KEYDIR
                        directory where dm-crypt keys are stored
  --filestore           filestore objectstore
  --bluestore           bluestore objectstore
  --block-db BLOCK_DB   bluestore block.db path
  --block-wal BLOCK_WAL bluestore block.wal path
  --debug               Enable debug mode on remote ceph-volume calls
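For example, with BlueStore (the default object store) an OSD can be created on each node from a raw, unused disk; a hedged sketch, assuming /dev/sdb is that disk on every host:

#### one bluestore OSD per node, using /dev/sdb as the data device (assumption)
[ceph@ceph0 ~]$ for host in ceph{1..3}; do
> ceph-deploy osd create --data /dev/sdb ${host}
> done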
Remove OSDs

for id in {osds_to_be_removed}; do
  ## mark osd out
  ceph osd out ${id}
  ## remove osd from crush map
  ceph osd crush remove osd.${id}
  ## delete osd authentication key
  ceph auth del osd.${id}
  ## remove osd finally
  ceph osd rm ${id}
done
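Note that ceph osd rm refuses to remove an OSD whose daemon is still running, so stop the daemon on the host that owns it before the final step, and check the CRUSH tree afterwards; a sketch assuming systemd-managed OSDs:

#### on the node hosting the OSD; units are named ceph-osd@<id>
sudo systemctl stop ceph-osd@${id}
#### back on the admin node: the removed OSDs should no longer appear
ceph osd tree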