Cluster size: 1 monitor, 2 OSD machines
monitor: ceph01  172.16.x.x (second IP 10.1.x.x/24)
osd1:    ceph02  172.16.x.x (second IP 10.1.x.x/24)
osd2:    ceph03  172.16.x.x (second IP 10.1.x.x/24)
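So the hostnames above resolve on every node, each machine's /etc/hosts can carry entries along these lines (the x.x octets are elided in the layout above and are left as-is here):

```
# /etc/hosts on every node (public addresses, x.x as in the layout above)
172.16.x.x  ceph01
172.16.x.x  ceph02
172.16.x.x  ceph03
```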
1. Preparation
Update the system and install ceph-deploy on all nodes:
sudo yum update && sudo yum install ceph-deploy
Install NTP and SSH on all nodes:
sudo yum install ntp ntpdate ntp-doc
sudo yum install openssh-server
The Ceph admin node must be able to log in to every node as a regular (non-root) user, so the ceph user needs passwordless sudo.
1> Create the ceph user on every node
ansible all -m shell -a 'useradd ceph && echo "ceph" | passwd --stdin ceph'
2> Give the ceph user passwordless sudo on every node
echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph
sudo chmod 0440 /etc/sudoers.d/ceph
3> Generate an SSH key for the ceph user and push it to every node for passwordless login
ssh-keygen
ssh-copy-id ceph@ceph01
ssh-copy-id ceph@ceph02
ssh-copy-id ceph@ceph03
4> Create the ceph user's SSH config file with permissions 600, so that ssh ceph01 / ceph02 / ceph03 works directly
cd /home/ceph/.ssh && touch config && chmod 600 ./config
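A minimal ~/.ssh/config for that file, so that a plain "ssh ceph01" logs in as the ceph user (this mirrors what the ceph-deploy quick-start docs suggest; adjust hostnames to your own):

```
Host ceph01
    Hostname ceph01
    User ceph
Host ceph02
    Hostname ceph02
    User ceph
Host ceph03
    Hostname ceph03
    User ceph
```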
5> Turn off the firewall and SELinux on all nodes
ansible all -m shell -a "systemctl disable firewalld && systemctl stop firewalld && setenforce 0"
6> Install the yum priorities plugin on CentOS
sudo yum install yum-plugin-priorities
2. Ceph cluster installation
1> Switch to the ceph user, then create and enter the working directory my-cluster
su - ceph
mkdir my-cluster && cd my-cluster
2> Create the cluster. This command generates three files in the current directory: the Ceph config file, a monitor keyring, and a log file
ceph-deploy new ceph01
3> Edit ceph.conf to change the default replica count and bind the networks
osd pool default size = 2
public network = 172.16.x.0/24 (client-facing access network)
cluster network = 10.1.x.0/24 (internal replication/heartbeat network)
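Put together, the [global] section of ceph.conf in my-cluster would look roughly like this; the lines generated by "ceph-deploy new ceph01" (fsid, mon_initial_members, mon_host, auth settings) stay as-is and are not shown:

```
[global]
# ...generated lines kept unchanged; append the three settings below...
osd pool default size = 2
public network = 172.16.x.0/24
cluster network = 10.1.x.0/24
```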
4> Install Ceph on all nodes
ceph-deploy install ceph01 ceph02 ceph03
5> Deploy the initial monitor(s) and gather all the keys
ceph-deploy mon create-initial
ceph-deploy gatherkeys ceph01
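Once mon create-initial and gatherkeys succeed, the working directory should contain keyring files along these lines (names as produced by the ceph-deploy tooling of this era; the rgw bootstrap keyring only appears on newer releases):

```
ceph.client.admin.keyring
ceph.bootstrap-osd.keyring
ceph.bootstrap-mds.keyring
ceph.bootstrap-rgw.keyring
```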
6> List the disks on each node, then zap (wipe and initialize) the OSD disks
ceph-deploy disk list ceph02
ceph-deploy disk zap ceph02:sdb
7> Activate the OSDs
ceph-disk activate-all
Or run the following on the deploy node; do this for every OSD node so that each one joins the cluster properly
ceph-deploy osd prepare ceph02:/dev/sdb:/dev/sdc
ceph-deploy osd activate ceph02:/dev/sdb1:/dev/sdc1
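Since every OSD node needs the same prepare/activate pair, the per-node commands can be generated with a small helper script. gen_osd_cmds is a hypothetical name, and the node and device lists are taken from the layout above (sdb for data, sdc for the journal):

```shell
#!/bin/sh
# Hypothetical helper: print the ceph-deploy command pair for one OSD node.
# $1 = node name, $2 = data device, $3 = journal device
gen_osd_cmds() {
    node=$1; data=$2; journal=$3
    echo "ceph-deploy osd prepare ${node}:${data}:${journal}"
    # after prepare, the first partition of each device holds the OSD data/journal
    echo "ceph-deploy osd activate ${node}:${data}1:${journal}1"
}

# emit the commands for both OSD nodes
for node in ceph02 ceph03; do
    gen_osd_cmds "$node" /dev/sdb /dev/sdc
done
```

Piping the output through a review step (or into ssh) keeps the two-command-per-node pattern consistent as nodes are added.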
8> Check the cluster status
ceph -s
Original article by carmelaweatherly. If you repost it, please credit the source: https://blog.ytso.com/183276.html