Adding OSDs to a Ceph Cluster
Rather than a gentleman who is cautious in speech yet quick in action, I prefer one who speaks well and acts quickly.
Contents
Foreword
I. Prepare clean disks first
II. Modify cluster.yaml
1. Back up the original yaml
2. Edit it
3. Apply the new yaml
4. Watch the OSD pods
5. Check the cluster status
Summary
Foreword
Notes on adding OSDs to Ceph. After new OSDs join, the cluster rebalances data onto them, so if free space is already very low the extra capacity will not show up immediately. Also, the available space reported by ceph df is calculated from the fullest OSD, so it pays to monitor utilisation regularly and keep data placement balanced.
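The "calculated from the fullest OSD" point can be illustrated with a rough sketch. The numbers are made up, and this is not Ceph's exact MAX AVAIL formula (which also accounts for CRUSH weights and replication), just the gist of it:

```shell
# Made-up per-OSD usage: name capacity_tb used_tb
cat <<'EOF' > /tmp/osd_usage.txt
osd.0 10 4
osd.1 10 5
osd.2 10 9
EOF
# Usable space is roughly total capacity scaled by the fullest OSD's free
# ratio: one imbalanced OSD (osd.2 here) caps what the whole pool can take.
awk '{free = ($2 - $3) / $2; if (NR == 1 || free < min) min = free; total += $2}
     END {printf "total=%dTB min_free_ratio=%.2f approx_usable=%.1fTB\n",
          total, min, total * min}' /tmp/osd_usage.txt
```

Even though this toy cluster is only 60% full on average, the 90%-full osd.2 drags the projected usable space down to about 3 TB, which is why rebalancing before (and after) adding capacity matters.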
I. Prepare clean disks first
If a disk is not clean, see the previous post for how to wipe it first.
On every node, confirm the by-id/wwn identifier of each disk to be added:
ls -l /dev/disk/by-id/wwn-* | grep sdk
ls -l /dev/disk/by-id/wwn-* | grep sdj

Pitfall: always reference disks by their by-id/wwn path here, never by sdX directly. Kernel names like sdX can change the next time disks are added, and once the mapping gets shuffled it is extremely painful to match everything up again.
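To see the whole mapping in one pass instead of grepping disk by disk, a small helper can list every wwn link next to the kernel device it currently points to. The function is hypothetical (not from the original post), with the directory as a parameter so it can also be exercised against a fake tree:

```shell
# List each persistent wwn-* name with the kernel device it resolves to today,
# so the sdX comments in cluster.yaml can be double-checked in one go.
list_wwn_map() {
  local dir="${1:-/dev/disk/by-id}"   # default to the real udev directory
  local link
  for link in "$dir"/wwn-*; do
    [ -e "$link" ] || continue        # nothing matched the glob
    printf '%s -> %s\n' "${link##*/}" "$(readlink -f "$link")"
  done
}
list_wwn_map    # e.g. wwn-0x5000c500cbe839d1 -> /dev/sdj
```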
II. Modify cluster.yaml
1. Back up the original yaml
Locate your own cluster.yaml; the path below is where I moved mine:
cp /home/ubuntu/rook/2-cluster/cluster.yaml /home/ubuntu/rook/2-cluster/cluster.yaml.$(date +%Y.%m.%d_%H:%M).bak

2. Edit it
Simply add the new devices under the name entry of the corresponding node:
################### OSD disk declarations #################
  storage:
    useAllNodes: false
    useAllDevices: false
    config:
      onlyApplyOSDPlacement: "true" # only use the nodes and devices listed here
      forceFormat: "true"
    nodes:
      - name: bj10-10-18-102
        devices:
          - name: /dev/disk/by-id/wwn-0x5000c500a6b81d6b # sdc
          - name: /dev/disk/by-id/wwn-0x5000c500a6b7af77 # sdd
          - name: /dev/disk/by-id/wwn-0x5000c500a6b842d3 # sde
          - name: /dev/disk/by-id/wwn-0x5000c50085fe1a4f # sdh
          - name: /dev/disk/by-id/wwn-0x5000c500a65ebef7 # sdi
          - name: /dev/disk/by-id/wwn-0x5000c500a6b7d013 # sdf, hdd
          - name: /dev/disk/by-id/wwn-0x5000c500cbe839d1 # sdj, newly added
        config:
          osdsPerDevice: "1"
          deviceClass: hdd_new
          metadataDevice: /dev/disk/by-id/ata-Lenovo_SSD_SL700_2TB_LSL702T0B56LV00544 # sdg, ssd (DB/WAL)
      - name: sh10-10-18-129
        devices:
          - name: /dev/disk/by-id/wwn-0x5000c500a6b3a10b # sdd
          - name: /dev/disk/by-id/wwn-0x5000c500a6b2c53b # sdg
          - name: /dev/disk/by-id/wwn-0x5000c500a6b3c4e7 # sde
          - name: /dev/disk/by-id/wwn-0x5000c500a6b853c7 # sdf
          - name: /dev/disk/by-id/wwn-0x5000c500a6b85e2f # sdh
          - name: /dev/disk/by-id/wwn-0x5000c50085fe1e87 # sdi, hdd
          - name: /dev/disk/by-id/wwn-0x5000c500c08391c2 # sdk, newly added
        config:
          osdsPerDevice: "1"
          deviceClass: hdd_new
          metadataDevice: /dev/disk/by-id/ata-Lenovo_SSD_SL700_2TB_LSL702T0B56LV00537 # sdb, ssd (DB/WAL)
      - name: sh10-10-18-130
        devices:
          - name: /dev/disk/by-id/wwn-0x5000c500a6b3a7fb # was sdf
          - name: /dev/disk/by-id/wwn-0x5000cca252383038 # was sdg
          - name: /dev/disk/by-id/wwn-0x5000c500a6b2a4f3 # was sdh
          - name: /dev/disk/by-id/wwn-0x5000039ad828d665 # sdc
          - name: /dev/disk/by-id/wwn-0x5000c500a6b82aa7 # sdj
          - name: /dev/disk/by-id/wwn-0x5000c500a6b82d2f # was sde, hdd
          - name: /dev/disk/by-id/wwn-0x5000c500c0e26cd2 # sdk, newly added
        config:
          osdsPerDevice: "1"
          deviceClass: hdd_new
          metadataDevice: /dev/disk/by-id/ata-Lenovo_SSD_SL700_2TB_LSL702T0B56LV00535 # was sdd, ssd (DB/WAL)

3. Apply the new yaml
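Before applying, it can be worth previewing what will actually change. A sketch, assuming kubectl can reach the cluster (neither command modifies anything):

```shell
# Show the server-side diff between the live CephCluster and the edited file;
# a wrong indent or stray device entry stands out here before it hits the cluster.
kubectl diff -f ./cluster.yaml
# Or just validate the manifest against the API server without applying:
kubectl apply --dry-run=server -f ./cluster.yaml
```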
kubectl apply -f ./cluster.yaml

4. Watch the OSD pods
Be patient here: the new pods do not appear within seconds. It usually takes one to two minutes, sometimes longer, before the new OSD pods show up.
kubectl -n rook-ceph get pods -l app=rook-ceph-osd

5. Check the cluster status
This has to be done from inside the toolbox pod:
# first exec into the rook-ceph toolbox pod
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- bash
# then inspect the cluster
ceph osd df tree
ceph -s

Summary
Adding OSDs is fairly straightforward, and updating the yaml does not disturb the entries that were already there.
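Following up the foreword's advice to keep an eye on balance: a rough way to spot hot OSDs is to compare each one's utilisation against the cluster average. A sketch over made-up numbers standing in for two columns (osd id and %USE) extracted from ceph osd df output:

```shell
# Made-up sample: osd_id percent_used
cat <<'EOF' > /tmp/osd_use.txt
0 45.2
1 47.8
2 71.3
EOF
# Flag any OSD more than 10 points above the cluster average - those are the
# disks that will fill up first and cap MAX AVAIL for the whole pool.
awk '{id[NR] = $1; use[NR] = $2; sum += $2}
     END {avg = sum / NR;
          for (i = 1; i <= NR; i++)
            if (use[i] > avg + 10)
              printf "osd.%s is hot: %.1f%% vs avg %.1f%%\n", id[i], use[i], avg}' /tmp/osd_use.txt
```

On a live cluster the built-in balancer automates this: from the toolbox pod, ceph balancer status shows its state and ceph balancer on enables it (upstream Ceph commands; upmap mode requires all clients to be Luminous or newer).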
