[root@mgmt ~]# ceph-deploy osd create --data /dev/vdb osd0
...
[osd0][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy.osd][ERROR ] Failed to execute command: /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/vdb
[ceph_deploy][ERROR ] GenericError: Failed to create 1 OSDs
When this error occurs, it is because the disk still holds data from a previous use (old partition, filesystem, or LVM signatures). Wipe the disk first, then retry the OSD creation.
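Before wiping anything, it is worth checking read-only what is actually left on the device. A minimal sketch, using a file-backed image as a safe stand-in for `/dev/vdb` (the `/tmp/fake-osd-inspect.img` path is only for illustration; on the real OSD node you would run `wipefs /dev/vdb` and `lsblk -f /dev/vdb`):

```shell
# Stand-in for /dev/vdb: a small image file with a leftover signature on it.
IMG=/tmp/fake-osd-inspect.img
truncate -s 16M "$IMG"
mkswap "$IMG" >/dev/null 2>&1    # simulate stale on-disk data

# wipefs WITHOUT -a is read-only: it only lists detected signatures.
# On the real node: wipefs /dev/vdb
wipefs "$IMG"
```

Any output here means the device is not clean, which is exactly the state that makes `ceph-volume lvm create` fail.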
[Solution]
[root@osd0 ~]# parted -s /dev/vdb mklabel gpt mkpart primary xfs 0% 100%
[root@osd0 ~]# mkfs.xfs /dev/vdb -f
meta-data=/dev/vdb               isize=512    agcount=4, agsize=1966080 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=7864320, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=3840, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@osd0 ~]# reboot
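Instead of re-labelling and re-formatting by hand, the same cleanup can be done by erasing all on-disk signatures with `wipefs`, or with Ceph's own zap commands. A minimal sketch against a file-backed image standing in for `/dev/vdb` (the `/tmp/fake-osd-wipe.img` path is hypothetical; on the real node these commands are destructive):

```shell
# Stand-in for /dev/vdb; on the real OSD node: wipefs -a /dev/vdb
IMG=/tmp/fake-osd-wipe.img
truncate -s 16M "$IMG"
mkswap "$IMG" >/dev/null 2>&1    # simulate the old data that broke ceph-volume

# Erase every filesystem/partition signature so ceph-volume
# sees a clean block device.
wipefs -a "$IMG" >/dev/null

# Ceph-native equivalents on the real node (both destructive):
#   ceph-volume lvm zap --destroy /dev/vdb
#   ceph-deploy disk zap osd0 /dev/vdb
wipefs "$IMG"    # prints nothing once the device is clean
```

After the device is clean, `ceph-deploy osd create --data /dev/vdb osd0` should succeed without the reboot being strictly necessary.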