When accessing the block storage from the client side, the following error message appears:
[root@localhost ~]# rbd --mon_host xxx.xxx.xxx.xxx,xxx.xxx.xxx.xxx,xxx.xxx.xxx.xxx --conf /dev/null --keyring /dev/null --name client.client.keyring --key xxxxxxxxxxxxxxxxxxxxxxx --pool test_pool map test.img
rbd: sysfs write failed
In some cases useful info is found in syslog - try "dmesg | tail" or so.
rbd: map failed: (5) Input/output error
# dmesg
[17261.217108] libceph: mon0 xxx.xxx.xxx.xxx:6789 missing required protocol features
[17271.229206] libceph: mon1 xxx.xxx.xxx.xxx:6789 feature set mismatch, my 102b84a842a42 < server's 40102b84a842a42, missing 400000000000000
[17271.231587] libceph: mon1 xxx.xxx.xxx.xxx:6789 missing required protocol features
[17281.246741] libceph: mon1 xxx.xxx.xxx.xxx:6789 feature set mismatch, my 102b84a842a42 < server's 40102b84a842a42, missing 400000000000000
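The "feature set mismatch" line tells you exactly which capability the client kernel lacks: the missing mask is the server's feature bits minus the client's. A minimal sketch of decoding it (the feature values are taken from the dmesg output above; the mapping of bit 58 to CRUSH_TUNABLES5 reflects Ceph's feature-bit table and is noted here as background, not produced by this code):

```python
# Decode the "missing" feature mask from the dmesg line above.
server_features = 0x40102b84a842a42   # "server's" value from dmesg
client_features = 0x102b84a842a42     # "my" value from dmesg

# Bits the server requires that the client does not advertise.
missing = server_features & ~client_features
print(hex(missing))            # matches the "missing 400000000000000" in dmesg

# Position of the single missing bit.
bit = missing.bit_length() - 1
print(bit)                     # bit 58
```

Bit 58 corresponds to CEPH_FEATURE_CRUSH_TUNABLES5 (the jewel-era CRUSH tunables), which older kernel clients do not support, so the fix below is to drop the cluster's tunables profile back to hammer.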
The dmesg output shows that the client kernel is missing a feature bit required by the cluster's CRUSH tunables. This can be resolved on the MGMT node by lowering the tunables profile:
[root@MGMT ceph]# ceph osd crush tunables hammer
adjusted tunables profile to hammer
Reference:
https://stackoverflow.com/questions/48026677/ceph-luminous-rbd-map-hangs-forever