YUMSERV
[Ceph] Ceph - ansible installation
BlockStorage(Ceph) 2021. 7. 4. 10:12

* Environment: Ubuntu 20.04 * 1 deploy server * 3 Ceph servers * CEPH VERSION: Octopus 0. Initial setup (ssh-keygen) $ ssh-keygen $ ssh-copy-id deploy $ ssh-copy-id ceph-1 $ ssh-copy-id ceph-2 $ ssh-copy-id ceph-3 1. Install the Ansible packages and download ceph-ansible $ apt update;apt -y install python3-pip $ apt-get install git $ apt-get install ansible $ apt-get install sshpass $ git clone https://github.com/ceph/ceph-ansible..
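The preview cuts off at the git clone; a rough sketch of the usual ceph-ansible flow from that point (the stable-5.0 branch for Octopus and the hosts inventory name are assumptions, not taken from the article) would be:
$ git clone https://github.com/ceph/ceph-ansible.git
$ cd ceph-ansible
$ git checkout stable-5.0            # branch matching Octopus (assumed)
$ pip3 install -r requirements.txt   # Python dependencies for the playbooks
$ cp site.yml.sample site.yml        # main playbook template
$ ansible-playbook -i hosts site.yml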

python-rados integration
BlockStorage(Ceph) 2021. 4. 16. 08:22

Connecting to Ceph from Python 1. Install the package $ yum install python-rados 2. Verify the Python-Ceph connection # python Python 2.7.5 (default, Jun 20 2019, 20:27:34) [GCC 4.8.5 20150623 (Red Hat 4.8.5-36)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import rados, sys >>> cluster = rados.Rados(conffile='/etc/ceph/ceph.conf') cluster Check the librados version >>> print "\nlibrados version: " + str(clus..
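The interpreter session above is truncated; a compact way to run the same check non-interactively (shown here as a python3 one-liner invoked from the shell; on python3-based releases the matching package would be python3-rados) might be:
$ python3 -c "import rados; c = rados.Rados(conffile='/etc/ceph/ceph.conf'); c.connect(); print('librados version:', c.version()); c.shutdown()"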

ceph 1 daemons have recently crashed
BlockStorage(Ceph) 2021. 2. 20. 21:01

$ ceph -s cluster: id: 7025ab16-5810-4382-9318-1bd4a704ef48 health: HEALTH_WARN 1 daemons have recently crashed services: mon: 2 daemons, quorum mgmt,mon (age 3m) mgr: mgmt(active, since 47h) mds: 1 up:standby osd: 9 osds: 9 up (since 5m), 9 in (since 47h) data: pools: 1 pools, 128 pgs objects: 4 objects, 35 B usage: 9.8 GiB used, 80 GiB / 90 GiB avail pgs: 128 active+clean The Ceph crash warning appears, and cr..
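The preview does not reach the resolution; the usual way to review and clear this warning is ceph's crash subcommands (standard commands; the crash ID below is a placeholder):
# ceph crash ls
# ceph crash info <crash-id>
# ceph crash archive <crash-id>      # or archive everything: ceph crash archive-all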

CEPH OSD OS reinstallation
BlockStorage(Ceph) 2021. 1. 31. 18:51

* The situation was as follows. An I/O error on the OS disk of a Ceph OSD node meant the disk had to be replaced; the Ceph data sits on a separate data disk, so the data itself was intact, but Ceph had to be reinstalled. If the data on the RAID had been damaged, the OSD would have had to be removed and re-added, but since the data was fine, the existing keyring was reused and the OSD was simply remounted. The procedure is below. * Ceph is the Nautilus release and the OS is CentOS 7. * The replaced OSD is osd.2. 1. Basic procedure Before replacing the OS disk, set flags so that Ceph does not redistribute PGs. # ceph osd set nobackfill # ceph osd se..
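The flag list is cut off; a typical maintenance flag set and its removal once the OSD is back online would look like this (a sketch of the commonly used flags, the article's exact list is truncated):
# ceph osd set nobackfill
# ceph osd set norecover
# ceph osd set noout
... reinstall the OS and bring the OSD back up ...
# ceph osd unset nobackfill
# ceph osd unset norecover
# ceph osd unset noout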

[ERROR] ceph application not enabled on 1 pool(s)
BlockStorage(Ceph) 2021. 1. 30. 21:29

* Ceph VERSION: Nautilus [Current state] # ceph -s cluster: id: 3c17be42-37b3-4a82-9f31-45a872e42394 health: HEALTH_WARN application not enabled on 1 pool(s) services: mon: 3 daemons, quorum MGMT,MON0,MON1 (age 15h) mgr: MGMT(active, since 5w) mds: 1 up:standby osd: 4 osds: 4 up (since 15h), 4 in (since 15h) data: pools: 1 pools, 128 pgs objects: 6 objects, 51 B usage: 4.1 GiB used, 15 TiB / 15 TiB avai..
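The preview stops before the fix; the standard way to clear this warning is to tag the pool with the application that uses it (pool name and application type below are placeholders):
# ceph osd pool application enable <pool-name> rbd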

[ERROR] RuntimeError: Unable to create a new OSD id
BlockStorage(Ceph) 2021. 1. 25. 08:25

# vi /var/log/ceph/ceph-osd.log [2021-01-25 08:11:15,763][ceph_volume][ERROR ] exception caught by decorator Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/ceph_volume/decorators.py", line 59, in newfunc return f(*a, **kw) File "/usr/lib/python2.7/site-packages/ceph_volume/main.py", line 150, in main terminal.dispatch(self.mapper, subcommand_args) File "/usr/lib/python2...
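The traceback alone does not show the cause; one frequently reported trigger of "Unable to create a new OSD id" is a missing or mismatched bootstrap-osd keyring on the OSD host, which can be compared against the cluster's copy (a common check, not necessarily the cause this article describes):
# ceph auth get client.bootstrap-osd
# cat /var/lib/ceph/bootstrap-osd/ceph.keyring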

[ERROR] missing required protocol features
BlockStorage(Ceph) 2021. 1. 15. 15:05

When a client tries to map the block device, the following error appears. [root@localhost ~]# rbd --mon_host xxx.xxx.xxx.xxx,xxx.xxx.xxx.xxx,xxx.xxx.xxx.xxx --conf /dev/null --keyring /dev/null --name client.client.keyring --key xxxxxxxxxxxxxxxxxxxxxxx --pool test_pool map test.img rbd: sysfs write failed In some cases useful info is found in syslog - try "dmesg | tail" or so. rbd: map failed: (5) Input/output error # dmesg [..
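The dmesg output is truncated; this message usually means the kernel RBD client lacks a feature the cluster requires. Two common workarounds (hedged here, since the article's actual resolution is cut off) are relaxing the CRUSH tunables or disabling the image features the kernel cannot handle, using the pool/image from the command above:
# ceph osd crush tunables hammer
# rbd feature disable test_pool/test.img object-map fast-diff deep-flatten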

Ceph Nautilus installation
BlockStorage(Ceph) 2020. 11. 13. 14:19

* It is recommended to finish the preliminary work first, as described in [CLOUD/BlockStorage(Ceph)] - Pre-installation work before installing CEPH. * The layout is as follows: four CentOS 7 servers. Node 1: runs the mgmt, mon0, mgr, and mds daemons Node 2: mon1 Node 3: osd0 (one extra disk) Node 4: osd1 (one extra disk) 1. Install the nautilus packages # echo '[ceph] name=Ceph packages for $basearch baseurl=http://download.ceph.com/rpm-nautilus/el7/$basearch enabled=1 priority=2 gpgcheck=1 type=rpm-md gpgkey=https://downl..
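The repo definition is cut off mid-line; the standard upstream Nautilus repository file it appears to be writing looks like the following (the gpgkey URL is completed from the official download.ceph.com layout, not from the preview):
# cat /etc/yum.repos.d/ceph.repo
[ceph]
name=Ceph packages for $basearch
baseurl=http://download.ceph.com/rpm-nautilus/el7/$basearch
enabled=1
priority=2
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc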

Module 'restful' has failed dependency: No module named 'pecan'
BlockStorage(Ceph) 2020. 11. 1. 21:14

[root@mgmt ~]# ceph -s cluster: id: 173b6a80-8172-4830-a616-0ef2959500ed health: HEALTH_WARN Module 'restful' has failed dependency: No module named 'pecan' Slow OSD heartbeats on front (longest 1165.052ms) 4 slow ops, oldest one blocked for 1195 sec, mon.mgmt has slow ops services: mon: 2 daemons, quorum mgmt,mon0 (age 45m) mgr: mgmt(active, since 40m) mds: 1 up:standby osd: 4 osds: 4 up (since..
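The preview stops at the status output; the missing dependency is a Python module needed by the restful mgr plugin, and the commonly cited fix is installing it and restarting the manager (the package names and the mgr unit name for this host are the usual ones, not confirmed by the preview):
# pip3 install pecan werkzeug
# systemctl restart ceph-mgr@mgmt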

[ceph-deploy error] RuntimeError: command returned non-zero exit status: 1
BlockStorage(Ceph) 2020. 11. 1. 20:07

[root@mgmt ~]# ceph-deploy osd create --data /dev/vdb osd0 ... [osd0][ERROR ] RuntimeError: command returned non-zero exit status: 1 [ceph_deploy.osd][ERROR ] Failed to execute command: /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/vdb [ceph_deploy][ERROR ] GenericError: Failed to create 1 OSDs When this error occurs, it is caused by existing data left on the disk; wipe the disk and then try again. [Solution] [root@osd0 ..
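The solution commands are truncated; wiping the old data before re-running ceph-deploy is typically done with ceph-volume's zap (a sketch assuming /dev/vdb as in the error above, run on the OSD node and then retried from the admin node):
[root@osd0 ~]# ceph-volume lvm zap /dev/vdb --destroy
[root@mgmt ~]# ceph-deploy osd create --data /dev/vdb osd0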