YUMSERV
Published 2020. 5. 31. 10:43
CEPH Installation

This post shows how to install Ceph using ceph-deploy, after the prerequisite work has been completed.

ceph-deploy is an administration tool used when installing and operating Ceph. It can push configuration changes to all nodes at once and makes tasks such as deployment convenient.


[Related post: pre-installation preparation]


* All commands are run on the mgmt server.

* All nodes run CentOS 7.


1. Installing ceph-deploy

[root@mgmt ~]# yum install ceph-deploy -y

ceph-deploy is a tool for deploying a Ceph cluster quickly and easily. Install it on the admin (mgmt) node, then use it to install Ceph across the cluster.



2. Configuring the MON servers

You can run anywhere from one MON to several. Here, the mgmt server acts as the first monitor (mon.0) and the mon-0 server as the second (mon.1).


Create the initial MON configuration with:

# ceph-deploy new [MON] [MON]

[root@mgmt ~]# ceph-deploy new mgmt mon-0


Install the ceph packages on the remote hosts.
# ceph-deploy install [HOST] [HOST]
[root@mgmt ~]# ceph-deploy install --repo-url 'https://download.ceph.com/rpm-jewel/el7' --gpg-url 'https://download.ceph.com/keys/release.asc' mgmt mon-0 osd-0 osd-1

...

[osd-1][DEBUG ] ceph version 10.2.11 (e4b061b47f07f583c92a050d9e84b1813a35671e)


Running the install generates a ceph.conf file. The freshly generated file does not contain the options shown below; I added them myself. Whenever you need to change settings, edit ceph.conf.

[root@mgmt ~]# cat ceph.conf

[global]

fsid = 2a289202-e471-47a5-b672-929818b2bf6b

mon_initial_members = mgmt, mon-0

mon_host = 10.1.0.5,10.1.0.6

auth_cluster_required = cephx

auth_service_required = cephx

auth_client_required = cephx

filestore_xattr_use_omap = true


osd_pool_default_size = 2

rbd_default_features = 1

osd_journal_size = 10240

osd_pool_default_pg_num = 128

osd_pool_default_pgp_num = 128


[mon]

mon_host = mgmt, mon-0

mon_addr = 10.1.0.5,10.1.0.6

mon_clock_drift_allowed = .3

mon_clock_drift_warn_backoff = 30    # Tell the monitor to backoff from this warning for 30 seconds

mon_osd_full_ratio = .90

mon_osd_nearfull_ratio = .85

mon_osd_report_timeout = 300

debug_ms = 1

debug_mon = 20

debug_paxos = 20

debug_auth = 20


[mon.0]

host = mgmt

mon_addr = 10.1.0.5:6789


[mon.1]

host = mon-0

mon_addr = 10.1.0.6:6789


[mds]

mds_standby_replay = true

mds_cache_size = 250000

debug_ms = 1

debug_mds = 20

debug_journaler = 20


[mds.0]

host = mgmt


[osd.0]

host = osd-0


[osd.1]

host = osd-1
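The osd_pool_default_pg_num = 128 above follows the common rule of thumb pg_num ≈ (number of OSDs × 100) / replica count, rounded up to the next power of two. A minimal sketch of that calculation (a hypothetical helper, not part of Ceph):

```python
# Rule-of-thumb placement-group count: (num_osds * 100) / pool_size,
# rounded up to the next power of two.
def suggested_pg_num(num_osds: int, pool_size: int) -> int:
    target = (num_osds * 100) // pool_size  # raw target PG count
    pg = 1
    while pg < target:  # round up to a power of two
        pg *= 2
    return pg

# Two OSDs with osd_pool_default_size = 2, as in this post:
print(suggested_pg_num(2, 2))  # 128
```

With 2 OSDs and a replica count of 2 this gives 128, which is where the pg_num/pgp_num values in the config come from.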


After editing ceph.conf as above, bootstrap the initial monitors.

# ceph-deploy mon create-initial

[root@mgmt ~]# ceph-deploy --overwrite-conf mon create-initial 

[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.39): /usr/bin/ceph-deploy --overwrite-conf mon create-initial
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : True
[ceph_deploy.cli][INFO  ]  subcommand                    : create-initial
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x266a320>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function mon at 0x26605f0>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  keyrings                      : None
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts mgmt mon-0

Deploy a MON to each specified host.
# ceph-deploy mon create [HOST] [HOST]
[root@mgmt ~]# ceph-deploy mon create mgmt mon-0
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.39): /usr/bin/ceph-deploy mon create mgmt mon-0
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x23c4320>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  mon                           : ['mgmt', 'mon-0']
[ceph_deploy.cli][INFO  ]  func                          : <function mon at 0x23bb5f0>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  keyrings                      : None
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts mgmt mon-0
[ceph_deploy.mon][DEBUG ] detecting platform for host mgmt ...

Gather the authentication keyrings from the hosts where MONs were deployed.
# ceph-deploy gatherkeys [HOST] [HOST]
[root@mgmt ~]# ceph-deploy gatherkeys mgmt mon-0
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.39): /usr/bin/ceph-deploy gatherkeys mgmt mon-0
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x1cf6c20>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  mon                           : ['mgmt', 'mon-0']
[ceph_deploy.cli][INFO  ]  func                          : <function gatherkeys at 0x1c950c8>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
...
...
[ceph_deploy.gatherkeys][INFO  ] keyring 'ceph.client.admin.keyring' already exists
[ceph_deploy.gatherkeys][INFO  ] keyring 'ceph.bootstrap-mds.keyring' already exists
[ceph_deploy.gatherkeys][INFO  ] keyring 'ceph.bootstrap-mgr.keyring' already exists
[ceph_deploy.gatherkeys][INFO  ] keyring 'ceph.mon.keyring' already exists
[ceph_deploy.gatherkeys][INFO  ] keyring 'ceph.bootstrap-osd.keyring' already exists
[ceph_deploy.gatherkeys][INFO  ] keyring 'ceph.bootstrap-rgw.keyring' already exists
[ceph_deploy.gatherkeys][INFO  ] Destroy temp directory /tmp/tmpMmrl74


After gathering the keys, checking the Ceph health shows an error.
The cluster reports HEALTH_ERR because no OSDs have been installed yet.
[root@mgmt ~]# ceph -s
    cluster 2a289202-e471-47a5-b672-929818b2bf6b
     health HEALTH_ERR
            64 pgs are stuck inactive for more than 300 seconds
            64 pgs stuck inactive
            64 pgs stuck unclean
            no osds
     monmap e1: 2 mons at {mgmt=10.1.0.5:6789/0,mon-0=10.1.0.6:6789/0}
            election epoch 4, quorum 0,1 mgmt,mon-0
     osdmap e1: 0 osds: 0 up, 0 in
            flags sortbitwise,require_jewel_osds
      pgmap v2: 64 pgs, 1 pools, 0 bytes data, 0 objects
            0 kB used, 0 kB / 0 kB avail
                  64 creating
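If you want to script this health check, the HEALTH_* token can be pulled out of the plain `ceph -s` output. A minimal sketch (on newer Ceph releases, `ceph -s --format json` would be more robust):

```python
def cluster_health(status_text: str) -> str:
    """Extract the HEALTH_* token from plain `ceph -s` output."""
    for line in status_text.splitlines():
        line = line.strip()
        if line.startswith("health"):
            return line.split()[1]  # e.g. HEALTH_ERR or HEALTH_OK
    raise ValueError("no health line found in ceph -s output")

# In practice: status_text = subprocess.check_output(["ceph", "-s"], text=True)
sample = """    cluster 2a289202-e471-47a5-b672-929818b2bf6b
     health HEALTH_ERR
            no osds"""
print(cluster_health(sample))  # HEALTH_ERR
```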


3. Installing the OSDs

To use the additional attached disks with Ceph, wipe their partitions and partition tables.
The command below destroys all data on the disks.
# ceph-deploy disk zap [OSD HOST]:[disk]
[root@mgmt ~]# ceph-deploy disk zap osd-0:vdb osd-1:vdb
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.39): /usr/bin/ceph-deploy disk zap osd-0:vdb osd-1:vdb
...
[osd-1][DEBUG ] The operation has completed successfully.
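Since disk zap is destructive, it can be worth guarding against typos in the target list before running it. The following is a hypothetical pre-flight check (not part of ceph-deploy; the protected-disk names are assumptions for this environment):

```python
# Hypothetical sanity check: refuse HOST:DISK targets whose disk is on a
# protected list (e.g. the OS disk) before passing them to `disk zap`.
PROTECTED_DISKS = {"vda", "sda"}  # assumed OS disks in this environment

def check_zap_targets(targets):
    """targets: list like ['osd-0:vdb', 'osd-1:vdb']; raises on a protected disk."""
    for target in targets:
        host, _, disk = target.partition(":")
        if disk in PROTECTED_DISKS:
            raise ValueError(f"refusing to zap {disk} on {host}: protected disk")
    return targets

print(check_zap_targets(["osd-0:vdb", "osd-1:vdb"]))  # passes the check
```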

Prepare the disks for use as OSDs; this partitions each disk and creates the data filesystem and journal.
# ceph-deploy osd prepare [OSD HOST]:[disk]
[root@mgmt ~]# ceph-deploy osd prepare osd-0:vdb osd-1:vdb
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.39): /usr/bin/ceph-deploy osd prepare osd-0:vdb osd-1:vdb
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  block_db                      : None
[ceph_deploy.cli][INFO  ]  disk                          : [('osd-0', '/dev/vdb', None), ('osd-1', '/dev/vdb', None)]
[ceph_deploy.cli][INFO  ]  dmcrypt                       : False
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  bluestore                     : None
[ceph_deploy.cli][INFO  ]  block_wal                     : None
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : prepare
[ceph_deploy.cli][INFO  ]  dmcrypt_key_dir               : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x1e1c560>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  fs_type                       : xfs
[ceph_deploy.cli][INFO  ]  filestore                     : None
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0x1e0de60>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  zap_disk                      : False
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks osd-0:/dev/vdb: osd-1:/dev/vdb:
...
[ceph_deploy.osd][DEBUG ] Host osd-1 is now ready for osd use.

Activate the prepared OSDs. Note that activation targets the data partition (vdb1) created by the prepare step.
# ceph-deploy osd activate [OSD HOST]:[disk]
[root@mgmt ~]# ceph-deploy osd activate osd-0:vdb1 osd-1:vdb1
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.39): /usr/bin/ceph-deploy osd activate osd-0:vdb1 osd-1:vdb1
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : activate
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x25ba560>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0x25abe60>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  disk                          : [('osd-0', '/dev/vdb1', None), ('osd-1', '/dev/vdb1', None)]
...
[osd-1][INFO  ] checking OSD status...
[osd-1][DEBUG ] find the location of an executable
[osd-1][INFO  ] Running command: /bin/ceph --cluster=ceph osd stat --format=json
[osd-1][INFO  ] Running command: systemctl enable ceph.target


4. Installing the MDS

Install the MDS server via ceph-deploy.
# ceph-deploy mds create [HOST]
[root@mgmt ~]# ceph-deploy mds create mgmt
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.39): /usr/bin/ceph-deploy mds create mgmt
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x2065758>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function mds at 0x20095f0>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  mds                           : [('mgmt', 'mgmt')]
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.mds][DEBUG ] Deploying mds, cluster ceph hosts mgmt:mgmt
[mgmt][DEBUG ] connected to host: mgmt 
[mgmt][DEBUG ] detect platform information from remote host
[mgmt][DEBUG ] detect machine type
[ceph_deploy.mds][INFO  ] Distro info: CentOS Linux 7.4.1708 Core
[ceph_deploy.mds][DEBUG ] remote host will use systemd
[ceph_deploy.mds][DEBUG ] deploying mds bootstrap to mgmt
[mgmt][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[mgmt][DEBUG ] create path if it doesn't exist
[mgmt][INFO  ] Running command: ceph --cluster ceph --name client.bootstrap-mds --keyring /var/lib/ceph/bootstrap-mds/ceph.keyring auth get-or-create mds.mgmt osd allow rwx mds allow mon allow profile mds -o /var/lib/ceph/mds/ceph-mgmt/keyring
[mgmt][INFO  ] Running command: systemctl enable ceph-mds@mgmt
[mgmt][WARNIN] Created symlink from /etc/systemd/system/ceph-mds.target.wants/ceph-mds@mgmt.service to /usr/lib/systemd/system/ceph-mds@.service.
[mgmt][INFO  ] Running command: systemctl start ceph-mds@mgmt
[mgmt][INFO  ] Running command: systemctl enable ceph.target


5. Configuring the admin hosts

Push the admin keyring and configuration to each host with ceph-deploy admin; afterwards, the ceph command can be run on each of those hosts.
# ceph-deploy admin [HOST]
[root@mgmt ~]# ceph-deploy admin mgmt mon-0 osd-0 osd-1
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.39): /usr/bin/ceph-deploy admin mgmt mon-0 osd-0 osd-1
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x1df4f80>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  client                        : ['mgmt', 'mon-0', 'osd-0', 'osd-1']
[ceph_deploy.cli][INFO  ]  func                          : <function admin at 0x1d4e8c0>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to mgmt
[mgmt][DEBUG ] connected to host: mgmt 
[mgmt][DEBUG ] detect platform information from remote host
[mgmt][DEBUG ] detect machine type
[mgmt][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to mon-0
[mon-0][DEBUG ] connected to host: mon-0 
[mon-0][DEBUG ] detect platform information from remote host
[mon-0][DEBUG ] detect machine type
[mon-0][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to osd-0
[osd-0][DEBUG ] connected to host: osd-0 
[osd-0][DEBUG ] detect platform information from remote host
[osd-0][DEBUG ] detect machine type
[osd-0][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to osd-1
[osd-1][DEBUG ] connected to host: osd-1 
[osd-1][DEBUG ] detect platform information from remote host
[osd-1][DEBUG ] detect machine type
[osd-1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf


6. Installation complete
After installation, checking the Ceph health again shows HEALTH_OK.
[root@mgmt ~]# ceph -s
    cluster 2a289202-e471-47a5-b672-929818b2bf6b
     health HEALTH_OK
     monmap e1: 2 mons at {mgmt=10.1.0.5:6789/0,mon-0=10.1.0.6:6789/0}
            election epoch 4, quorum 0,1 mgmt,mon-0
     osdmap e11: 2 osds: 2 up, 2 in
            flags sortbitwise,require_jewel_osds
      pgmap v19: 64 pgs, 1 pools, 0 bytes data, 0 objects
            215 MB used, 40722 MB / 40937 MB avail
                  64 active+clean

