YUMSERV
[Zabbix] Apache monitoring
MONITORING 2021. 3. 14. 15:03

1. Apache configuration
$ vi /etc/httpd/conf.modules.d/00-base.conf
LoadModule status_module modules/mod_status.so
$ cat /etc/httpd/conf.d/server-status.conf
# add the lines below
SetHandler server-status
Require ip xxx.xxx.xxx.xxx/32
Require host localhost  # uncomment to only allow requests from localhost
$ service httpd restart
Redirecting to /bin/systemctl restart httpd.service
$ netstat -nltp
Active Internet conne..
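The excerpt cuts off before showing the directives in context; a minimal sketch of what a complete server-status.conf typically looks like (the /server-status path, placeholder IP, and the curl check are assumptions, not taken from the original post):

```shell
# Hypothetical server-status.conf sketch; adjust the allowed IP to your
# Zabbix server before use.
cat > /etc/httpd/conf.d/server-status.conf <<'EOF'
<Location "/server-status">
    SetHandler server-status
    Require host localhost
    # Require ip xxx.xxx.xxx.xxx/32   # allow the monitoring host
</Location>
EOF
systemctl restart httpd
# Quick check that the status page responds (machine-readable form):
curl -s http://localhost/server-status?auto
```

Zabbix's Apache templates usually scrape the `?auto` variant of the status page, which is why access from the monitoring host must be permitted.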

[Zabbix] zabbix-agent 5.2 installation
MONITORING 2021. 3. 13. 20:37

1. Install the Zabbix-agent package
$ wget https://repo.zabbix.com/zabbix/5.2/rhel/7/x86_64/zabbix-release-5.2-1.el7.noarch.rpm
$ rpm -Uvh zabbix-release-5.2-1.el7.noarch.rpm
warning: zabbix-release-5.2-1.el7.noarch.rpm: Header V4 RSA/SHA512 Signature, key ID a14fe591: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:zabbix-release-5.2-1.el7         #############################..
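The teaser stops after the repo package; the usual next steps on RHEL/CentOS 7 are roughly the following (the server IP is a placeholder and the sed edit assumes the default config shipped by the package):

```shell
# Install the agent from the repo added above
yum install -y zabbix-agent

# Point the agent at the Zabbix server (placeholder IP - replace with yours)
sed -i 's/^Server=127.0.0.1/Server=192.0.2.10/' /etc/zabbix/zabbix_agentd.conf
sed -i 's/^ServerActive=127.0.0.1/ServerActive=192.0.2.10/' /etc/zabbix/zabbix_agentd.conf

systemctl enable --now zabbix-agent
# The agent listens on TCP 10050 by default; confirm with:
ss -nltp | grep 10050
```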

[Zabbix] Zabbix 5.2 installation (Ubuntu 18.04)
MONITORING 2021. 3. 13. 20:07

* Performed on Ubuntu 18.04
1. Install and configure the DB
$ apt-get install mariadb-server mariadb-client mariadb-common
$ systemctl start mariadb
$ systemctl enable mariadb
$ netstat -nltp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address    Foreign Address  State   PID/Program name
tcp   0      0      127.0.0.53:53    0.0.0.0:*        LISTEN  612/systemd-resolve
tcp   0      0      0.0.0.0:22       0.0.0.0:*        LISTEN  762/sshd
tcp   0      0      127..
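After MariaDB is running, a Zabbix install normally continues with creating the database and user; a sketch of that step (database name, user, and password here are conventional defaults and placeholders, not values from the post):

```shell
# Create the Zabbix database and user (placeholder password - change it)
mysql -uroot -p <<'EOF'
CREATE DATABASE zabbix CHARACTER SET utf8 COLLATE utf8_bin;
CREATE USER 'zabbix'@'localhost' IDENTIFIED BY 'zabbixpass';
GRANT ALL PRIVILEGES ON zabbix.* TO 'zabbix'@'localhost';
FLUSH PRIVILEGES;
EOF
```

The Zabbix server packages ship an initial schema dump that is then loaded into this database before starting zabbix-server.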

ceph 1 daemons have recently crashed
BlockStorage(Ceph) 2021. 2. 20. 21:01

$ ceph -s
  cluster:
    id:     7025ab16-5810-4382-9318-1bd4a704ef48
    health: HEALTH_WARN
            1 daemons have recently crashed
  services:
    mon: 2 daemons, quorum mgmt,mon (age 3m)
    mgr: mgmt(active, since 47h)
    mds: 1 up:standby
    osd: 9 osds: 9 up (since 5m), 9 in (since 47h)
  data:
    pools:   1 pools, 128 pgs
    objects: 4 objects, 35 B
    usage:   9.8 GiB used, 80 GiB / 90 GiB avail
    pgs:     128 active+clean
The Ceph crash status appears, and the cr..
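The excerpt is truncated before the resolution; the standard way to inspect and acknowledge this HEALTH_WARN uses the `ceph crash` commands available since Nautilus (the crash id below is elided in the post, so it stays a placeholder):

```shell
ceph crash ls                # list recent crash reports with their ids
ceph crash info <crash-id>   # inspect one report (id is a placeholder)
ceph crash archive <crash-id>   # acknowledge a single crash
ceph crash archive-all          # or acknowledge all, clearing the warning
```

Archiving only silences the warning; the crash report itself is kept and should still be reviewed for a root cause.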

ovs-vsctl cannot load glue library: libibverbs.so.1 error message
OpenStack 2021. 2. 11. 14:36

When checking the current state with ovs-vsctl, if the library cannot be found as shown below, install the package below.
[root@network network-scripts]# ovs-vsctl show
net_mlx5: cannot load glue library: libibverbs.so.1: cannot open shared object file: No such file or directory
net_mlx5: cannot initialize PMD due to missing run-time dependency on rdma-core libraries (libibverbs, libmlx5)
PMD: net_mlx4: cannot load glue library: libibverbs.so.1: canno..
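The teaser is cut off before naming the package; on CentOS 7 the library in the error, libibverbs.so.1, is provided by the libibverbs package (part of rdma-core), so the fix is presumably along these lines:

```shell
# Install the missing RDMA userspace library named in the error
yum install -y libibverbs
# Re-run the original command; the glue-library warnings should be gone
ovs-vsctl show
```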

CEPH OSD OS reinstallation
BlockStorage(Ceph) 2021. 1. 31. 18:51

* The situation was as follows: an I/O error on a ceph OSD's OS disk meant the disk had to be replaced. Because the ceph data lives on a separate data disk, the data itself was intact, but ceph had to be reinstalled. Had the RAID-side data been damaged, the OSD would have had to be removed and re-added; since the data was fine, we instead brought over the existing keyring and mounted it. The procedure follows.
* Ceph is the Nautilus release; the OS is CentOS 7.
* The replaced OSD is number 2.
1. Basic procedure
Before replacing the OS disk, configure Ceph not to redistribute PGs.
# ceph osd set nobackfill
# ceph osd se..
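The second `ceph osd set` command is truncated in the excerpt; a common flag combination for planned OSD-host maintenance looks like the following (which of these the post actually used beyond nobackfill is an assumption):

```shell
# Stop Ceph from rebalancing while the host is being reinstalled
ceph osd set nobackfill
ceph osd set norecover   # assumption: the truncated command
ceph osd set noout       # keep osd.2 from being marked out

# ...reinstall the OS, restore the osd.2 keyring under
# /var/lib/ceph/osd/ceph-2, mount the data disk, start the daemon...

# Then remove the flags in reverse
ceph osd unset noout
ceph osd unset norecover
ceph osd unset nobackfill
```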

[ERROR] ceph application not enabled on 1 pool(s)
BlockStorage(Ceph) 2021. 1. 30. 21:29

* Ceph VERSION: Nautilus
[Current state]
# ceph -s
  cluster:
    id:     3c17be42-37b3-4a82-9f31-45a872e42394
    health: HEALTH_WARN
            application not enabled on 1 pool(s)
  services:
    mon: 3 daemons, quorum MGMT,MON0,MON1 (age 15h)
    mgr: MGMT(active, since 5w)
    mds: 1 up:standby
    osd: 4 osds: 4 up (since 15h), 4 in (since 15h)
  data:
    pools:   1 pools, 128 pgs
    objects: 6 objects, 51 B
    usage:   4.1 GiB used, 15 TiB / 15 TiB avai..
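The excerpt ends before the fix; since Luminous, Ceph expects every pool to be tagged with the application using it, and the warning is cleared by enabling one (the pool name below is a placeholder, and `rbd` is only an example application tag):

```shell
# Find which pool is untagged
ceph osd pool ls detail

# Tag it with its application: rbd, cephfs, or rgw as appropriate
ceph osd pool application enable <pool-name> rbd
```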

[ERROR] RuntimeError: Unable to create a new OSD id
BlockStorage(Ceph) 2021. 1. 25. 08:25

# vi /var/log/ceph/ceph-osd.log
[2021-01-25 08:11:15,763][ceph_volume][ERROR ] exception caught by decorator
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/ceph_volume/decorators.py", line 59, in newfunc
    return f(*a, **kw)
  File "/usr/lib/python2.7/site-packages/ceph_volume/main.py", line 150, in main
    terminal.dispatch(self.mapper, subcommand_args)
  File "/usr/lib/python2...
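The traceback is truncated, so the root cause isn't visible in the teaser; a frequently reported cause of "Unable to create a new OSD id" is a leftover cephx auth entry from a previously removed OSD, and checking for that looks like this (the osd id is illustrative):

```shell
# Look for stale entries left behind by a removed OSD
ceph auth ls | grep '^osd\.'
ceph osd tree

# If an old entry blocks id reuse, remove it (osd.2 is an example id)
ceph auth del osd.2
ceph osd rm osd.2

# Then retry the ceph-volume command that failed
```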

[ERROR] missing required protocol features
BlockStorage(Ceph) 2021. 1. 15. 15:05

When a client accesses the block storage, the following error message appears.
[root@localhost ~]# rbd --mon_host xxx.xxx.xxx.xxx,xxx.xxx.xxx.xxx,xxx.xxx.xxx.xxx --conf /dev/null --keyring /dev/null --name client.client.keyring --key xxxxxxxxxxxxxxxxxxxxxxx --pool test_pool map test.img
rbd: sysfs write failed
In some cases useful info is found in syslog - try "dmesg | tail" or so.
rbd: map failed: (5) Input/output error
# dmesg
[..
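The dmesg output and the resolution are cut off; "missing required protocol features" typically means the client's kernel rbd driver is older than the cluster's feature set, and the two common workarounds are sketched below (which one the post used is not visible in the excerpt):

```shell
# Option 1: disable image features the old kernel client can't handle
rbd feature disable test_pool/test.img object-map fast-diff deep-flatten

# Option 2: lower the cluster's CRUSH tunables profile for old kernels
# (cluster-wide change - affects all clients, may trigger data movement)
ceph osd crush tunables hammer

# Retry the map, then confirm no feature errors remain:
dmesg | tail
```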

/var/run/crond.pid, otherpid may be 26272: Resource temporarily unavailable
LINUX/ERROR 2020. 12. 10. 15:30

When crond fails to start with the error message below:
Dec 3 09:44:22 localhost crond: crond: can't lock /var/run/crond.pid, otherpid may be 26272: Resource temporarily unavailable
Dec 3 09:44:22 localhost systemd: crond.service: main process exited, code=exited, status=1/FAILURE
Dec 3 09:44:22 localhost systemd: Unit crond.service entered failed state.
Dec 3 09:44:22 localhost systemd: crond.service failed.
[root@localhost ~]# system..
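The command in the excerpt is cut off; the error means another process (pid 26272 here) holds the lock on crond.pid, so the usual recovery is to check and clear that holder before restarting (whether the post does exactly this is an assumption):

```shell
# See what is holding the pid-file lock
ps -p 26272 -o pid,comm,etime

# If it is a stuck crond, stop it and clear the stale pid file
kill 26272
rm -f /var/run/crond.pid

systemctl restart crond
systemctl status crond
```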