CEPH: Adding an OSD
Published: 2019-06-21

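This post records adding a directory-backed OSD to an existing Ceph cluster with ceph-deploy 1.5.28 on CentOS 7.1: first prepare the data directory on the target host, then activate it, and finally watch the cluster pick up the new OSD with ceph -w. Both ceph-deploy steps take a host:path argument naming the node and data directory being added (ceph_monitor:/osd in this session):

ceph-deploy osd prepare {host}:{data-dir}
ceph-deploy osd activate {host}:{data-dir}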

[talen@ceph_admin ~]$ ceph-deploy osd prepare ceph_monitor:/osd
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/talen/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.28): /bin/ceph-deploy osd prepare ceph_monitor:/osd
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  disk                          : [('ceph_monitor', '/osd', None)]
[ceph_deploy.cli][INFO  ]  dmcrypt                       : False
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : prepare
[ceph_deploy.cli][INFO  ]  dmcrypt_key_dir               : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       :
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  fs_type                       : xfs
[ceph_deploy.cli][INFO  ]  func                          :
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  zap_disk                      : False
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks ceph_monitor:/osd:
[ceph_monitor][DEBUG ] connection detected need for sudo
[ceph_monitor][DEBUG ] connected to host: ceph_monitor
[ceph_monitor][DEBUG ] detect platform information from remote host
[ceph_monitor][DEBUG ] detect machine type
[ceph_monitor][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.1.1503 Core
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph_monitor
[ceph_monitor][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_monitor][INFO  ] Running command: sudo udevadm trigger --subsystem-match=block --action=add
[ceph_deploy.osd][DEBUG ] Preparing host ceph_monitor disk /osd journal None activate False
[ceph_monitor][INFO  ] Running command: sudo ceph-disk -v prepare --cluster ceph --fs-type xfs -- /osd
[ceph_monitor][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[ceph_monitor][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
[ceph_monitor][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
[ceph_monitor][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[ceph_monitor][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[ceph_monitor][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
[ceph_monitor][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_cryptsetup_parameters
[ceph_monitor][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_dmcrypt_key_size
[ceph_monitor][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_dmcrypt_type
[ceph_monitor][WARNIN] DEBUG:ceph-disk:Preparing osd data dir /osd
[ceph_monitor][INFO  ] checking OSD status...
[ceph_monitor][INFO  ] Running command: sudo ceph --cluster=ceph osd stat --format=json
[ceph_monitor][WARNIN] there are 5 OSDs down
[ceph_monitor][WARNIN] there are 5 OSDs out
[ceph_deploy.osd][DEBUG ] Host ceph_monitor is now ready for osd use.
[talen@ceph_admin ~]$
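At this point ceph-disk has only laid out the OSD data directory at /osd; the new OSD has no ID, key, or CRUSH entry yet, and the "5 OSDs down / 5 OSDs out" warnings refer to previously created OSDs that are not running, not to the directory just prepared. A quick sanity check before activating (not part of the original session) is to list the prepared directory on the target host, which should now contain marker files such as ceph_fsid, fsid and magic written by ceph-disk:

ls /osd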
[talen@ceph_admin ~]$ ceph-deploy osd activate ceph_monitor:/osd
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/talen/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.28): /bin/ceph-deploy osd activate ceph_monitor:/osd
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : activate
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       :
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          :
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  disk                          : [('ceph_monitor', '/osd', None)]
[ceph_deploy.osd][DEBUG ] Activating cluster ceph disks ceph_monitor:/osd:
[ceph_monitor][DEBUG ] connection detected need for sudo
[ceph_monitor][DEBUG ] connected to host: ceph_monitor
[ceph_monitor][DEBUG ] detect platform information from remote host
[ceph_monitor][DEBUG ] detect machine type
[ceph_monitor][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.1.1503 Core
[ceph_deploy.osd][DEBUG ] activating host ceph_monitor disk /osd
[ceph_deploy.osd][DEBUG ] will use init type: sysvinit
[ceph_monitor][INFO  ] Running command: sudo ceph-disk -v activate --mark-init sysvinit --mount /osd
[ceph_monitor][WARNIN] DEBUG:ceph-disk:Cluster uuid is 58514e13-d332-4a7e-9760-e3fccb9e2c76
[ceph_monitor][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[ceph_monitor][WARNIN] DEBUG:ceph-disk:Cluster name is ceph
[ceph_monitor][WARNIN] DEBUG:ceph-disk:OSD uuid is 86491f86-a204-4cee-acb5-9d7a7c26f784
[ceph_monitor][WARNIN] DEBUG:ceph-disk:Allocating OSD id...
[ceph_monitor][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd create --concise 86491f86-a204-4cee-acb5-9d7a7c26f784
[ceph_monitor][WARNIN] DEBUG:ceph-disk:OSD id is 7
[ceph_monitor][WARNIN] DEBUG:ceph-disk:Initializing OSD...
[ceph_monitor][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /osd/activate.monmap
[ceph_monitor][WARNIN] got monmap epoch 1
[ceph_monitor][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster ceph --mkfs --mkkey -i 7 --monmap /osd/activate.monmap --osd-data /osd --osd-journal /osd/journal --osd-uuid 86491f86-a204-4cee-acb5-9d7a7c26f784 --keyring /osd/keyring
[ceph_monitor][WARNIN] 2015-09-14 15:22:49.300887 7f3da9997880 -1 journal FileJournal::_open: disabling aio for non-block journal.  Use journal_force_aio to force use of aio anyway
[ceph_monitor][WARNIN] 2015-09-14 15:22:49.667066 7f3da9997880 -1 journal FileJournal::_open: disabling aio for non-block journal.  Use journal_force_aio to force use of aio anyway
[ceph_monitor][WARNIN] 2015-09-14 15:22:49.669354 7f3da9997880 -1 filestore(/osd) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory
[ceph_monitor][WARNIN] 2015-09-14 15:22:49.917080 7f3da9997880 -1 created object store /osd journal /osd/journal for osd.7 fsid f35a65ad-1a6a-4e8d-8f7e-cb5f113c0a02
[ceph_monitor][WARNIN] 2015-09-14 15:22:49.917200 7f3da9997880 -1 auth: error reading file: /osd/keyring: can't open /osd/keyring: (2) No such file or directory
[ceph_monitor][WARNIN] 2015-09-14 15:22:49.917469 7f3da9997880 -1 created new key in keyring /osd/keyring
[ceph_monitor][WARNIN] DEBUG:ceph-disk:Marking with init system sysvinit
[ceph_monitor][WARNIN] DEBUG:ceph-disk:Authorizing OSD key...
[ceph_monitor][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring auth add osd.7 -i /osd/keyring osd allow * mon allow profile osd
[ceph_monitor][WARNIN] added key for osd.7
[ceph_monitor][WARNIN] DEBUG:ceph-disk:ceph osd.7 data dir is ready at /osd
[ceph_monitor][WARNIN] DEBUG:ceph-disk:Creating symlink /var/lib/ceph/osd/ceph-7 -> /osd
[ceph_monitor][WARNIN] DEBUG:ceph-disk:Starting ceph osd.7...
[ceph_monitor][WARNIN] INFO:ceph-disk:Running command: /usr/sbin/service ceph --cluster ceph start osd.7
[ceph_monitor][DEBUG ] === osd.7 ===
[ceph_monitor][WARNIN] create-or-move updating item name 'osd.7' weight 0.01 at location {host=ceph_monitor,root=default} to crush map
[ceph_monitor][DEBUG ] Starting Ceph osd.7 on ceph_monitor...
[ceph_monitor][WARNIN] Running as unit run-12576.service.
[ceph_monitor][INFO  ] checking OSD status...
[ceph_monitor][INFO  ] Running command: sudo ceph --cluster=ceph osd stat --format=json
[ceph_monitor][WARNIN] there are 5 OSDs down
[ceph_monitor][WARNIN] there are 5 OSDs out
[ceph_monitor][INFO  ] Running command: sudo systemctl enable ceph
[ceph_monitor][WARNIN] ceph.service is not a native service, redirecting to /sbin/chkconfig.
[ceph_monitor][WARNIN] Executing /sbin/chkconfig ceph on
[ceph_monitor][WARNIN] The unit files have no [Install] section. They are not meant to be enabled
[ceph_monitor][WARNIN] using systemctl.
[ceph_monitor][WARNIN] Possible reasons for having this kind of units are:
[ceph_monitor][WARNIN] 1) A unit may be statically enabled by being symlinked from another unit's
[ceph_monitor][WARNIN]    .wants/ or .requires/ directory.
[ceph_monitor][WARNIN] 2) A unit's purpose may be to act as a helper for some other unit which has
[ceph_monitor][WARNIN]    a requirement dependency on it.
[ceph_monitor][WARNIN] 3) A unit may be started when needed via activation (socket, path, timer,
[ceph_monitor][WARNIN]    D-Bus, udev, scripted systemctl call, ...).
[talen@ceph_admin ~]$
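Activation allocated the id osd.7, registered its key, added it to the CRUSH map under host=ceph_monitor with weight 0.01, symlinked /var/lib/ceph/osd/ceph-7 to /osd, and started the daemon under sysvinit. The chkconfig/systemctl messages at the end come from the ceph service being a SysV init script rather than a native systemd unit on this host. To confirm the new OSD actually came up, watch the cluster with ceph -w as below, or run a one-shot check such as (not from the original session):

ceph osd tree
ceph -s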
[talen@ceph_admin ~]$ ceph -w
    cluster f35a65ad-1a6a-4e8d-8f7e-cb5f113c0a02
     health HEALTH_WARN
            64 pgs stale
            64 pgs stuck stale
     monmap e1: 1 mons at {ceph_monitor=10.0.2.33:6789/0}
            election epoch 1, quorum 0 ceph_monitor
     osdmap e47: 7 osds: 2 up, 2 in
      pgmap v1677: 64 pgs, 1 pools, 0 bytes data, 0 objects
            10305 MB used, 6056 MB / 16362 MB avail
                  64 stale+active+clean
2015-09-14 15:11:44.150240 mon.0 [INF] pgmap v1677: 64 pgs: 64 stale+active+clean; 0 bytes data, 10305 MB used, 6056 MB / 16362 MB avail
2015-09-14 15:22:48.442026 mon.0 [INF] from='client.? 10.0.2.33:0/1012278' entity='client.bootstrap-osd' cmd=[{"prefix": "osd create", "uuid": "86491f86-a204-4cee-acb5-9d7a7c26f784"}]: dispatch
2015-09-14 15:22:48.552414 mon.0 [INF] from='client.? 10.0.2.33:0/1012278' entity='client.bootstrap-osd' cmd='[{"prefix": "osd create", "uuid": "86491f86-a204-4cee-acb5-9d7a7c26f784"}]': finished
2015-09-14 15:22:48.665616 mon.0 [INF] osdmap e48: 8 osds: 2 up, 2 in
2015-09-14 15:22:48.685799 mon.0 [INF] pgmap v1678: 64 pgs: 64 stale+active+clean; 0 bytes data, 10305 MB used, 6056 MB / 16362 MB avail
2015-09-14 15:22:50.271971 mon.0 [INF] from='client.? 10.0.2.33:0/1012361' entity='client.bootstrap-osd' cmd=[{"prefix": "auth add", "entity": "osd.7", "caps": ["osd", "allow *", "mon", "allow profile osd"]}]: dispatch
2015-09-14 15:22:50.310775 mon.0 [INF] from='client.? 10.0.2.33:0/1012361' entity='client.bootstrap-osd' cmd='[{"prefix": "auth add", "entity": "osd.7", "caps": ["osd", "allow *", "mon", "allow profile osd"]}]': finished
2015-09-14 15:22:51.148684 mon.0 [INF] from='client.? 10.0.2.33:0/1012532' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "args": ["host=ceph_monitor", "root=default"], "id": 7, "weight": 0.01}]: dispatch
2015-09-14 15:22:51.895119 mon.0 [INF] from='client.? 10.0.2.33:0/1012532' entity='osd.7' cmd='[{"prefix": "osd crush create-or-move", "args": ["host=ceph_monitor", "root=default"], "id": 7, "weight": 0.01}]': finished
2015-09-14 15:22:51.936453 mon.0 [INF] osdmap e49: 8 osds: 2 up, 2 in
2015-09-14 15:22:52.015062 mon.0 [INF] pgmap v1679: 64 pgs: 64 stale+active+clean; 0 bytes data, 10305 MB used, 6056 MB / 16362 MB avail
2015-09-14 15:22:53.077570 mon.0 [INF] osd.7 10.0.2.33:6800/12581 boot
2015-09-14 15:22:53.178958 mon.0 [INF] osdmap e50: 8 osds: 3 up, 3 in
2015-09-14 15:22:53.197124 mon.0 [INF] pgmap v1680: 64 pgs: 64 stale+active+clean; 0 bytes data, 10305 MB used, 6056 MB / 16362 MB avail
2015-09-14 15:22:58.593083 mon.0 [INF] pgmap v1681: 64 pgs: 64 stale+active+clean; 0 bytes data, 15460 MB used, 9082 MB / 24543 MB avail
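The tail of the ceph -w output shows the new OSD joining: osd.7 boots at 15:22:53, the osdmap goes from 8 osds: 2 up, 2 in to 3 up, 3 in, and reported capacity grows from 16362 MB to 24543 MB as the space behind /osd is counted in. The HEALTH_WARN about 64 stale PGs is unrelated to the new OSD; PGs go stale when the OSDs that last served them stop reporting, which matches the OSDs still shown as down here. To see exactly which PGs and OSDs are involved (commands not from the original session):

ceph health detail
ceph osd tree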

Reposted from: http://ailja.baihongyu.com/
