Manually Deploying a Ceph Cluster and Adding OSDs

Posted 2022-7-19 11:20
I. Preparation
The previous post deployed only a single-node cluster. In this post we manually deploy a multi-node cluster named mycluster. We have three machines, node1, node2 and node3; node1 can ssh/scp to the other two without a password. All of the work below is done on node1.

Preparation consists of installing the ceph rpm packages on every machine (see section 1 of the previous post) and modifying the following files on each machine:
/usr/lib/systemd/system/ceph-mon@.service
/usr/lib/systemd/system/ceph-osd@.service
/usr/lib/systemd/system/ceph-mds@.service
/usr/lib/systemd/system/ceph-mgr@.service
/usr/lib/systemd/system/ceph-radosgw@.service
In each file, make two changes:

Environment=CLUSTER=ceph                                          <--- change to CLUSTER=mycluster
ExecStart=/usr/bin/... --id %i --setuser ceph --setgroup ceph     <--- drop --setuser ceph --setgroup ceph
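These edits can be scripted instead of made by hand. A minimal sketch, assuming the passwordless ssh from node1 described above and that all five unit files exist on every node:

for host in node1 node2 node3; do
  ssh $host 'for f in /usr/lib/systemd/system/ceph-{mon,osd,mds,mgr,radosgw}@.service; do
    sed -i -e "s/CLUSTER=ceph/CLUSTER=mycluster/" \
           -e "s/ --setuser ceph --setgroup ceph//" "$f"
  done && systemctl daemon-reload'
done

The systemctl daemon-reload at the end makes systemd pick up the edited unit files on each node.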
II. Create a working directory

Create a working directory on node1; all subsequent work is done in this directory on node1:
mkdir /tmp/mk-ceph-cluster
cd /tmp/mk-ceph-cluster

III. Create the configuration file

vim mycluster.conf

[global]
cluster = mycluster
fsid = 116d4de8-fd14-491f-811f-c1bdd8fac141
public network = 192.168.100.0/24
cluster network = 192.168.73.0/24
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
osd pool default size = 3
osd pool default min size = 2
osd pool default pg num = 128
osd pool default pgp num = 128
osd pool default crush rule = 0
osd crush chooseleaf type = 1
admin socket = /var/run/ceph/$cluster-$name.asock
pid file = /var/run/ceph/$cluster-$name.pid
log file = /var/log/ceph/$cluster-$name.log
log to syslog = false
max open files = 131072
ms bind ipv6 = false

[mon]
mon initial members = node1,node2,node3
mon host = 192.168.100.131:6789,192.168.100.132:6789,192.168.100.133:6789
;Yuanguo: the default value of {mon data} is /var/lib/ceph/mon/$cluster-$id,
;         we overwrite it.
mon data = /var/lib/ceph/mon/$cluster-$name
mon clock drift allowed = 10
mon clock drift warn backoff = 30
mon osd full ratio = .95
mon osd nearfull ratio = .85
mon osd down out interval = 600
mon osd report timeout = 300
debug ms = 20
debug mon = 20
debug paxos = 20
debug auth = 20

[mon.node1]
host = node1
mon addr = 192.168.100.131:6789

[mon.node2]
host = node2
mon addr = 192.168.100.132:6789

[mon.node3]
host = node3
mon addr = 192.168.100.133:6789

[mgr]
;Yuanguo: the default value of {mgr data} is /var/lib/ceph/mgr/$cluster-$id,
;         we overwrite it.
mgr data = /var/lib/ceph/mgr/$cluster-$name

[osd]
;Yuanguo: we wish to overwrite {osd data}, but it seems that 'ceph-disk' forces
;         to use the default value, so keep the default now; maybe in later versions
;         of ceph the limitation will be eliminated.
osd data = /var/lib/ceph/osd/$cluster-$id
osd recovery max active = 3
osd max backfills = 5
osd max scrubs = 2
osd mkfs type = xfs
osd mkfs options xfs = -f -i size=1024
osd mount options xfs = rw,noatime,inode64,logbsize=256k,delaylog
filestore max sync interval = 5
osd op threads = 2
debug ms = 100
debug osd = 100
Note that this configuration file overrides several defaults, such as {mon data} and {mgr data}, but not {osd data}, because ceph-disk appears to force the default value. Also, the pid and admin-socket files are placed in /var/run/ceph/ and named $cluster-$name, and the log files in /var/log/ceph/, likewise named $cluster-$name. All of these can be overridden.
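As an illustration of how the metavariables expand, take the monitor on node1, where $cluster=mycluster, $name=mon.node1 and $id=node1:

admin socket = /var/run/ceph/mycluster-mon.node1.asock
log file     = /var/log/ceph/mycluster-mon.node1.log
mon data     = /var/lib/ceph/mon/mycluster-mon.node1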
IV. Generate keyrings
As mentioned in the single-node deployment post, there are two ways to manage cluster users and their permissions. Here we use the first: generate the keyring files up front, then bring them in when creating the cluster so they take effect.
ceph-authtool --create-keyring mycluster.keyring --gen-key -n mon. --cap mon 'allow *'
ceph-authtool --create-keyring mycluster.client.admin.keyring --gen-key -n client.admin --set-uid=0 --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *'
ceph-authtool --create-keyring mycluster.client.bootstrap-osd.keyring --gen-key -n client.bootstrap-osd --cap mon 'allow profile bootstrap-osd'
ceph-authtool --create-keyring mycluster.mgr.node1.keyring --gen-key -n mgr.node1 --cap mon 'allow profile mgr' --cap osd 'allow *' --cap mds 'allow *'
ceph-authtool --create-keyring mycluster.mgr.node2.keyring --gen-key -n mgr.node2 --cap mon 'allow profile mgr' --cap osd 'allow *' --cap mds 'allow *'
ceph-authtool --create-keyring mycluster.mgr.node3.keyring --gen-key -n mgr.node3 --cap mon 'allow profile mgr' --cap osd 'allow *' --cap mds 'allow *'
ceph-authtool mycluster.keyring --import-keyring mycluster.client.admin.keyring
ceph-authtool mycluster.keyring --import-keyring mycluster.client.bootstrap-osd.keyring
ceph-authtool mycluster.keyring --import-keyring mycluster.mgr.node1.keyring
ceph-authtool mycluster.keyring --import-keyring mycluster.mgr.node2.keyring
ceph-authtool mycluster.keyring --import-keyring mycluster.mgr.node3.keyring
cat mycluster.keyring
[mon.]
        key = AQA525NZsY73ERAAIM1J6wSxglBNma3XAdEcVg==
        caps mon = "allow *"
[client.admin]
        key = AQBJ25NZznIpEBAAlCdCy+OyUIvxtNq+1DSLqg==
        auid = 0
        caps mds = "allow *"
        caps mgr = "allow *"
        caps mon = "allow *"
        caps osd = "allow *"
[client.bootstrap-osd]
        key = AQBW25NZtl/RBxAACGWafYy1gPWEmx9geCLi6w==
        caps mon = "allow profile bootstrap-osd"
[mgr.node1]
        key = AQBb25NZ1mIeFhAA/PmRHFY6OgnAMXL1/8pSxw==
        caps mds = "allow *"
        caps mon = "allow profile mgr"
        caps osd = "allow *"
[mgr.node2]
        key = AQBg25NZJ6jyHxAAf2GfBAG5tuNwf9YjkhhEWA==
        caps mds = "allow *"
        caps mon = "allow profile mgr"
        caps osd = "allow *"
[mgr.node3]
        key = AQBl25NZ7h6CJRAAaFiea7hiTrQNVoZysA7n/g==
        caps mds = "allow *"
        caps mon = "allow profile mgr"
        caps osd = "allow *"

V. Generate the monmap
Generate the monmap and add the 3 monitors:
monmaptool --create --add node1 192.168.100.131:6789 --add node2 192.168.100.132:6789 --add node3 192.168.100.133:6789 --fsid 116d4de8-fd14-491f-811f-c1bdd8fac141 monmap
monmaptool --print monmap
monmaptool: monmap file monmap
epoch 0
fsid 116d4de8-fd14-491f-811f-c1bdd8fac141
last_changed 2017-08-16 05:45:37.851899
created 2017-08-16 05:45:37.851899
0: 192.168.100.131:6789/0 mon.node1
1: 192.168.100.132:6789/0 mon.node2
2: 192.168.100.133:6789/0 mon.node3

VI. Distribute the configuration file, keyrings, and monmap

Distribute the configuration file, keyrings, and monmap generated in the previous steps to every machine. The mycluster.mgr.nodeX.keyring files are not needed yet, so we do not distribute them now (they are handed out in Section IX when the mgr daemons are added).
cp  mycluster.client.admin.keyring mycluster.client.bootstrap-osd.keyring mycluster.keyring mycluster.conf monmap /etc/ceph
scp mycluster.client.admin.keyring mycluster.client.bootstrap-osd.keyring mycluster.keyring mycluster.conf monmap node2:/etc/ceph
scp mycluster.client.admin.keyring mycluster.client.bootstrap-osd.keyring mycluster.keyring mycluster.conf monmap node3:/etc/ceph

VII. Create the cluster

1. Create the {mon data} directories

mkdir /var/lib/ceph/mon/mycluster-mon.node1
ssh node2 mkdir /var/lib/ceph/mon/mycluster-mon.node2
ssh node3 mkdir /var/lib/ceph/mon/mycluster-mon.node3
Note: in mycluster.conf we set {mon data} to /var/lib/ceph/mon/$cluster-$name rather than the default /var/lib/ceph/mon/$cluster-$id:
  $cluster-$name expands to mycluster-mon.node1(2,3);
  the default $cluster-$id would expand to mycluster-node1(2,3).
2. Initialize the monitors

ceph-mon --cluster mycluster --mkfs -i node1 --monmap /etc/ceph/monmap --keyring /etc/ceph/mycluster.keyring
ssh node2 ceph-mon --cluster mycluster --mkfs -i node2 --monmap /etc/ceph/monmap --keyring /etc/ceph/mycluster.keyring
ssh node3 ceph-mon --cluster mycluster --mkfs -i node3 --monmap /etc/ceph/monmap --keyring /etc/ceph/mycluster.keyring

Note: since mycluster.conf sets {mon data} to /var/lib/ceph/mon/$cluster-$name, it expands to /var/lib/ceph/mon/mycluster-mon.node1(2,3). ceph-mon uses --cluster mycluster to locate the configuration file mycluster.conf, parses {mon data} out of it, and initializes that directory.
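An optional sanity check, assuming the mkfs succeeded: each {mon data} directory should now be populated. The exact file list varies by Ceph version, but you should see at least a keyring and a store, for example:

ls /var/lib/ceph/mon/mycluster-mon.node1
keyring  kv_backend  store.db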
3. Touch the done files

touch /var/lib/ceph/mon/mycluster-mon.node1/done
ssh node2 touch /var/lib/ceph/mon/mycluster-mon.node2/done
ssh node3 touch /var/lib/ceph/mon/mycluster-mon.node3/done

4. Start the monitors

systemctl start ceph-mon@node1
ssh node2 systemctl start ceph-mon@node2
ssh node3 systemctl start ceph-mon@node3
5. Check the cluster status

ceph --cluster mycluster -s
  cluster:
    id:     116d4de8-fd14-491f-811f-c1bdd8fac141
    health: HEALTH_OK
  services:
    mon: 3 daemons, quorum node1,node2,node3
    mgr: no daemons active
    osd: 0 osds: 0 up, 0 in
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 bytes
    usage:   0 kB used, 0 kB / 0 kB avail
    pgs:
VIII. Add OSDs

Each machine has a /dev/sdb; we use these as OSDs.
1. Delete any existing partitions on the disks (one way to do this is sketched after the activate step below)

2. Prepare

ceph-disk prepare --cluster mycluster --cluster-uuid 116d4de8-fd14-491f-811f-c1bdd8fac141 --bluestore --block.db /dev/sdb --block.wal /dev/sdb /dev/sdb
ssh node2 ceph-disk prepare --cluster mycluster --cluster-uuid 116d4de8-fd14-491f-811f-c1bdd8fac141 --bluestore --block.db /dev/sdb --block.wal /dev/sdb /dev/sdb
ssh node3 ceph-disk prepare --cluster mycluster --cluster-uuid 116d4de8-fd14-491f-811f-c1bdd8fac141 /dev/sdb

Note: when preparing node3:/dev/sdb we did not pass --bluestore --block.db /dev/sdb --block.wal /dev/sdb; later we will see how it differs from the other two.

3. Activate

ceph-disk activate /dev/sdb1 --activate-key /etc/ceph/mycluster.client.bootstrap-osd.keyring
ssh node2 ceph-disk activate /dev/sdb1 --activate-key /etc/ceph/mycluster.client.bootstrap-osd.keyring
ssh node3 ceph-disk activate /dev/sdb1 --activate-key /etc/ceph/mycluster.client.bootstrap-osd.keyring
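For step 1, one common way to wipe the old partition tables is sgdisk --zap-all; a sketch, assuming the gdisk package is installed and /dev/sdb holds nothing you want to keep:

sgdisk --zap-all /dev/sdb
ssh node2 sgdisk --zap-all /dev/sdb
ssh node3 sgdisk --zap-all /dev/sdb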
Note: ceph-disk seems to have two problems:
  • As noted earlier, it ignores a custom {osd data} and forces the default /var/lib/ceph/osd/$cluster-$id.
  • It apparently cannot assign a chosen osd id to a disk; it only generates one itself. ceph-disk prepare does have an --osd-id option, but ceph-disk activate ignores it and generates its own id. When the two do not match, you get an error like:

# ceph-disk activate /dev/sdb1 --activate-key /etc/ceph/mycluster.client.bootstrap-osd.keyring
command_with_stdin: Error EEXIST: entity osd.0 exists but key does not match
mount_activate: Failed to activate
'['ceph', '--cluster', 'mycluster', '--name', 'client.bootstrap-osd', '--keyring', '/etc/ceph/mycluster.client.bootstrap-osd.keyring', '-i', '-', 'osd', 'new', u'ca8aac6a-b442-4b07-8fa6-62ac93b7cd29']' failed with status code 17

The '-i', '-' arguments show that it always generates the osd id itself.
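If you do hit this EEXIST error, one recovery path (my own workaround, not from the original walkthrough; it deletes the stale entity, so only use it when osd.0 holds no data) is to remove the mismatched auth entry and osd id, then re-run activate:

ceph --cluster mycluster auth del osd.0
ceph --cluster mycluster osd rm 0
ceph-disk activate /dev/sdb1 --activate-key /etc/ceph/mycluster.client.bootstrap-osd.keyring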
4. Examine the OSDs

During ceph-disk prepare, node1:/dev/sdb and node2:/dev/sdb were given the --bluestore --block.db /dev/sdb --block.wal /dev/sdb options; node3:/dev/sdb was not. Let us see how they differ.
4.1 node1

mount | grep sdb
/dev/sdb1 on /var/lib/ceph/osd/mycluster-0 type xfs (rw,noatime,seclabel,attr2,inode64,noquota)

ls /var/lib/ceph/osd/mycluster-0/
activate.monmap  block     block.db_uuid  block.wal       bluefs     fsid     kv_backend  mkfs_done  systemd  whoami
active           block.db  block_uuid     block.wal_uuid  ceph_fsid  keyring  magic       ready      type

ls -l /var/lib/ceph/osd/mycluster-0/block
lrwxrwxrwx. 1 ceph ceph 58 Aug 16 05:52 /var/lib/ceph/osd/mycluster-0/block -> /dev/disk/by-partuuid/a12dd642-b64c-4fef-b9e6-0b45cff40fa9
ls -l /dev/disk/by-partuuid/a12dd642-b64c-4fef-b9e6-0b45cff40fa9
lrwxrwxrwx. 1 root root 10 Aug 16 05:55 /dev/disk/by-partuuid/a12dd642-b64c-4fef-b9e6-0b45cff40fa9 -> ../../sdb2
blkid /dev/sdb2
/dev/sdb2: PARTLABEL="ceph block" PARTUUID="a12dd642-b64c-4fef-b9e6-0b45cff40fa9"
cat /var/lib/ceph/osd/mycluster-0/block_uuid
a12dd642-b64c-4fef-b9e6-0b45cff40fa9

ls -l /var/lib/ceph/osd/mycluster-0/block.db
lrwxrwxrwx. 1 ceph ceph 58 Aug 16 05:52 /var/lib/ceph/osd/mycluster-0/block.db -> /dev/disk/by-partuuid/1c107775-45e6-4b79-8a2f-1592f5cb03f2
ls -l /dev/disk/by-partuuid/1c107775-45e6-4b79-8a2f-1592f5cb03f2
lrwxrwxrwx. 1 root root 10 Aug 16 05:55 /dev/disk/by-partuuid/1c107775-45e6-4b79-8a2f-1592f5cb03f2 -> ../../sdb3
blkid /dev/sdb3
/dev/sdb3: PARTLABEL="ceph block.db" PARTUUID="1c107775-45e6-4b79-8a2f-1592f5cb03f2"
cat /var/lib/ceph/osd/mycluster-0/block.db_uuid
1c107775-45e6-4b79-8a2f-1592f5cb03f2

ls -l /var/lib/ceph/osd/mycluster-0/block.wal
lrwxrwxrwx. 1 ceph ceph 58 Aug 16 05:52 /var/lib/ceph/osd/mycluster-0/block.wal -> /dev/disk/by-partuuid/76055101-b892-4da9-b80a-c1920f24183f
ls -l /dev/disk/by-partuuid/76055101-b892-4da9-b80a-c1920f24183f
lrwxrwxrwx. 1 root root 10 Aug 16 05:55 /dev/disk/by-partuuid/76055101-b892-4da9-b80a-c1920f24183f -> ../../sdb4
blkid /dev/sdb4
/dev/sdb4: PARTLABEL="ceph block.wal" PARTUUID="76055101-b892-4da9-b80a-c1920f24183f"
cat /var/lib/ceph/osd/mycluster-0/block.wal_uuid
76055101-b892-4da9-b80a-c1920f24183f
So on node1 (and node2), /dev/sdb is split into 4 partitions:
  • /dev/sdb1: metadata
  • /dev/sdb2: the main block device
  • /dev/sdb3: db
  • /dev/sdb4: wal
For details, see: ceph-disk prepare --help
4.2 node3

mount | grep sdb
/dev/sdb1 on /var/lib/ceph/osd/mycluster-2 type xfs (rw,noatime,seclabel,attr2,inode64,noquota)

ls /var/lib/ceph/osd/mycluster-2
activate.monmap  active  block  block_uuid  bluefs  ceph_fsid  fsid  keyring  kv_backend  magic  mkfs_done  ready  systemd  type  whoami

ls -l /var/lib/ceph/osd/mycluster-2/block
lrwxrwxrwx. 1 ceph ceph 58 Aug 16 05:54 /var/lib/ceph/osd/mycluster-2/block -> /dev/disk/by-partuuid/0a70b661-43f5-4562-83e0-cbe6bdbd31fb
ls -l /dev/disk/by-partuuid/0a70b661-43f5-4562-83e0-cbe6bdbd31fb
lrwxrwxrwx. 1 root root 10 Aug 16 05:56 /dev/disk/by-partuuid/0a70b661-43f5-4562-83e0-cbe6bdbd31fb -> ../../sdb2
blkid /dev/sdb2
/dev/sdb2: PARTLABEL="ceph block" PARTUUID="0a70b661-43f5-4562-83e0-cbe6bdbd31fb"
cat /var/lib/ceph/osd/mycluster-2/block_uuid
0a70b661-43f5-4562-83e0-cbe6bdbd31fb
So on node3, /dev/sdb is split into only 2 partitions:
  • /dev/sdb1: metadata
  • /dev/sdb2: the main block device; the db and wal also live on this partition.
For details, see: ceph-disk prepare --help
5. Check the cluster status

ceph --cluster mycluster -s
  cluster:
    id:     116d4de8-fd14-491f-811f-c1bdd8fac141
    health: HEALTH_WARN
            no active mgr
  services:
    mon: 3 daemons, quorum node1,node2,node3
    mgr: no daemons active
    osd: 3 osds: 3 up, 3 in
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 bytes
    usage:   0 kB used, 0 kB / 0 kB avail
    pgs:

Because no mgr has been added yet, the cluster is in the WARN state.
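To see exactly which check is failing, ask for health detail; here it would point at the missing mgr:

ceph --cluster mycluster health detail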
IX. Add mgr daemons

1. Create the {mgr data} directories

mkdir /var/lib/ceph/mgr/mycluster-mgr.node1
ssh node2 mkdir /var/lib/ceph/mgr/mycluster-mgr.node2
ssh node3 mkdir /var/lib/ceph/mgr/mycluster-mgr.node3
Note: as with {mon data}, mycluster.conf sets {mgr data} to /var/lib/ceph/mgr/$cluster-$name rather than the default /var/lib/ceph/mgr/$cluster-$id.
2. Distribute the mgr keyrings

cp  mycluster.mgr.node1.keyring /var/lib/ceph/mgr/mycluster-mgr.node1/keyring
scp mycluster.mgr.node2.keyring node2:/var/lib/ceph/mgr/mycluster-mgr.node2/keyring
scp mycluster.mgr.node3.keyring node3:/var/lib/ceph/mgr/mycluster-mgr.node3/keyring

3. Start the mgr daemons

systemctl start ceph-mgr@node1
ssh node2 systemctl start ceph-mgr@node2
ssh node3 systemctl start ceph-mgr@node3

4. Check the cluster status

ceph --cluster mycluster -s
  cluster:
    id:     116d4de8-fd14-491f-811f-c1bdd8fac141
    health: HEALTH_OK
  services:
    mon: 3 daemons, quorum node1,node2,node3
    mgr: node1(active), standbys: node3, node2
    osd: 3 osds: 3 up, 3 in
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 bytes
    usage:   5158 MB used, 113 GB / 118 GB avail
    pgs:
With the mgr daemons added, the cluster is back in the OK state.
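As a final smoke test (not part of the original post; the pool name test and pg count 128 are arbitrary), write and read back an object:

ceph --cluster mycluster osd pool create test 128
echo hello > /tmp/obj.txt
rados --cluster mycluster -p test put obj1 /tmp/obj.txt
rados --cluster mycluster -p test ls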
Reply from the original poster, posted 2022-7-20 13:41:
Deploy the Ceph mon service

Install the ceph-mon package (run on all nodes):

yum install -y ceph-mon
Initialize the mon service (run on ceph01)

Generate a uuid:

uuidgen
> 9bf24809-220b-4910-b384-c1f06ea80728

Create the Ceph configuration file:
cat >> /etc/ceph/ceph.conf <<EOF
[global]
fsid = 9bf24809-220b-4910-b384-c1f06ea80728
mon_initial_members = ceph01,ceph02,ceph03
mon_host = 10.40.65.156,10.40.65.175,10.40.65.129
public_network = 10.40.65.0/24
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd_journal_size = 1024
osd_pool_default_size = 3
osd_pool_default_min_size = 2
osd_pool_default_pg_num = 64
osd_pool_default_pgp_num = 64
osd_crush_chooseleaf_type = 1
EOF
Create the cluster monitor key:

ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
Create the client.admin and client.bootstrap-osd keys and add them to the cluster keyring:

ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *'
ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring

ceph-authtool --create-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring --gen-key -n client.bootstrap-osd --cap mon 'profile bootstrap-osd' --cap mgr 'allow r'
ceph-authtool /tmp/ceph.mon.keyring --import-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
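Optionally, confirm that both keys were imported; ceph-authtool can list the entities in a keyring:

ceph-authtool /tmp/ceph.mon.keyring --list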
Generate the monitor map from the hostnames, host IP addresses, and FSID:

monmaptool --create --add ceph01 10.40.65.156 --add ceph02 10.40.65.175 --add ceph03 10.40.65.129 --fsid 9bf24809-220b-4910-b384-c1f06ea80728 /tmp/monmap
Initialize and start the monitor service:

sudo -u ceph mkdir /var/lib/ceph/mon/ceph-ceph01
chown ceph.ceph -R /var/lib/ceph /etc/ceph /tmp/ceph.mon.keyring /tmp/monmap
sudo -u ceph ceph-mon --mkfs -i ceph01 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
ls /var/lib/ceph/mon/ceph-ceph01/

systemctl start ceph-mon@ceph01
systemctl enable ceph-mon@ceph01
systemctl status ceph-mon@ceph01
Sync the configuration file, keys, and monmap to the other nodes (run on ceph01)

Copy ceph.client.admin.keyring, the client.bootstrap-osd key, ceph.mon.keyring, the monitor map, and ceph.conf to the other 2 nodes:

scp /etc/ceph/ceph.client.admin.keyring root@ceph02:/etc/ceph/
scp /etc/ceph/ceph.client.admin.keyring root@ceph03:/etc/ceph/

scp /var/lib/ceph/bootstrap-osd/ceph.keyring root@ceph02:/var/lib/ceph/bootstrap-osd/
scp /var/lib/ceph/bootstrap-osd/ceph.keyring root@ceph03:/var/lib/ceph/bootstrap-osd/

scp /tmp/ceph.mon.keyring root@ceph02:/tmp/
scp /tmp/ceph.mon.keyring root@ceph03:/tmp/

scp /tmp/monmap root@ceph02:/tmp/
scp /tmp/monmap root@ceph03:/tmp/

scp /etc/ceph/ceph.conf root@ceph02:/etc/ceph/
scp /etc/ceph/ceph.conf root@ceph03:/etc/ceph/
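The same distribution can be written as a loop; a sketch assuming root ssh access to ceph02 and ceph03 as above:

for host in ceph02 ceph03; do
  scp /etc/ceph/ceph.client.admin.keyring /etc/ceph/ceph.conf root@$host:/etc/ceph/
  scp /var/lib/ceph/bootstrap-osd/ceph.keyring root@$host:/var/lib/ceph/bootstrap-osd/
  scp /tmp/ceph.mon.keyring /tmp/monmap root@$host:/tmp/
done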
Start the monitor service on the other nodes (run on ceph02):

sudo -u ceph mkdir /var/lib/ceph/mon/ceph-ceph02
chown ceph.ceph -R /var/lib/ceph /etc/ceph /tmp/ceph.mon.keyring /tmp/monmap
sudo -u ceph ceph-mon --mkfs -i ceph02 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
ls /var/lib/ceph/mon/ceph-ceph02/

systemctl start ceph-mon@ceph02
systemctl enable ceph-mon@ceph02
systemctl status ceph-mon@ceph02
Start the monitor service on the other nodes (run on ceph03):

sudo -u ceph mkdir /var/lib/ceph/mon/ceph-ceph03
chown ceph.ceph -R /var/lib/ceph /etc/ceph /tmp/ceph.mon.keyring /tmp/monmap
sudo -u ceph ceph-mon --mkfs -i ceph03 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
ls /var/lib/ceph/mon/ceph-ceph03/

systemctl start ceph-mon@ceph03
systemctl enable ceph-mon@ceph03
systemctl status ceph-mon@ceph03
Check the current cluster status (run on any node)

Query the cluster status with ceph -s; the 3 mon daemons now appear under services:

ceph -s
> cluster:
>   id:     cf3862c5-f8f6-423e-a03e-beb40fecb74a
>   health: HEALTH_OK
>
> services:
>   mon: 3 daemons, quorum ceph03,ceph02,ceph01 (age 12d)
>   mgr:
>   osd:
>
> data:
>   pools:
>   objects:
>   usage:
>   pgs:
Deploy the Ceph osd service (automated with ceph-volume)

Install the ceph-osd package (run on all nodes):

yum install -y ceph-osd

Initialize the osd service (run on all nodes)

Identify the disks with fdisk or a similar tool, then create the osd services automatically with ceph-volume:

ceph-volume lvm create --data /dev/sda
ceph-volume lvm create --data /dev/sdb
ceph-volume lvm create --data /dev/sdc
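ceph-volume lvm create combines the prepare and activate phases: it turns the disk into an LVM logical volume, records the osd metadata in LV tags, and enables a ceph-osd@N systemd unit. To review what was created on a node:

ceph-volume lvm list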
Check the current cluster status (run on any node)

Query the cluster with ceph osd tree; all osd services are up:

ceph osd tree
> ID CLASS WEIGHT   TYPE NAME       STATUS REWEIGHT PRI-AFF
> -1       16.36908 root default
> -3        5.45636     host ceph01
>  0   hdd  1.81879         osd.0       up  1.00000 1.00000
>  1   hdd  1.81879         osd.1       up  1.00000 1.00000
>  2   hdd  1.81879         osd.2       up  1.00000 1.00000
> -5        5.45636     host ceph02
>  3   hdd  1.81879         osd.3       up  1.00000 1.00000
>  4   hdd  1.81879         osd.4       up  1.00000 1.00000
>  5   hdd  1.81879         osd.5       up  1.00000 1.00000
> -7        5.45636     host ceph03
>  6   hdd  1.81879         osd.6       up  1.00000 1.00000
>  7   hdd  1.81879         osd.7       up  1.00000 1.00000
>  8   hdd  1.81879         osd.8       up  1.00000 1.00000
Deploy the Ceph mgr service and enable the Dashboard

Install the ceph-mgr package (run on all nodes):

yum install -y ceph-mgr

Initialize and start the primary MGR service (run on ceph01):

mkdir -p /var/lib/ceph/mgr/ceph-ceph01
chown ceph.ceph -R /var/lib/ceph
ceph-authtool --create-keyring /etc/ceph/ceph.mgr.ceph01.keyring --gen-key -n mgr.ceph01 --cap mon 'allow profile mgr' --cap osd 'allow *' --cap mds 'allow *'
ceph auth import -i /etc/ceph/ceph.mgr.ceph01.keyring
ceph auth get-or-create mgr.ceph01 -o /var/lib/ceph/mgr/ceph-ceph01/keyring

systemctl start ceph-mgr@ceph01
systemctl enable ceph-mgr@ceph01
systemctl status ceph-mgr@ceph01
Initialize and start a standby MGR service (run on ceph02):

mkdir -p /var/lib/ceph/mgr/ceph-ceph02
chown ceph.ceph -R /var/lib/ceph
ceph-authtool --create-keyring /etc/ceph/ceph.mgr.ceph02.keyring --gen-key -n mgr.ceph02 --cap mon 'allow profile mgr' --cap osd 'allow *' --cap mds 'allow *'
ceph auth import -i /etc/ceph/ceph.mgr.ceph02.keyring
ceph auth get-or-create mgr.ceph02 -o /var/lib/ceph/mgr/ceph-ceph02/keyring

systemctl start ceph-mgr@ceph02
systemctl enable ceph-mgr@ceph02
systemctl status ceph-mgr@ceph02
Initialize and start a standby MGR service (run on ceph03):

mkdir -p /var/lib/ceph/mgr/ceph-ceph03
chown ceph.ceph -R /var/lib/ceph
ceph-authtool --create-keyring /etc/ceph/ceph.mgr.ceph03.keyring --gen-key -n mgr.ceph03 --cap mon 'allow profile mgr' --cap osd 'allow *' --cap mds 'allow *'
ceph auth import -i /etc/ceph/ceph.mgr.ceph03.keyring
ceph auth get-or-create mgr.ceph03 -o /var/lib/ceph/mgr/ceph-ceph03/keyring

systemctl start ceph-mgr@ceph03
systemctl enable ceph-mgr@ceph03
systemctl status ceph-mgr@ceph03
Check the current cluster status (run on any node)

Query the cluster status with ceph -s; the 3 mgr daemons now appear under services:

ceph -s
> cluster:
>   id:     cf3862c5-f8f6-423e-a03e-beb40fecb74a
>   health: HEALTH_OK
>
> services:
>   mon: 3 daemons, quorum ceph03,ceph02,ceph01 (age 12d)
>   mgr: ceph01(active, since 3w), standbys: ceph02, ceph03
>   osd:
>
> data:
>   pools:
>   objects:
>   usage:
>   pgs:
Enable Dashboard access (run on any node)

Enable the mgr dashboard module:

ceph mgr module enable dashboard

Generate and install a self-signed certificate:

ceph dashboard create-self-signed-cert

Configure the dashboard:

ceph config set mgr mgr/dashboard/server_addr 10.40.65.148
ceph config set mgr mgr/dashboard/server_port 8080
ceph config set mgr mgr/dashboard/ssl_server_port 8443

Create a dashboard login user and password:

echo '123456' > password.txt
ceph dashboard ac-user-create admin administrator -i password.txt

Check how the service is exposed:

ceph mgr services

Access the Ceph Dashboard in a browser with username/password admin/123456:

https://10.40.65.148:8443
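A quick reachability check of the dashboard from any host; -k tells curl to accept the self-signed certificate generated above:

curl -k https://10.40.65.148:8443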