Setting Up a Ceph Distributed Storage Cluster

Introduction to Ceph

Ceph Basics

Ceph is a reliable, self-rebalancing, self-healing distributed storage system. By use case it is commonly divided into three services: object storage (RGW), block storage (RBD), and a file system (CephFS). Ceph's advantage over other storage systems is that it is not just storage: it also exploits the compute power of the storage nodes. The placement of every piece of data is computed rather than looked up, which keeps data evenly distributed. Thanks to its design, including the CRUSH algorithm and hashing, Ceph has no traditional single point of failure, and performance does not degrade as the cluster scales out.
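The idea of computed placement can be pictured with a toy sketch (this is NOT the real CRUSH algorithm, just an illustration of the principle): hash the object name to pick an OSD, so the same name always maps to the same location and no central index is needed.

```shell
# Toy illustration only, NOT real CRUSH:
# deterministically map an object name to one of 3 OSDs by hashing it.
obj="rbd_data.1234"
hash=$(printf '%s' "$obj" | cksum | cut -d' ' -f1)  # CRC checksum, stable across runs
osd=$(( hash % 3 ))
echo "object '$obj' -> osd.$osd"
```

Because the mapping is a pure function of the name and the cluster layout, any client can compute it independently.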

Ceph Core Components

Ceph's core components are the Ceph OSD, the Ceph Monitor, and the Ceph MDS.

Ceph OSD: OSD stands for Object Storage Device. Its main jobs are storing, replicating, rebalancing, and recovering data; it also exchanges heartbeats with other OSDs and reports state changes to the Ceph Monitors. Normally one physical disk maps to one OSD, with the OSD managing that disk's storage, although a single partition can also back an OSD.

A Ceph OSD is built from a physical disk drive, a Linux filesystem, and the Ceph OSD service. For the Ceph OSD daemon, the underlying Linux filesystem is what provides its extensibility. Several filesystems are commonly used, such as BTRFS, XFS, and ext4. BTRFS has many attractive features but has not yet reached the stability required for production, so XFS is generally recommended.

Alongside the OSD comes the concept of a journal disk. Writes to a Ceph cluster land in the journal first, and at a fixed interval (for example, every 5 seconds) the journal is flushed to the filesystem. To keep read/write latency low, the journal is usually placed on an SSD, typically 10 GB or more (larger is better). The journal exists because it lets a Ceph OSD commit small writes quickly: a random write is first written sequentially to the journal and only later flushed to the filesystem, which gives the filesystem enough time to merge writes to disk. Using an SSD journal for OSDs is an effective buffer against bursty workloads.
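The write path described above can be sketched with two plain files standing in for the journal and the backing filesystem (a toy model, not Ceph code): writes are appended sequentially to the journal, and a periodic flush drains them into the data file in one batch.

```shell
# Toy model of the journal write path: journal first, then flush.
journal=$(mktemp) && data=$(mktemp)

# Incoming small/random writes are appended sequentially to the journal.
echo "write-1" >> "$journal"
echo "write-2" >> "$journal"

# Every few seconds the journal is drained into the filesystem
# in one batch, then truncated.
cat "$journal" >> "$data"
: > "$journal"

wc -l < "$data"     # both writes now live in the data file
wc -c < "$journal"  # journal is empty again
```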

Ceph Monitor: as the name suggests, this is a monitor. It watches over the Ceph cluster, maintains the cluster's health state, and keeps the various maps of the cluster, such as the OSD Map, Monitor Map, PG Map, and CRUSH Map. Collectively these are called the Cluster Map, a key RADOS data structure that records all cluster members, their relationships and attributes, and governs data distribution. For example, when a client wants to store data in the cluster, it first obtains the latest maps from a Monitor, then computes the data's final location from the maps and the object ID.

Ceph MDS: short for Ceph Metadata Server, it stores the metadata for the file system service (CephFS); object storage and block storage do not need it.

The various maps can be inspected with commands of the form: ceph osd dump, ceph mon dump, ceph pg dump

Planning

Hostname IP address Roles Configuration
ceph_node1 192.168.111.11/24 control node, mon, mgr, osd 3 × 10 GB disks
ceph_node2 192.168.111.12/24 mon, mgr, osd 3 × 10 GB disks
ceph_node3 192.168.111.13/24 mon, mgr, osd 3 × 10 GB disks
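Since the three nodes differ only in their hostname suffix and last IP octet, the repetitive per-node commands in the sections that follow can be driven from one small loop. A sketch (the node list simply mirrors the plan above):

```shell
# Node plan from the table above: hostname suffix -> last IP octet.
nodes="1:11 2:12 3:13"

for pair in $nodes; do
  n=${pair%%:*}        # node suffix: 1, 2, 3
  octet=${pair##*:}    # host part of the IP
  echo "ceph_node$n -> 192.168.111.$octet/24"
done
```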

System Initialization

Disable the firewall

systemctl stop firewalld && systemctl disable firewalld

Disable SELinux

Run the following commands on each of the three hosts to disable SELinux

sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
grep SELINUX=disabled /etc/selinux/config
setenforce 0
getenforce

Configure the network

#ceph_node1
nmcli connection delete ens33
nmcli connection add con-name ens33 ifname ens33 autoconnect yes type ethernet ip4 192.168.111.11/24 gw4 192.168.111.254 ipv4.dns 223.5.5.5
nmcli conn up ens33
#ceph_node2
nmcli connection delete ens33
nmcli connection add con-name ens33 ifname ens33 autoconnect yes type ethernet ip4 192.168.111.12/24 gw4 192.168.111.254 ipv4.dns 223.5.5.5
nmcli conn up ens33
#ceph_node3
nmcli connection delete ens33
nmcli connection add con-name ens33 ifname ens33 autoconnect yes type ethernet ip4 192.168.111.13/24 gw4 192.168.111.254 ipv4.dns 223.5.5.5
nmcli conn up ens33

Set the hostnames

#ceph_node1
hostnamectl set-hostname ceph_node1
#ceph_node2
hostnamectl set-hostname ceph_node2
#ceph_node3
hostnamectl set-hostname ceph_node3

Configure /etc/hosts

Edit /etc/hosts, then copy it to the other two hosts

#ceph_node1
vi /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.111.11 ceph_node1
192.168.111.12 ceph_node2
192.168.111.13 ceph_node3

scp /etc/hosts ceph_node2:/etc/

scp /etc/hosts ceph_node3:/etc/
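The three entries can also be appended with a heredoc instead of interactive vi. The sketch below writes to a temporary file so it can be checked safely; on the real node the target would be /etc/hosts.

```shell
# Demo against a temp file; replace "$hosts" with /etc/hosts on the node.
hosts=$(mktemp)
cat >> "$hosts" <<'EOF'
192.168.111.11 ceph_node1
192.168.111.12 ceph_node2
192.168.111.13 ceph_node3
EOF
grep -c 'ceph_node' "$hosts"   # expect one line per node
```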

Set up SSH mutual trust

Generate a key pair on ceph_node1, leaving the passphrase empty

[root@localhost ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:pU9BBVdNpHdTAXToafsymcxRpbLdCTpX/ivZZ5U18lQ root@ceph_node1
The key's randomart image is:
+---[RSA 2048]----+
| oo+++*=|
| . . ...E|
| o . oo=|
| o ..*.*+|
| S . o+X.=|
| o o.+.=o|
| . = B o|
| X o+|
| +oo|
+----[SHA256]-----+

Copy the public key to ceph_node1, ceph_node2, and ceph_node3

ssh-copy-id -i /root/.ssh/id_rsa.pub ceph_node1
ssh-copy-id -i /root/.ssh/id_rsa.pub ceph_node2
ssh-copy-id -i /root/.ssh/id_rsa.pub ceph_node3

Verify that passwordless login works

ssh ceph_node1

ssh ceph_node2

ssh ceph_node3

Configure the time server

Configure ceph_node1 as the NTP server

Install the chrony time service package

yum -y install chrony

vi /etc/chrony.conf
server ntp1.aliyun.com iburst
server ntp2.aliyun.com iburst
server ntp3.aliyun.com iburst
allow 192.168.111.0/24
local stratum 10

systemctl start chronyd && systemctl enable chronyd

Configure ceph_node2 and ceph_node3 as clients, then restart chronyd for the change to take effect

vi /etc/chrony.conf
server ceph_node1 iburst

Check the time synchronization sources

chronyc sources -v

Step the system clock manually

chronyc -a makestep

Configure yum repositories

cd /etc/yum.repos.d/

mkdir bak

mv *.repo bak

Base repository

curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo

EPEL repository

curl -o /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo

Ceph repository

cat << EOF |tee /etc/yum.repos.d/ceph.repo
[Ceph]
name=Ceph packages for \$basearch
baseurl=https://mirrors.aliyun.com/ceph/rpm-nautilus/el7/\$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
priority=1
[Ceph-noarch]
name=Ceph noarch packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-nautilus/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
priority=1
[Ceph-source]
name=Ceph source packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-nautilus/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
EOF
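Note that the heredoc above is unquoted, so every `$basearch` must be escaped as `\$basearch` or the shell will expand it to an empty string before the file is written. An alternative that avoids escaping entirely is quoting the delimiter (`<<'EOF'`), which suppresses all expansion. A small demonstration against a temp file:

```shell
# With a quoted delimiter, $basearch reaches the file literally, no escaping needed.
repo=$(mktemp)
cat > "$repo" <<'EOF'
[Ceph]
baseurl=https://mirrors.aliyun.com/ceph/rpm-nautilus/el7/$basearch
EOF
grep -Fc '$basearch' "$repo"   # the literal string survived
```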

Rebuild the yum cache

yum clean all

yum makecache fast

yum list |grep ceph

Deploying Ceph

Deploy the control node

Install ceph-deploy and ceph on ceph_node1

yum -y install ceph-deploy ceph

Create a cluster directory on ceph_node1; all subsequent commands are run in this directory

[root@ceph_node1 ~]# mkdir /cluster
[root@ceph_node1 ~]# cd /cluster

Add ceph_node1, ceph_node2, and ceph_node3 to the cluster

[root@ceph_node1 cluster]# ceph-deploy new ceph_node1 ceph_node2 ceph_node3
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy new ceph_node1 ceph_node2 ceph_node3
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] func : <function new at 0x7ffbbbc21de8>
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7ffbbb39c4d0>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] ssh_copykey : True
[ceph_deploy.cli][INFO ] mon : ['ceph_node1', 'ceph_node2', 'ceph_node3']
[ceph_deploy.cli][INFO ] public_network : None
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] cluster_network : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] fsid : None
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][INFO ] making sure passwordless SSH succeeds
[ceph_node1][DEBUG ] connected to host: ceph_node1
[ceph_node1][DEBUG ] detect platform information from remote host
[ceph_node1][DEBUG ] detect machine type
[ceph_node1][DEBUG ] find the location of an executable
[ceph_node1][INFO ] Running command: /usr/sbin/ip link show
[ceph_node1][INFO ] Running command: /usr/sbin/ip addr show
[ceph_node1][DEBUG ] IP addresses found: [u'192.168.111.11']
[ceph_deploy.new][DEBUG ] Resolving host ceph_node1
[ceph_deploy.new][DEBUG ] Monitor ceph_node1 at 192.168.111.11
[ceph_deploy.new][INFO ] making sure passwordless SSH succeeds
[ceph_node2][DEBUG ] connected to host: ceph_node1
[ceph_node2][INFO ] Running command: ssh -CT -o BatchMode=yes ceph_node2
[ceph_node2][DEBUG ] connected to host: ceph_node2
[ceph_node2][DEBUG ] detect platform information from remote host
[ceph_node2][DEBUG ] detect machine type
[ceph_node2][DEBUG ] find the location of an executable
[ceph_node2][INFO ] Running command: /usr/sbin/ip link show
[ceph_node2][INFO ] Running command: /usr/sbin/ip addr show
[ceph_node2][DEBUG ] IP addresses found: [u'192.168.111.12']
[ceph_deploy.new][DEBUG ] Resolving host ceph_node2
[ceph_deploy.new][DEBUG ] Monitor ceph_node2 at 192.168.111.12
[ceph_deploy.new][INFO ] making sure passwordless SSH succeeds
[ceph_node3][DEBUG ] connected to host: ceph_node1
[ceph_node3][INFO ] Running command: ssh -CT -o BatchMode=yes ceph_node3
[ceph_node3][DEBUG ] connected to host: ceph_node3
[ceph_node3][DEBUG ] detect platform information from remote host
[ceph_node3][DEBUG ] detect machine type
[ceph_node3][DEBUG ] find the location of an executable
[ceph_node3][INFO ] Running command: /usr/sbin/ip link show
[ceph_node3][INFO ] Running command: /usr/sbin/ip addr show
[ceph_node3][DEBUG ] IP addresses found: [u'192.168.111.13']
[ceph_deploy.new][DEBUG ] Resolving host ceph_node3
[ceph_deploy.new][DEBUG ] Monitor ceph_node3 at 192.168.111.13
[ceph_deploy.new][DEBUG ] Monitor initial members are ['ceph_node1', 'ceph_node2', 'ceph_node3']
[ceph_deploy.new][DEBUG ] Monitor addrs are ['192.168.111.11', '192.168.111.12', '192.168.111.13']
[ceph_deploy.new][DEBUG ] Creating a random mon key...
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...

If the output contains no errors, the deployment succeeded.

Check the Ceph version

[root@ceph_node1 cluster]# ceph -v
ceph version 14.2.22 (ca74598065096e6fcbd8433c8779a2be0c889351) nautilus (stable)

Deploy the mon role

[root@ceph_node1 cluster]# ceph-deploy mon create-initial
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy mon create-initial
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : create-initial
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f8c09aa2e60>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] func : <function mon at 0x7f8c09af7410>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] keyrings : None
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts ceph_node1 ceph_node2 ceph_node3
[ceph_deploy.mon][DEBUG ] detecting platform for host ceph_node1 ...
[ceph_node1][DEBUG ] connected to host: ceph_node1
[ceph_node1][DEBUG ] detect platform information from remote host
[ceph_node1][DEBUG ] detect machine type
[ceph_node1][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO ] distro info: CentOS Linux 7.9.2009 Core
[ceph_node1][DEBUG ] determining if provided host has same hostname in remote
[ceph_node1][DEBUG ] get remote short hostname
[ceph_node1][DEBUG ] deploying mon to ceph_node1
[ceph_node1][DEBUG ] get remote short hostname
[ceph_node1][DEBUG ] remote hostname: ceph_node1
[ceph_node1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_node1][DEBUG ] create the mon path if it does not exist
[ceph_node1][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-ceph_node1/done
[ceph_node1][DEBUG ] create a done file to avoid re-doing the mon deployment
[ceph_node1][DEBUG ] create the init path if it does not exist
[ceph_node1][INFO ] Running command: systemctl enable ceph.target
[ceph_node1][INFO ] Running command: systemctl enable ceph-mon@ceph_node1
[ceph_node1][INFO ] Running command: systemctl start ceph-mon@ceph_node1
[ceph_node1][INFO ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph_node1.asok mon_status
[ceph_node1][DEBUG ] ********************************************************************************
[ceph_node1][DEBUG ] status for monitor: mon.ceph_node1
[ceph_node1][DEBUG ] {
[ceph_node1][DEBUG ] "election_epoch": 6,
[ceph_node1][DEBUG ] "extra_probe_peers": [
[ceph_node1][DEBUG ] {
[ceph_node1][DEBUG ] "addrvec": [
[ceph_node1][DEBUG ] {
[ceph_node1][DEBUG ] "addr": "192.168.111.12:3300",
[ceph_node1][DEBUG ] "nonce": 0,
[ceph_node1][DEBUG ] "type": "v2"
[ceph_node1][DEBUG ] },
[ceph_node1][DEBUG ] {
[ceph_node1][DEBUG ] "addr": "192.168.111.12:6789",
[ceph_node1][DEBUG ] "nonce": 0,
[ceph_node1][DEBUG ] "type": "v1"
[ceph_node1][DEBUG ] }
[ceph_node1][DEBUG ] ]
[ceph_node1][DEBUG ] },
[ceph_node1][DEBUG ] {
[ceph_node1][DEBUG ] "addrvec": [
[ceph_node1][DEBUG ] {
[ceph_node1][DEBUG ] "addr": "192.168.111.13:3300",
[ceph_node1][DEBUG ] "nonce": 0,
[ceph_node1][DEBUG ] "type": "v2"
[ceph_node1][DEBUG ] },
[ceph_node1][DEBUG ] {
[ceph_node1][DEBUG ] "addr": "192.168.111.13:6789",
[ceph_node1][DEBUG ] "nonce": 0,
[ceph_node1][DEBUG ] "type": "v1"
[ceph_node1][DEBUG ] }
[ceph_node1][DEBUG ] ]
[ceph_node1][DEBUG ] }
[ceph_node1][DEBUG ] ],
[ceph_node1][DEBUG ] "feature_map": {
[ceph_node1][DEBUG ] "mon": [
[ceph_node1][DEBUG ] {
[ceph_node1][DEBUG ] "features": "0x3ffddff8ffecffff",
[ceph_node1][DEBUG ] "num": 1,
[ceph_node1][DEBUG ] "release": "luminous"
[ceph_node1][DEBUG ] }
[ceph_node1][DEBUG ] ]
[ceph_node1][DEBUG ] },
[ceph_node1][DEBUG ] "features": {
[ceph_node1][DEBUG ] "quorum_con": "4611087854035861503",
[ceph_node1][DEBUG ] "quorum_mon": [
[ceph_node1][DEBUG ] "kraken",
[ceph_node1][DEBUG ] "luminous",
[ceph_node1][DEBUG ] "mimic",
[ceph_node1][DEBUG ] "osdmap-prune",
[ceph_node1][DEBUG ] "nautilus"
[ceph_node1][DEBUG ] ],
[ceph_node1][DEBUG ] "required_con": "2449958747315912708",
[ceph_node1][DEBUG ] "required_mon": [
[ceph_node1][DEBUG ] "kraken",
[ceph_node1][DEBUG ] "luminous",
[ceph_node1][DEBUG ] "mimic",
[ceph_node1][DEBUG ] "osdmap-prune",
[ceph_node1][DEBUG ] "nautilus"
[ceph_node1][DEBUG ] ]
[ceph_node1][DEBUG ] },
[ceph_node1][DEBUG ] "monmap": {
[ceph_node1][DEBUG ] "created": "2022-02-18 13:54:10.100119",
[ceph_node1][DEBUG ] "epoch": 1,
[ceph_node1][DEBUG ] "features": {
[ceph_node1][DEBUG ] "optional": [],
[ceph_node1][DEBUG ] "persistent": [
[ceph_node1][DEBUG ] "kraken",
[ceph_node1][DEBUG ] "luminous",
[ceph_node1][DEBUG ] "mimic",
[ceph_node1][DEBUG ] "osdmap-prune",
[ceph_node1][DEBUG ] "nautilus"
[ceph_node1][DEBUG ] ]
[ceph_node1][DEBUG ] },
[ceph_node1][DEBUG ] "fsid": "f7be9981-1e57-4165-989e-3c7206331a20",
[ceph_node1][DEBUG ] "min_mon_release": 14,
[ceph_node1][DEBUG ] "min_mon_release_name": "nautilus",
[ceph_node1][DEBUG ] "modified": "2022-02-18 13:54:10.100119",
[ceph_node1][DEBUG ] "mons": [
[ceph_node1][DEBUG ] {
[ceph_node1][DEBUG ] "addr": "192.168.111.11:6789/0",
[ceph_node1][DEBUG ] "name": "ceph_node1",
[ceph_node1][DEBUG ] "public_addr": "192.168.111.11:6789/0",
[ceph_node1][DEBUG ] "public_addrs": {
[ceph_node1][DEBUG ] "addrvec": [
[ceph_node1][DEBUG ] {
[ceph_node1][DEBUG ] "addr": "192.168.111.11:3300",
[ceph_node1][DEBUG ] "nonce": 0,
[ceph_node1][DEBUG ] "type": "v2"
[ceph_node1][DEBUG ] },
[ceph_node1][DEBUG ] {
[ceph_node1][DEBUG ] "addr": "192.168.111.11:6789",
[ceph_node1][DEBUG ] "nonce": 0,
[ceph_node1][DEBUG ] "type": "v1"
[ceph_node1][DEBUG ] }
[ceph_node1][DEBUG ] ]
[ceph_node1][DEBUG ] },
[ceph_node1][DEBUG ] "rank": 0
[ceph_node1][DEBUG ] },
[ceph_node1][DEBUG ] {
[ceph_node1][DEBUG ] "addr": "192.168.111.12:6789/0",
[ceph_node1][DEBUG ] "name": "ceph_node2",
[ceph_node1][DEBUG ] "public_addr": "192.168.111.12:6789/0",
[ceph_node1][DEBUG ] "public_addrs": {
[ceph_node1][DEBUG ] "addrvec": [
[ceph_node1][DEBUG ] {
[ceph_node1][DEBUG ] "addr": "192.168.111.12:3300",
[ceph_node1][DEBUG ] "nonce": 0,
[ceph_node1][DEBUG ] "type": "v2"
[ceph_node1][DEBUG ] },
[ceph_node1][DEBUG ] {
[ceph_node1][DEBUG ] "addr": "192.168.111.12:6789",
[ceph_node1][DEBUG ] "nonce": 0,
[ceph_node1][DEBUG ] "type": "v1"
[ceph_node1][DEBUG ] }
[ceph_node1][DEBUG ] ]
[ceph_node1][DEBUG ] },
[ceph_node1][DEBUG ] "rank": 1
[ceph_node1][DEBUG ] },
[ceph_node1][DEBUG ] {
[ceph_node1][DEBUG ] "addr": "192.168.111.13:6789/0",
[ceph_node1][DEBUG ] "name": "ceph_node3",
[ceph_node1][DEBUG ] "public_addr": "192.168.111.13:6789/0",
[ceph_node1][DEBUG ] "public_addrs": {
[ceph_node1][DEBUG ] "addrvec": [
[ceph_node1][DEBUG ] {
[ceph_node1][DEBUG ] "addr": "192.168.111.13:3300",
[ceph_node1][DEBUG ] "nonce": 0,
[ceph_node1][DEBUG ] "type": "v2"
[ceph_node1][DEBUG ] },
[ceph_node1][DEBUG ] {
[ceph_node1][DEBUG ] "addr": "192.168.111.13:6789",
[ceph_node1][DEBUG ] "nonce": 0,
[ceph_node1][DEBUG ] "type": "v1"
[ceph_node1][DEBUG ] }
[ceph_node1][DEBUG ] ]
[ceph_node1][DEBUG ] },
[ceph_node1][DEBUG ] "rank": 2
[ceph_node1][DEBUG ] }
[ceph_node1][DEBUG ] ]
[ceph_node1][DEBUG ] },
[ceph_node1][DEBUG ] "name": "ceph_node1",
[ceph_node1][DEBUG ] "outside_quorum": [],
[ceph_node1][DEBUG ] "quorum": [
[ceph_node1][DEBUG ] 0,
[ceph_node1][DEBUG ] 1,
[ceph_node1][DEBUG ] 2
[ceph_node1][DEBUG ] ],
[ceph_node1][DEBUG ] "quorum_age": 34,
[ceph_node1][DEBUG ] "rank": 0,
[ceph_node1][DEBUG ] "state": "leader",
[ceph_node1][DEBUG ] "sync_provider": []
[ceph_node1][DEBUG ] }
[ceph_node1][DEBUG ] ********************************************************************************
[ceph_node1][INFO ] monitor: mon.ceph_node1 is running
[ceph_node1][INFO ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph_node1.asok mon_status
[ceph_deploy.mon][DEBUG ] detecting platform for host ceph_node2 ...
[ceph_node2][DEBUG ] connected to host: ceph_node2
[ceph_node2][DEBUG ] detect platform information from remote host
[ceph_node2][DEBUG ] detect machine type
[ceph_node2][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO ] distro info: CentOS Linux 7.9.2009 Core
[ceph_node2][DEBUG ] determining if provided host has same hostname in remote
[ceph_node2][DEBUG ] get remote short hostname
[ceph_node2][DEBUG ] deploying mon to ceph_node2
[ceph_node2][DEBUG ] get remote short hostname
[ceph_node2][DEBUG ] remote hostname: ceph_node2
[ceph_node2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_node2][DEBUG ] create the mon path if it does not exist
[ceph_node2][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-ceph_node2/done
[ceph_node2][DEBUG ] create a done file to avoid re-doing the mon deployment
[ceph_node2][DEBUG ] create the init path if it does not exist
[ceph_node2][INFO ] Running command: systemctl enable ceph.target
[ceph_node2][INFO ] Running command: systemctl enable ceph-mon@ceph_node2
[ceph_node2][INFO ] Running command: systemctl start ceph-mon@ceph_node2
[ceph_node2][INFO ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph_node2.asok mon_status
[ceph_node2][DEBUG ] ********************************************************************************
[ceph_node2][DEBUG ] status for monitor: mon.ceph_node2
[ceph_node2][DEBUG ] {
[ceph_node2][DEBUG ] "election_epoch": 6,
[ceph_node2][DEBUG ] "extra_probe_peers": [
[ceph_node2][DEBUG ] {
[ceph_node2][DEBUG ] "addrvec": [
[ceph_node2][DEBUG ] {
[ceph_node2][DEBUG ] "addr": "192.168.111.11:3300",
[ceph_node2][DEBUG ] "nonce": 0,
[ceph_node2][DEBUG ] "type": "v2"
[ceph_node2][DEBUG ] },
[ceph_node2][DEBUG ] {
[ceph_node2][DEBUG ] "addr": "192.168.111.11:6789",
[ceph_node2][DEBUG ] "nonce": 0,
[ceph_node2][DEBUG ] "type": "v1"
[ceph_node2][DEBUG ] }
[ceph_node2][DEBUG ] ]
[ceph_node2][DEBUG ] },
[ceph_node2][DEBUG ] {
[ceph_node2][DEBUG ] "addrvec": [
[ceph_node2][DEBUG ] {
[ceph_node2][DEBUG ] "addr": "192.168.111.13:3300",
[ceph_node2][DEBUG ] "nonce": 0,
[ceph_node2][DEBUG ] "type": "v2"
[ceph_node2][DEBUG ] },
[ceph_node2][DEBUG ] {
[ceph_node2][DEBUG ] "addr": "192.168.111.13:6789",
[ceph_node2][DEBUG ] "nonce": 0,
[ceph_node2][DEBUG ] "type": "v1"
[ceph_node2][DEBUG ] }
[ceph_node2][DEBUG ] ]
[ceph_node2][DEBUG ] }
[ceph_node2][DEBUG ] ],
[ceph_node2][DEBUG ] "feature_map": {
[ceph_node2][DEBUG ] "mon": [
[ceph_node2][DEBUG ] {
[ceph_node2][DEBUG ] "features": "0x3ffddff8ffecffff",
[ceph_node2][DEBUG ] "num": 1,
[ceph_node2][DEBUG ] "release": "luminous"
[ceph_node2][DEBUG ] }
[ceph_node2][DEBUG ] ]
[ceph_node2][DEBUG ] },
[ceph_node2][DEBUG ] "features": {
[ceph_node2][DEBUG ] "quorum_con": "4611087854035861503",
[ceph_node2][DEBUG ] "quorum_mon": [
[ceph_node2][DEBUG ] "kraken",
[ceph_node2][DEBUG ] "luminous",
[ceph_node2][DEBUG ] "mimic",
[ceph_node2][DEBUG ] "osdmap-prune",
[ceph_node2][DEBUG ] "nautilus"
[ceph_node2][DEBUG ] ],
[ceph_node2][DEBUG ] "required_con": "2449958747315912708",
[ceph_node2][DEBUG ] "required_mon": [
[ceph_node2][DEBUG ] "kraken",
[ceph_node2][DEBUG ] "luminous",
[ceph_node2][DEBUG ] "mimic",
[ceph_node2][DEBUG ] "osdmap-prune",
[ceph_node2][DEBUG ] "nautilus"
[ceph_node2][DEBUG ] ]
[ceph_node2][DEBUG ] },
[ceph_node2][DEBUG ] "monmap": {
[ceph_node2][DEBUG ] "created": "2022-02-18 13:54:10.100119",
[ceph_node2][DEBUG ] "epoch": 1,
[ceph_node2][DEBUG ] "features": {
[ceph_node2][DEBUG ] "optional": [],
[ceph_node2][DEBUG ] "persistent": [
[ceph_node2][DEBUG ] "kraken",
[ceph_node2][DEBUG ] "luminous",
[ceph_node2][DEBUG ] "mimic",
[ceph_node2][DEBUG ] "osdmap-prune",
[ceph_node2][DEBUG ] "nautilus"
[ceph_node2][DEBUG ] ]
[ceph_node2][DEBUG ] },
[ceph_node2][DEBUG ] "fsid": "f7be9981-1e57-4165-989e-3c7206331a20",
[ceph_node2][DEBUG ] "min_mon_release": 14,
[ceph_node2][DEBUG ] "min_mon_release_name": "nautilus",
[ceph_node2][DEBUG ] "modified": "2022-02-18 13:54:10.100119",
[ceph_node2][DEBUG ] "mons": [
[ceph_node2][DEBUG ] {
[ceph_node2][DEBUG ] "addr": "192.168.111.11:6789/0",
[ceph_node2][DEBUG ] "name": "ceph_node1",
[ceph_node2][DEBUG ] "public_addr": "192.168.111.11:6789/0",
[ceph_node2][DEBUG ] "public_addrs": {
[ceph_node2][DEBUG ] "addrvec": [
[ceph_node2][DEBUG ] {
[ceph_node2][DEBUG ] "addr": "192.168.111.11:3300",
[ceph_node2][DEBUG ] "nonce": 0,
[ceph_node2][DEBUG ] "type": "v2"
[ceph_node2][DEBUG ] },
[ceph_node2][DEBUG ] {
[ceph_node2][DEBUG ] "addr": "192.168.111.11:6789",
[ceph_node2][DEBUG ] "nonce": 0,
[ceph_node2][DEBUG ] "type": "v1"
[ceph_node2][DEBUG ] }
[ceph_node2][DEBUG ] ]
[ceph_node2][DEBUG ] },
[ceph_node2][DEBUG ] "rank": 0
[ceph_node2][DEBUG ] },
[ceph_node2][DEBUG ] {
[ceph_node2][DEBUG ] "addr": "192.168.111.12:6789/0",
[ceph_node2][DEBUG ] "name": "ceph_node2",
[ceph_node2][DEBUG ] "public_addr": "192.168.111.12:6789/0",
[ceph_node2][DEBUG ] "public_addrs": {
[ceph_node2][DEBUG ] "addrvec": [
[ceph_node2][DEBUG ] {
[ceph_node2][DEBUG ] "addr": "192.168.111.12:3300",
[ceph_node2][DEBUG ] "nonce": 0,
[ceph_node2][DEBUG ] "type": "v2"
[ceph_node2][DEBUG ] },
[ceph_node2][DEBUG ] {
[ceph_node2][DEBUG ] "addr": "192.168.111.12:6789",
[ceph_node2][DEBUG ] "nonce": 0,
[ceph_node2][DEBUG ] "type": "v1"
[ceph_node2][DEBUG ] }
[ceph_node2][DEBUG ] ]
[ceph_node2][DEBUG ] },
[ceph_node2][DEBUG ] "rank": 1
[ceph_node2][DEBUG ] },
[ceph_node2][DEBUG ] {
[ceph_node2][DEBUG ] "addr": "192.168.111.13:6789/0",
[ceph_node2][DEBUG ] "name": "ceph_node3",
[ceph_node2][DEBUG ] "public_addr": "192.168.111.13:6789/0",
[ceph_node2][DEBUG ] "public_addrs": {
[ceph_node2][DEBUG ] "addrvec": [
[ceph_node2][DEBUG ] {
[ceph_node2][DEBUG ] "addr": "192.168.111.13:3300",
[ceph_node2][DEBUG ] "nonce": 0,
[ceph_node2][DEBUG ] "type": "v2"
[ceph_node2][DEBUG ] },
[ceph_node2][DEBUG ] {
[ceph_node2][DEBUG ] "addr": "192.168.111.13:6789",
[ceph_node2][DEBUG ] "nonce": 0,
[ceph_node2][DEBUG ] "type": "v1"
[ceph_node2][DEBUG ] }
[ceph_node2][DEBUG ] ]
[ceph_node2][DEBUG ] },
[ceph_node2][DEBUG ] "rank": 2
[ceph_node2][DEBUG ] }
[ceph_node2][DEBUG ] ]
[ceph_node2][DEBUG ] },
[ceph_node2][DEBUG ] "name": "ceph_node2",
[ceph_node2][DEBUG ] "outside_quorum": [],
[ceph_node2][DEBUG ] "quorum": [
[ceph_node2][DEBUG ] 0,
[ceph_node2][DEBUG ] 1,
[ceph_node2][DEBUG ] 2
[ceph_node2][DEBUG ] ],
[ceph_node2][DEBUG ] "quorum_age": 37,
[ceph_node2][DEBUG ] "rank": 1,
[ceph_node2][DEBUG ] "state": "peon",
[ceph_node2][DEBUG ] "sync_provider": []
[ceph_node2][DEBUG ] }
[ceph_node2][DEBUG ] ********************************************************************************
[ceph_node2][INFO ] monitor: mon.ceph_node2 is running
[ceph_node2][INFO ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph_node2.asok mon_status
[ceph_deploy.mon][DEBUG ] detecting platform for host ceph_node3 ...
[ceph_node3][DEBUG ] connected to host: ceph_node3
[ceph_node3][DEBUG ] detect platform information from remote host
[ceph_node3][DEBUG ] detect machine type
[ceph_node3][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO ] distro info: CentOS Linux 7.9.2009 Core
[ceph_node3][DEBUG ] determining if provided host has same hostname in remote
[ceph_node3][DEBUG ] get remote short hostname
[ceph_node3][DEBUG ] deploying mon to ceph_node3
[ceph_node3][DEBUG ] get remote short hostname
[ceph_node3][DEBUG ] remote hostname: ceph_node3
[ceph_node3][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_node3][DEBUG ] create the mon path if it does not exist
[ceph_node3][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-ceph_node3/done
[ceph_node3][DEBUG ] create a done file to avoid re-doing the mon deployment
[ceph_node3][DEBUG ] create the init path if it does not exist
[ceph_node3][INFO ] Running command: systemctl enable ceph.target
[ceph_node3][INFO ] Running command: systemctl enable ceph-mon@ceph_node3
[ceph_node3][INFO ] Running command: systemctl start ceph-mon@ceph_node3
[ceph_node3][INFO ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph_node3.asok mon_status
[ceph_node3][DEBUG ] ********************************************************************************
[ceph_node3][DEBUG ] status for monitor: mon.ceph_node3
[ceph_node3][DEBUG ] {
[ceph_node3][DEBUG ] "election_epoch": 6,
[ceph_node3][DEBUG ] "extra_probe_peers": [
[ceph_node3][DEBUG ] {
[ceph_node3][DEBUG ] "addrvec": [
[ceph_node3][DEBUG ] {
[ceph_node3][DEBUG ] "addr": "192.168.111.11:3300",
[ceph_node3][DEBUG ] "nonce": 0,
[ceph_node3][DEBUG ] "type": "v2"
[ceph_node3][DEBUG ] },
[ceph_node3][DEBUG ] {
[ceph_node3][DEBUG ] "addr": "192.168.111.11:6789",
[ceph_node3][DEBUG ] "nonce": 0,
[ceph_node3][DEBUG ] "type": "v1"
[ceph_node3][DEBUG ] }
[ceph_node3][DEBUG ] ]
[ceph_node3][DEBUG ] },
[ceph_node3][DEBUG ] {
[ceph_node3][DEBUG ] "addrvec": [
[ceph_node3][DEBUG ] {
[ceph_node3][DEBUG ] "addr": "192.168.111.12:3300",
[ceph_node3][DEBUG ] "nonce": 0,
[ceph_node3][DEBUG ] "type": "v2"
[ceph_node3][DEBUG ] },
[ceph_node3][DEBUG ] {
[ceph_node3][DEBUG ] "addr": "192.168.111.12:6789",
[ceph_node3][DEBUG ] "nonce": 0,
[ceph_node3][DEBUG ] "type": "v1"
[ceph_node3][DEBUG ] }
[ceph_node3][DEBUG ] ]
[ceph_node3][DEBUG ] }
[ceph_node3][DEBUG ] ],
[ceph_node3][DEBUG ] "feature_map": {
[ceph_node3][DEBUG ] "mon": [
[ceph_node3][DEBUG ] {
[ceph_node3][DEBUG ] "features": "0x3ffddff8ffecffff",
[ceph_node3][DEBUG ] "num": 1,
[ceph_node3][DEBUG ] "release": "luminous"
[ceph_node3][DEBUG ] }
[ceph_node3][DEBUG ] ]
[ceph_node3][DEBUG ] },
[ceph_node3][DEBUG ] "features": {
[ceph_node3][DEBUG ] "quorum_con": "4611087854035861503",
[ceph_node3][DEBUG ] "quorum_mon": [
[ceph_node3][DEBUG ] "kraken",
[ceph_node3][DEBUG ] "luminous",
[ceph_node3][DEBUG ] "mimic",
[ceph_node3][DEBUG ] "osdmap-prune",
[ceph_node3][DEBUG ] "nautilus"
[ceph_node3][DEBUG ] ],
[ceph_node3][DEBUG ] "required_con": "2449958747315912708",
[ceph_node3][DEBUG ] "required_mon": [
[ceph_node3][DEBUG ] "kraken",
[ceph_node3][DEBUG ] "luminous",
[ceph_node3][DEBUG ] "mimic",
[ceph_node3][DEBUG ] "osdmap-prune",
[ceph_node3][DEBUG ] "nautilus"
[ceph_node3][DEBUG ] ]
[ceph_node3][DEBUG ] },
[ceph_node3][DEBUG ] "monmap": {
[ceph_node3][DEBUG ] "created": "2022-02-18 13:54:10.100119",
[ceph_node3][DEBUG ] "epoch": 1,
[ceph_node3][DEBUG ] "features": {
[ceph_node3][DEBUG ] "optional": [],
[ceph_node3][DEBUG ] "persistent": [
[ceph_node3][DEBUG ] "kraken",
[ceph_node3][DEBUG ] "luminous",
[ceph_node3][DEBUG ] "mimic",
[ceph_node3][DEBUG ] "osdmap-prune",
[ceph_node3][DEBUG ] "nautilus"
[ceph_node3][DEBUG ] ]
[ceph_node3][DEBUG ] },
[ceph_node3][DEBUG ] "fsid": "f7be9981-1e57-4165-989e-3c7206331a20",
[ceph_node3][DEBUG ] "min_mon_release": 14,
[ceph_node3][DEBUG ] "min_mon_release_name": "nautilus",
[ceph_node3][DEBUG ] "modified": "2022-02-18 13:54:10.100119",
[ceph_node3][DEBUG ] "mons": [
[ceph_node3][DEBUG ] {
[ceph_node3][DEBUG ] "addr": "192.168.111.11:6789/0",
[ceph_node3][DEBUG ] "name": "ceph_node1",
[ceph_node3][DEBUG ] "public_addr": "192.168.111.11:6789/0",
[ceph_node3][DEBUG ] "public_addrs": {
[ceph_node3][DEBUG ] "addrvec": [
[ceph_node3][DEBUG ] {
[ceph_node3][DEBUG ] "addr": "192.168.111.11:3300",
[ceph_node3][DEBUG ] "nonce": 0,
[ceph_node3][DEBUG ] "type": "v2"
[ceph_node3][DEBUG ] },
[ceph_node3][DEBUG ] {
[ceph_node3][DEBUG ] "addr": "192.168.111.11:6789",
[ceph_node3][DEBUG ] "nonce": 0,
[ceph_node3][DEBUG ] "type": "v1"
[ceph_node3][DEBUG ] }
[ceph_node3][DEBUG ] ]
[ceph_node3][DEBUG ] },
[ceph_node3][DEBUG ] "rank": 0
[ceph_node3][DEBUG ] },
[ceph_node3][DEBUG ] {
[ceph_node3][DEBUG ] "addr": "192.168.111.12:6789/0",
[ceph_node3][DEBUG ] "name": "ceph_node2",
[ceph_node3][DEBUG ] "public_addr": "192.168.111.12:6789/0",
[ceph_node3][DEBUG ] "public_addrs": {
[ceph_node3][DEBUG ] "addrvec": [
[ceph_node3][DEBUG ] {
[ceph_node3][DEBUG ] "addr": "192.168.111.12:3300",
[ceph_node3][DEBUG ] "nonce": 0,
[ceph_node3][DEBUG ] "type": "v2"
[ceph_node3][DEBUG ] },
[ceph_node3][DEBUG ] {
[ceph_node3][DEBUG ] "addr": "192.168.111.12:6789",
[ceph_node3][DEBUG ] "nonce": 0,
[ceph_node3][DEBUG ] "type": "v1"
[ceph_node3][DEBUG ] }
[ceph_node3][DEBUG ] ]
[ceph_node3][DEBUG ] },
[ceph_node3][DEBUG ] "rank": 1
[ceph_node3][DEBUG ] },
[ceph_node3][DEBUG ] {
[ceph_node3][DEBUG ] "addr": "192.168.111.13:6789/0",
[ceph_node3][DEBUG ] "name": "ceph_node3",
[ceph_node3][DEBUG ] "public_addr": "192.168.111.13:6789/0",
[ceph_node3][DEBUG ] "public_addrs": {
[ceph_node3][DEBUG ] "addrvec": [
[ceph_node3][DEBUG ] {
[ceph_node3][DEBUG ] "addr": "192.168.111.13:3300",
[ceph_node3][DEBUG ] "nonce": 0,
[ceph_node3][DEBUG ] "type": "v2"
[ceph_node3][DEBUG ] },
[ceph_node3][DEBUG ] {
[ceph_node3][DEBUG ] "addr": "192.168.111.13:6789",
[ceph_node3][DEBUG ] "nonce": 0,
[ceph_node3][DEBUG ] "type": "v1"
[ceph_node3][DEBUG ] }
[ceph_node3][DEBUG ] ]
[ceph_node3][DEBUG ] },
[ceph_node3][DEBUG ] "rank": 2
[ceph_node3][DEBUG ] }
[ceph_node3][DEBUG ] ]
[ceph_node3][DEBUG ] },
[ceph_node3][DEBUG ] "name": "ceph_node3",
[ceph_node3][DEBUG ] "outside_quorum": [],
[ceph_node3][DEBUG ] "quorum": [
[ceph_node3][DEBUG ] 0,
[ceph_node3][DEBUG ] 1,
[ceph_node3][DEBUG ] 2
[ceph_node3][DEBUG ] ],
[ceph_node3][DEBUG ] "quorum_age": 41,
[ceph_node3][DEBUG ] "rank": 2,
[ceph_node3][DEBUG ] "state": "peon",
[ceph_node3][DEBUG ] "sync_provider": []
[ceph_node3][DEBUG ] }
[ceph_node3][DEBUG ] ********************************************************************************
[ceph_node3][INFO ] monitor: mon.ceph_node3 is running
[ceph_node3][INFO ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph_node3.asok mon_status
[ceph_deploy.mon][INFO ] processing monitor mon.ceph_node1
[ceph_node1][DEBUG ] connected to host: ceph_node1
[ceph_node1][DEBUG ] detect platform information from remote host
[ceph_node1][DEBUG ] detect machine type
[ceph_node1][DEBUG ] find the location of an executable
[ceph_node1][INFO ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph_node1.asok mon_status
[ceph_deploy.mon][INFO ] mon.ceph_node1 monitor has reached quorum!
[ceph_deploy.mon][INFO ] processing monitor mon.ceph_node2
[ceph_node2][DEBUG ] connected to host: ceph_node2
[ceph_node2][DEBUG ] detect platform information from remote host
[ceph_node2][DEBUG ] detect machine type
[ceph_node2][DEBUG ] find the location of an executable
[ceph_node2][INFO ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph_node2.asok mon_status
[ceph_deploy.mon][INFO ] mon.ceph_node2 monitor has reached quorum!
[ceph_deploy.mon][INFO ] processing monitor mon.ceph_node3
[ceph_node3][DEBUG ] connected to host: ceph_node3
[ceph_node3][DEBUG ] detect platform information from remote host
[ceph_node3][DEBUG ] detect machine type
[ceph_node3][DEBUG ] find the location of an executable
[ceph_node3][INFO ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph_node3.asok mon_status
[ceph_deploy.mon][INFO ] mon.ceph_node3 monitor has reached quorum!
[ceph_deploy.mon][INFO ] all initial monitors are running and have formed quorum
[ceph_deploy.mon][INFO ] Running gatherkeys...
[ceph_deploy.gatherkeys][INFO ] Storing keys in temp directory /tmp/tmpRev3cc
[ceph_node1][DEBUG ] connected to host: ceph_node1
[ceph_node1][DEBUG ] detect platform information from remote host
[ceph_node1][DEBUG ] detect machine type
[ceph_node1][DEBUG ] get remote short hostname
[ceph_node1][DEBUG ] fetch remote file
[ceph_node1][INFO ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --admin-daemon=/var/run/ceph/ceph-mon.ceph_node1.asok mon_status
[ceph_node1][INFO ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph_node1/keyring auth get client.admin
[ceph_node1][INFO ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph_node1/keyring auth get client.bootstrap-mds
[ceph_node1][INFO ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph_node1/keyring auth get client.bootstrap-mgr
[ceph_node1][INFO ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph_node1/keyring auth get client.bootstrap-osd
[ceph_node1][INFO ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph_node1/keyring auth get client.bootstrap-rgw
[ceph_deploy.gatherkeys][INFO ] Storing ceph.client.admin.keyring
[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-mds.keyring
[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-mgr.keyring
[ceph_deploy.gatherkeys][INFO ] keyring 'ceph.mon.keyring' already exists
[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-osd.keyring
[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-rgw.keyring
[ceph_deploy.gatherkeys][INFO ] Destroy temp directory /tmp/tmpRev3cc

Distribute the ceph admin keyring and configuration to all nodes


[root@ceph_node1 cluster]# ceph-deploy admin ceph_node1 ceph_node2 ceph_node3
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy admin ceph_node1 ceph_node2 ceph_node3
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f8101aeb368>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] client : ['ceph_node1', 'ceph_node2', 'ceph_node3']
[ceph_deploy.cli][INFO ] func : <function admin at 0x7f81025fc230>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph_node1
[ceph_node1][DEBUG ] connected to host: ceph_node1
[ceph_node1][DEBUG ] detect platform information from remote host
[ceph_node1][DEBUG ] detect machine type
[ceph_node1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph_node2
[ceph_node2][DEBUG ] connected to host: ceph_node2
[ceph_node2][DEBUG ] detect platform information from remote host
[ceph_node2][DEBUG ] detect machine type
[ceph_node2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph_node3
[ceph_node3][DEBUG ] connected to host: ceph_node3
[ceph_node3][DEBUG ] detect platform information from remote host
[ceph_node3][DEBUG ] detect machine type
[ceph_node3][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
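<imports>
</imports>

With the admin keyring and ceph.conf pushed to /etc/ceph on every node, each node can now issue cluster commands directly. A quick sanity check, assuming the three-node cluster built above (run it on any node):

```shell
# Verify this node can authenticate as client.admin against the cluster
ceph health
# Confirm all three monitors are in quorum
ceph quorum_status --format json-pretty | grep -A4 quorum_names
```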

Deploy MGR, which provides a web interface for managing Ceph (optional)

[root@ceph_node1 cluster]# ceph-deploy mgr create ceph_node1 ceph_node2 ceph_node3
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy mgr create ceph_node1 ceph_node2 ceph_node3
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] mgr : [('ceph_node1', 'ceph_node1'), ('ceph_node2', 'ceph_node2'), ('ceph_node3', 'ceph_node3')]
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : create
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fb3ff746710>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] func : <function mgr at 0x7fb3fffac140>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.mgr][DEBUG ] Deploying mgr, cluster ceph hosts ceph_node1:ceph_node1 ceph_node2:ceph_node2 ceph_node3:ceph_node3
[ceph_node1][DEBUG ] connected to host: ceph_node1
[ceph_node1][DEBUG ] detect platform information from remote host
[ceph_node1][DEBUG ] detect machine type
[ceph_deploy.mgr][INFO ] Distro info: CentOS Linux 7.9.2009 Core
[ceph_deploy.mgr][DEBUG ] remote host will use systemd
[ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to ceph_node1
[ceph_node1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_node1][WARNIN] mgr keyring does not exist yet, creating one
[ceph_node1][DEBUG ] create a keyring file
[ceph_node1][DEBUG ] create path recursively if it doesn't exist
[ceph_node1][INFO ] Running command: ceph --cluster ceph --name client.bootstrap-mgr --keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring auth get-or-create mgr.ceph_node1 mon allow profile mgr osd allow * mds allow * -o /var/lib/ceph/mgr/ceph-ceph_node1/keyring
[ceph_node1][INFO ] Running command: systemctl enable ceph-mgr@ceph_node1
[ceph_node1][WARNIN] Created symlink from /etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@ceph_node1.service to /usr/lib/systemd/system/ceph-mgr@.service.
[ceph_node1][INFO ] Running command: systemctl start ceph-mgr@ceph_node1
[ceph_node1][INFO ] Running command: systemctl enable ceph.target
[ceph_node2][DEBUG ] connected to host: ceph_node2
[ceph_node2][DEBUG ] detect platform information from remote host
[ceph_node2][DEBUG ] detect machine type
[ceph_deploy.mgr][INFO ] Distro info: CentOS Linux 7.9.2009 Core
[ceph_deploy.mgr][DEBUG ] remote host will use systemd
[ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to ceph_node2
[ceph_node2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_node2][WARNIN] mgr keyring does not exist yet, creating one
[ceph_node2][DEBUG ] create a keyring file
[ceph_node2][DEBUG ] create path recursively if it doesn't exist
[ceph_node2][INFO ] Running command: ceph --cluster ceph --name client.bootstrap-mgr --keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring auth get-or-create mgr.ceph_node2 mon allow profile mgr osd allow * mds allow * -o /var/lib/ceph/mgr/ceph-ceph_node2/keyring
[ceph_node2][INFO ] Running command: systemctl enable ceph-mgr@ceph_node2
[ceph_node2][WARNIN] Created symlink from /etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@ceph_node2.service to /usr/lib/systemd/system/ceph-mgr@.service.
[ceph_node2][INFO ] Running command: systemctl start ceph-mgr@ceph_node2
[ceph_node2][INFO ] Running command: systemctl enable ceph.target
[ceph_node3][DEBUG ] connected to host: ceph_node3
[ceph_node3][DEBUG ] detect platform information from remote host
[ceph_node3][DEBUG ] detect machine type
[ceph_deploy.mgr][INFO ] Distro info: CentOS Linux 7.9.2009 Core
[ceph_deploy.mgr][DEBUG ] remote host will use systemd
[ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to ceph_node3
[ceph_node3][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_node3][WARNIN] mgr keyring does not exist yet, creating one
[ceph_node3][DEBUG ] create a keyring file
[ceph_node3][DEBUG ] create path recursively if it doesn't exist
[ceph_node3][INFO ] Running command: ceph --cluster ceph --name client.bootstrap-mgr --keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring auth get-or-create mgr.ceph_node3 mon allow profile mgr osd allow * mds allow * -o /var/lib/ceph/mgr/ceph-ceph_node3/keyring
[ceph_node3][INFO ] Running command: systemctl enable ceph-mgr@ceph_node3
[ceph_node3][WARNIN] Created symlink from /etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@ceph_node3.service to /usr/lib/systemd/system/ceph-mgr@.service.
[ceph_node3][INFO ] Running command: systemctl start ceph-mgr@ceph_node3
[ceph_node3][INFO ] Running command: systemctl enable ceph.target
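<imports>
</imports>

The heading mentions a web UI: that is the mgr dashboard module, which is not enabled by default. On Nautilus it can be switched on roughly as below; treat this as a sketch, since the package name and the user-creation syntax vary slightly between releases:

```shell
yum -y install ceph-mgr-dashboard        # on each mgr node
ceph mgr module enable dashboard
ceph dashboard create-self-signed-cert
# early Nautilus accepts the password inline; newer releases read it from a file with -i
ceph dashboard ac-user-create admin admin123 administrator
ceph mgr services                        # prints the dashboard URL
```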

Deploy RGW

First install the gateway package on ceph_node1: yum -y install ceph-radosgw

[root@ceph_node1 cluster]# ceph-deploy rgw create ceph_node1
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy rgw create ceph_node1
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] rgw : [('ceph_node1', 'rgw.ceph_node1')]
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : create
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f4fe2168b00>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] func : <function rgw at 0x7f4fe2c35050>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.rgw][DEBUG ] Deploying rgw, cluster ceph hosts ceph_node1:rgw.ceph_node1
[ceph_node1][DEBUG ] connected to host: ceph_node1
[ceph_node1][DEBUG ] detect platform information from remote host
[ceph_node1][DEBUG ] detect machine type
[ceph_deploy.rgw][INFO ] Distro info: CentOS Linux 7.9.2009 Core
[ceph_deploy.rgw][DEBUG ] remote host will use systemd
[ceph_deploy.rgw][DEBUG ] deploying rgw bootstrap to ceph_node1
[ceph_node1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_node1][WARNIN] rgw keyring does not exist yet, creating one
[ceph_node1][DEBUG ] create a keyring file
[ceph_node1][DEBUG ] create path recursively if it doesn't exist
[ceph_node1][INFO ] Running command: ceph --cluster ceph --name client.bootstrap-rgw --keyring /var/lib/ceph/bootstrap-rgw/ceph.keyring auth get-or-create client.rgw.ceph_node1 osd allow rwx mon allow rw -o /var/lib/ceph/radosgw/ceph-rgw.ceph_node1/keyring
[ceph_node1][INFO ] Running command: systemctl enable ceph-radosgw@rgw.ceph_node1
[ceph_node1][WARNIN] Created symlink from /etc/systemd/system/ceph-radosgw.target.wants/ceph-radosgw@rgw.ceph_node1.service to /usr/lib/systemd/system/ceph-radosgw@.service.
[ceph_node1][INFO ] Running command: systemctl start ceph-radosgw@rgw.ceph_node1
[ceph_node1][INFO ] Running command: systemctl enable ceph.target
[ceph_deploy.rgw][INFO ] The Ceph Object Gateway (RGW) is now running on host ceph_node1 and default port 7480
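<imports>
</imports>

As the last log line says, the gateway listens on port 7480 by default, so a plain HTTP request is enough to confirm it is answering; an unauthenticated request should come back with an anonymous S3 bucket-listing XML document:

```shell
# expect an XML response containing ListAllMyBucketsResult
curl http://192.168.111.11:7480
```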

Deploy CephFS MDS (optional)

[root@ceph_node1 cluster]# ceph-deploy mds create ceph_node1 ceph_node2 ceph_node3
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy mds create ceph_node1 ceph_node2 ceph_node3
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : create
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fb889afe2d8>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] func : <function mds at 0x7fb889b3ced8>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] mds : [('ceph_node1', 'ceph_node1'), ('ceph_node2', 'ceph_node2'), ('ceph_node3', 'ceph_node3')]
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.mds][DEBUG ] Deploying mds, cluster ceph hosts ceph_node1:ceph_node1 ceph_node2:ceph_node2 ceph_node3:ceph_node3
[ceph_node1][DEBUG ] connected to host: ceph_node1
[ceph_node1][DEBUG ] detect platform information from remote host
[ceph_node1][DEBUG ] detect machine type
[ceph_deploy.mds][INFO ] Distro info: CentOS Linux 7.9.2009 Core
[ceph_deploy.mds][DEBUG ] remote host will use systemd
[ceph_deploy.mds][DEBUG ] deploying mds bootstrap to ceph_node1
[ceph_node1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_node1][WARNIN] mds keyring does not exist yet, creating one
[ceph_node1][DEBUG ] create a keyring file
[ceph_node1][DEBUG ] create path if it doesn't exist
[ceph_node1][INFO ] Running command: ceph --cluster ceph --name client.bootstrap-mds --keyring /var/lib/ceph/bootstrap-mds/ceph.keyring auth get-or-create mds.ceph_node1 osd allow rwx mds allow mon allow profile mds -o /var/lib/ceph/mds/ceph-ceph_node1/keyring
[ceph_node1][INFO ] Running command: systemctl enable ceph-mds@ceph_node1
[ceph_node1][WARNIN] Created symlink from /etc/systemd/system/ceph-mds.target.wants/ceph-mds@ceph_node1.service to /usr/lib/systemd/system/ceph-mds@.service.
[ceph_node1][INFO ] Running command: systemctl start ceph-mds@ceph_node1
[ceph_node1][INFO ] Running command: systemctl enable ceph.target
[ceph_node2][DEBUG ] connected to host: ceph_node2
[ceph_node2][DEBUG ] detect platform information from remote host
[ceph_node2][DEBUG ] detect machine type
[ceph_deploy.mds][INFO ] Distro info: CentOS Linux 7.9.2009 Core
[ceph_deploy.mds][DEBUG ] remote host will use systemd
[ceph_deploy.mds][DEBUG ] deploying mds bootstrap to ceph_node2
[ceph_node2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_node2][WARNIN] mds keyring does not exist yet, creating one
[ceph_node2][DEBUG ] create a keyring file
[ceph_node2][DEBUG ] create path if it doesn't exist
[ceph_node2][INFO ] Running command: ceph --cluster ceph --name client.bootstrap-mds --keyring /var/lib/ceph/bootstrap-mds/ceph.keyring auth get-or-create mds.ceph_node2 osd allow rwx mds allow mon allow profile mds -o /var/lib/ceph/mds/ceph-ceph_node2/keyring
[ceph_node2][INFO ] Running command: systemctl enable ceph-mds@ceph_node2
[ceph_node2][WARNIN] Created symlink from /etc/systemd/system/ceph-mds.target.wants/ceph-mds@ceph_node2.service to /usr/lib/systemd/system/ceph-mds@.service.
[ceph_node2][INFO ] Running command: systemctl start ceph-mds@ceph_node2
[ceph_node2][INFO ] Running command: systemctl enable ceph.target
[ceph_node3][DEBUG ] connected to host: ceph_node3
[ceph_node3][DEBUG ] detect platform information from remote host
[ceph_node3][DEBUG ] detect machine type
[ceph_deploy.mds][INFO ] Distro info: CentOS Linux 7.9.2009 Core
[ceph_deploy.mds][DEBUG ] remote host will use systemd
[ceph_deploy.mds][DEBUG ] deploying mds bootstrap to ceph_node3
[ceph_node3][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_node3][WARNIN] mds keyring does not exist yet, creating one
[ceph_node3][DEBUG ] create a keyring file
[ceph_node3][DEBUG ] create path if it doesn't exist
[ceph_node3][INFO ] Running command: ceph --cluster ceph --name client.bootstrap-mds --keyring /var/lib/ceph/bootstrap-mds/ceph.keyring auth get-or-create mds.ceph_node3 osd allow rwx mds allow mon allow profile mds -o /var/lib/ceph/mds/ceph-ceph_node3/keyring
[ceph_node3][INFO ] Running command: systemctl enable ceph-mds@ceph_node3
[ceph_node3][WARNIN] Created symlink from /etc/systemd/system/ceph-mds.target.wants/ceph-mds@ceph_node3.service to /usr/lib/systemd/system/ceph-mds@.service.
[ceph_node3][INFO ] Running command: systemctl start ceph-mds@ceph_node3
[ceph_node3][INFO ] Running command: systemctl enable ceph.target

Initialize the OSDs

From ceph_node1, initialize all nine disks one by one

ceph-deploy osd create --data /dev/sdb ceph_node1
ceph-deploy osd create --data /dev/sdc ceph_node1
ceph-deploy osd create --data /dev/sdd ceph_node1
ceph-deploy osd create --data /dev/sdb ceph_node2
ceph-deploy osd create --data /dev/sdc ceph_node2
ceph-deploy osd create --data /dev/sdd ceph_node2
ceph-deploy osd create --data /dev/sdb ceph_node3
ceph-deploy osd create --data /dev/sdc ceph_node3
ceph-deploy osd create --data /dev/sdd ceph_node3
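<imports>
</imports>

The nine near-identical commands above can equivalently be driven by a small loop (same flags, same hosts and disks as in this guide):

```shell
for node in ceph_node1 ceph_node2 ceph_node3; do
  for disk in sdb sdc sdd; do
    ceph-deploy osd create --data /dev/${disk} ${node}
  done
done
```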

Check OSD status

[root@ceph_node1 cluster]# ceph osd status
+----+------------+-------+-------+--------+---------+--------+---------+-----------+
| id | host | used | avail | wr ops | wr data | rd ops | rd data | state |
+----+------------+-------+-------+--------+---------+--------+---------+-----------+
| 0 | ceph_node1 | 1031M | 9204M | 0 | 0 | 0 | 0 | exists,up |
| 1 | ceph_node1 | 1031M | 9204M | 0 | 0 | 0 | 0 | exists,up |
| 2 | ceph_node1 | 1031M | 9204M | 0 | 0 | 0 | 0 | exists,up |
| 3 | ceph_node2 | 1031M | 9204M | 0 | 0 | 0 | 0 | exists,up |
| 4 | ceph_node2 | 1031M | 9204M | 0 | 0 | 0 | 0 | exists,up |
| 5 | ceph_node2 | 1031M | 9204M | 0 | 0 | 0 | 0 | exists,up |
| 6 | ceph_node3 | 1031M | 9204M | 0 | 0 | 0 | 0 | exists,up |
| 7 | ceph_node3 | 1031M | 9204M | 0 | 0 | 0 | 0 | exists,up |
| 8 | ceph_node3 | 1031M | 9204M | 0 | 0 | 0 | 0 | exists,up |
+----+------------+-------+-------+--------+---------+--------+---------+-----------+

Check cluster status

[root@ceph_node1 cluster]# ceph -s
cluster:
id: f7be9981-1e57-4165-989e-3c7206331a20
health: HEALTH_OK
services:
mon: 3 daemons, quorum ceph_node1,ceph_node2,ceph_node3 (age 21m)
mgr: ceph_node1(active, since 16m), standbys: ceph_node2, ceph_node3
mds: 3 up:standby
osd: 9 osds: 9 up (since 4m), 9 in (since 4m)
rgw: 1 daemon active (ceph_node1)
task status:
data:
pools: 4 pools, 128 pgs
objects: 187 objects, 1.2 KiB
usage: 9.1 GiB used, 81 GiB / 90 GiB avail
pgs: 128 active+clean

At this point the Ceph deployment is complete, and the cluster reports a healthy status.
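<imports>
</imports>

Note that `mds: 3 up:standby` in the output above means the MDS daemons are idle: no CephFS filesystem has been created yet. To actually use CephFS, create a data pool, a metadata pool, and the filesystem itself; the pool names and PG counts below are illustrative, not fixed:

```shell
ceph osd pool create cephfs_data 64
ceph osd pool create cephfs_metadata 32
ceph fs new mycephfs cephfs_metadata cephfs_data
ceph mds stat   # one MDS should now become active, the others remain standby
```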

Using Ceph

Create a storage pool

[root@ceph_node1 cluster]# ceph osd pool create mypool1 128
pool 'mypool1' created
# list the pools that now exist
[root@ceph_node1 cluster]# ceph osd pool ls
.rgw.root
default.rgw.control
default.rgw.meta
default.rgw.log
mypool1
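<imports>
</imports>

The 128 in the create command above is the PG count. It is commonly sized with the rule of thumb (number of OSDs × ~100 target PGs per OSD) ÷ replica count, rounded up to a power of two; small lab clusters often deliberately use less. A minimal sketch of that calculation:

```python
def suggest_pg_num(num_osds: int, replicas: int, pgs_per_osd: int = 100) -> int:
    """Rule of thumb: (OSDs * target PGs per OSD) / replicas, rounded up to a power of 2."""
    raw = num_osds * pgs_per_osd / replicas
    pg = 1
    while pg < raw:
        pg *= 2
    return pg

print(suggest_pg_num(9, 3))  # 9 OSDs, 3 replicas -> 512
```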

View detailed pool parameters

[root@ceph_node1 cluster]# ceph osd pool ls detail
pool 1 '.rgw.root' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode warn last_change 9 flags hashpspool stripe_width 0 application rgw
pool 2 'default.rgw.control' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode warn last_change 34 flags hashpspool stripe_width 0 application rgw
pool 3 'default.rgw.meta' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode warn last_change 36 flags hashpspool stripe_width 0 application rgw
pool 4 'default.rgw.log' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode warn last_change 38 flags hashpspool stripe_width 0 application rgw
pool 5 'mypool1' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 128 pgp_num 128 autoscale_mode warn last_change 63 flags hashpspool stripe_width 0

Query individual pool parameters

# check the replica count
[root@ceph_node1 cluster]# ceph osd pool get mypool1 size
size: 3
# check pg_num
[root@ceph_node1 cluster]# ceph osd pool get mypool1 pg_num
pg_num: 128

Modify pool parameters

[root@ceph_node1 cluster]# ceph osd pool set mypool1 pg_num 64
set pool 5 pg_num to 64

Create an erasure-coded (EC) pool with a custom EC profile

[root@ceph_node1 cluster]# ceph osd erasure-code-profile set ec001 k=3 m=2 crush-failure-domain=osd
[root@ceph_node1 cluster]# ceph osd pool create mypool2 100 erasure ec001
pool 'mypool2' created
[root@ceph_node1 cluster]# ceph osd pool ls detail
pool 1 '.rgw.root' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode warn last_change 9 flags hashpspool stripe_width 0 application rgw
pool 2 'default.rgw.control' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode warn last_change 34 flags hashpspool stripe_width 0 application rgw
pool 3 'default.rgw.meta' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode warn last_change 36 flags hashpspool stripe_width 0 application rgw
pool 4 'default.rgw.log' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode warn last_change 38 flags hashpspool stripe_width 0 application rgw
pool 5 'mypool1' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 64 pgp_num 64 autoscale_mode warn last_change 302 lfor 0/302/300 flags hashpspool stripe_width 0
pool 6 'mypool2' erasure size 5 min_size 4 crush_rule 1 object_hash rjenkins pg_num 100 pgp_num 100 autoscale_mode warn last_change 306 flags hashpspool,creating stripe_width 12288
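<imports>
</imports>

With k=3, m=2 each object is split into 3 data chunks plus 2 coding chunks (hence `size 5 min_size 4` above), so the pool tolerates the loss of any 2 OSDs while consuming only 5/3 of the raw data, versus 3x for a replicated size-3 pool. A small illustration of that overhead arithmetic:

```python
k, m = 3, 2          # data chunks, coding chunks (as in the ec001 profile above)
overhead = (k + m) / k
usable = k / (k + m)
print(f"raw bytes stored per user byte: {overhead:.2f}")   # 1.67, vs 3.00 for 3x replication
print(f"usable fraction of raw capacity: {usable:.0%}")    # 60%, vs 33% for 3x replication
```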

Using RGW

Tag mypool2 for rgw and test upload/download with the rados client tool.

[root@ceph_node1 cluster]# ceph osd pool application enable mypool2 rgw
enabled application 'rgw' on pool 'mypool2'
# upload /etc/passwd into mypool2 as an object named t_pass
[root@ceph_node1 cluster]# rados -p mypool2 put t_pass /etc/passwd
# list the objects in mypool2
[root@ceph_node1 cluster]# rados -p mypool2 ls
t_pass
# download the object t_pass from mypool2 to /tmp/passwd
[root@ceph_node1 cluster]# rados -p mypool2 get t_pass /tmp/passwd
# inspect the downloaded file
[root@ceph_node1 cluster]# cat /tmp/passwd
root:x:0:0:root:/root:/bin/bash
bin:x:1:1:bin:/bin:/sbin/nologin

Using RBD

Tag mypool3 for rbd, carve out a volume, map it as a block device, and mount it at /mnt for application servers to use.

# create storage pool mypool3
[root@ceph_node1 cluster]# ceph osd pool create mypool3 64
pool 'mypool3' created
# tag the pool for rbd use
[root@ceph_node1 cluster]# ceph osd pool application enable mypool3 rbd
enabled application 'rbd' on pool 'mypool3'
# carve out a 1 GB image named disk1 in the pool
[root@ceph_node1 cluster]# rbd create mypool3/disk1 --size 1G
# map the image to a local block device
[root@ceph_node1 cluster]# rbd map mypool3/disk1
rbd: sysfs write failed
RBD image feature set mismatch. You can disable features unsupported by the kernel with "rbd feature disable mypool3/disk1 object-map fast-diff deep-flatten".
In some cases useful info is found in syslog - try "dmesg | tail".
rbd: map failed: (6) No such device or address
[root@ceph_node1 cluster]# rbd feature disable mypool3/disk1 object-map fast-diff deep-flatten
[root@ceph_node1 cluster]# rbd map mypool3/disk1
/dev/rbd0
[root@ceph_node1 cluster]# ll /dev/rbd
rbd/ rbd0
[root@ceph_node1 cluster]# ll /dev/rbd0
brw-rw----. 1 root disk 252, 0 Feb 18 14:46 /dev/rbd0
# format the block device
[root@ceph_node1 cluster]# mkfs.ext4 /dev/rbd0
mke2fs 1.42.9 (28-Dec-2013)
Discarding device blocks: done
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=1024 blocks, Stripe width=1024 blocks
65536 inodes, 262144 blocks
13107 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=268435456
8 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376
Allocating group tables: done
Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done
# mount and use it
[root@ceph_node1 cluster]# mount /dev/rbd0 /mnt
[root@ceph_node1 cluster]# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 979M 0 979M 0% /dev
tmpfs 991M 0 991M 0% /dev/shm
tmpfs 991M 18M 973M 2% /run
tmpfs 991M 0 991M 0% /sys/fs/cgroup
/dev/mapper/centos-root 17G 1.9G 16G 11% /
/dev/sda1 1014M 138M 877M 14% /boot
tmpfs 199M 0 199M 0% /run/user/0
tmpfs 991M 52K 991M 1% /var/lib/ceph/osd/ceph-0
tmpfs 991M 52K 991M 1% /var/lib/ceph/osd/ceph-1
tmpfs 991M 52K 991M 1% /var/lib/ceph/osd/ceph-2
/dev/rbd0 976M 2.6M 907M 1% /mnt
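<imports>
</imports>

A mapping made with `rbd map` does not survive a reboot. To tear it down cleanly, or to make it persistent via the stock rbdmap service, something like the following works; the /etc/ceph/rbdmap entry format shown is the one documented for that service:

```shell
# clean teardown
umount /mnt
rbd unmap /dev/rbd0

# reboot-persistent mapping: register the image with the rbdmap service
echo 'mypool3/disk1 id=admin,keyring=/etc/ceph/ceph.client.admin.keyring' >> /etc/ceph/rbdmap
systemctl enable --now rbdmap
```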

Pitfalls

Q1:

[root@ceph_node1 cluster]# ceph-deploy new ceph_node1 ceph_node2 ceph_node3
Traceback (most recent call last):
File "/usr/bin/ceph-deploy", line 18, in <module>
from ceph_deploy.cli import main
File "/usr/lib/python2.7/site-packages/ceph_deploy/cli.py", line 1, in <module>
import pkg_resources
ImportError: No module named pkg_resources

Fix: yum install python2-pip — installing pip pulls in python-setuptools, which provides the missing pkg_resources module.

Q2:

health: HEALTH_WARN
mons are allowing insecure global_id reclaim

Fix: ceph config set mon auth_allow_insecure_global_id_reclaim false (disable reclaim only after all clients have been upgraded, otherwise old clients will be locked out).

Reference: https://www.bilibili.com/video/BV15K41137nJ/