yanyx's blog




tidb-operator

Posted on 2019-03-18

Overview

TiDB is a distributed database, and tidb-operator lets TiDB run on a Kubernetes cluster.
This post verifies deploying TiDB on a Kubernetes cluster using local PVs.

Installation

The test Kubernetes cluster has three nodes:

[root@kube01 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
kube01 Ready master 3d5h v1.13.4
kube02 Ready master 3d5h v1.13.4
kube03 Ready master 3d5h v1.13.4

The TiDB installation needs six PVs: PD needs three 1G PVs and TiKV needs three 10G PVs.
The environment has a spare 20G disk, vdc, which we split into two partitions, one of 2G and one of 18G.

[root@kube01 ~]# sgdisk -n 1:0:+2G /dev/vdc
Creating new GPT entries.
The operation has completed successfully.
[root@kube01 ~]# sgdisk -n 2:0:0 /dev/vdc
The operation has completed successfully.

By default tidb-operator treats a partition mounted at /mnt/disks/vol$i as a PV, so we mount the vdc1 and vdc2 partitions at vol0 and vol1.
TiDB recommends the ext4 filesystem.

mkfs.ext4 /dev/vdc1
mkfs.ext4 /dev/vdc2
mount /dev/vdc1 /mnt/disks/vol0/
mount /dev/vdc2 /mnt/disks/vol1/

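The mount points have to exist before mounting, and the mounts should survive a reboot; a minimal sketch (the mkdir and /etc/fstab entries are assumptions, not part of the original steps, and the same preparation is presumably repeated on every node):

mkdir -p /mnt/disks/vol0 /mnt/disks/vol1
echo "/dev/vdc1 /mnt/disks/vol0 ext4 defaults 0 0" >> /etc/fstab
echo "/dev/vdc2 /mnt/disks/vol1 ext4 defaults 0 0" >> /etc/fstab
mount -a
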
The local-volume-provisioner shipped with tidb-operator turns the partitions above into local PVs.

[root@kube01 local-dind]# kubectl apply -f local-volume-provisioner.yaml
storageclass.storage.k8s.io/local-storage changed
configmap/local-provisioner-config changed
daemonset.extensions/local-volume-provisioner changed
serviceaccount/local-storage-admin changed
clusterrolebinding.rbac.authorization.k8s.io/local-storage-provisioner-pv-binding changed
clusterrole.rbac.authorization.k8s.io/local-storage-provisioner-node-clusterrole changed
clusterrolebinding.rbac.authorization.k8s.io/local-storage-provisioner-node-binding changed

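Before installing the operator it is worth confirming that the provisioner actually produced the local PVs; a quick sketch (the PV names are generated, so the exact output will differ):

kubectl get storageclass local-storage
kubectl get pv -o wide
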
Install tidb-operator with Helm

In charts/tidb-operator/values.yaml, set
kubeSchedulerImage: gcr.io/google-containers/hyperkube:v1.13.4
so that the image version matches the kubelet version.

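One way to check the kubelet version before editing the file; a sketch:

# Print the kubelet version reported by each node
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.nodeInfo.kubeletVersion}{"\n"}{end}'
# Then make sure charts/tidb-operator/values.yaml contains:
#   kubeSchedulerImage: gcr.io/google-containers/hyperkube:v1.13.4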

[root@kube01 manifests]# kubectl apply -f crd.yaml
customresourcedefinition.apiextensions.k8s.io/tidbclusters.pingcap.com created

[root@kube01 tidb-operator]# kubectl get customresourcedefinitions
NAME CREATED AT
tidbclusters.pingcap.com 2019-03-18T05:53:22Z


[root@kube01 tidb-operator]# helm install charts/tidb-operator --name=tidb-operator --namespace=tidb-admin
NAME: tidb-operator
LAST DEPLOYED: Mon Mar 18 14:46:32 2019
NAMESPACE: tidb-admin
STATUS: DEPLOYED

RESOURCES:
==> v1/ConfigMap
NAME DATA AGE
tidb-scheduler-policy 1 0s

==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
tidb-controller-manager-7c56fb85dd-h6k5m 0/1 ContainerCreating 0 0s
tidb-scheduler-7f8b69d57b-xx8jq 0/2 ContainerCreating 0 0s

==> v1/ServiceAccount
NAME SECRETS AGE
tidb-controller-manager 1 0s
tidb-scheduler 1 0s

==> v1beta1/ClusterRole
NAME AGE
tidb-operator:tidb-controller-manager 0s
tidb-operator:tidb-scheduler 0s

==> v1beta1/ClusterRoleBinding
NAME AGE
tidb-operator:tidb-controller-manager 0s
tidb-operator:tidb-scheduler 0s

==> v1beta1/Deployment
NAME READY UP-TO-DATE AVAILABLE AGE
tidb-controller-manager 0/1 1 0 0s
tidb-scheduler 0/1 1 0 0s


NOTES:
1. Make sure tidb-operator components are running
kubectl get pods --namespace tidb-admin -l app.kubernetes.io/instance=tidb-operator
2. Install CRD
kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/manifests/crd.yaml
kubectl get customresourcedefinitions
3. Modify tidb-cluster/values.yaml and create a TiDB cluster by installing tidb-cluster charts
helm install tidb-cluster


[root@kube01 tidb-operator]# kubectl get pods --namespace tidb-admin -l app.kubernetes.io/instance=tidb-operator
NAME READY STATUS RESTARTS AGE
tidb-controller-manager-7c56fb85dd-h6k5m 1/1 Running 0 5m11s
tidb-scheduler-7f8b69d57b-xx8jq 2/2 Running 0 5m11s

Install tidb-cluster with Helm


[root@kube01 tidb-operator]# helm install charts/tidb-cluster --name=tidb-cluster --namespace=tidb
NAME: tidb-cluster
LAST DEPLOYED: Mon Mar 18 14:57:35 2019
NAMESPACE: tidb
STATUS: DEPLOYED

RESOURCES:
==> v1/ConfigMap
NAME DATA AGE
demo-monitor 3 0s
demo-pd 2 0s
demo-tidb 2 0s
demo-tikv 2 0s

==> v1/Job
NAME COMPLETIONS DURATION AGE
demo-monitor-configurator 0/1 0s 0s
demo-tidb-initializer 0/1 0s 0s

==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
demo-discovery-5468c7c556-8dcs5 0/1 ContainerCreating 0 0s
demo-monitor-84446b7957-rbdl9 0/2 Init:0/1 0 0s
demo-monitor-configurator-vz25r 0/1 ContainerCreating 0 0s
demo-tidb-initializer-dh8v2 0/1 ContainerCreating 0 0s

==> v1/Secret
NAME TYPE DATA AGE
demo-monitor Opaque 2 0s
demo-tidb Opaque 1 0s

==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
demo-discovery ClusterIP 172.30.201.100 <none> 10261/TCP 0s
demo-grafana NodePort 172.30.122.104 <none> 3000:32242/TCP 0s
demo-prometheus NodePort 172.30.59.221 <none> 9090:32099/TCP 0s
demo-tidb NodePort 172.30.181.223 <none> 4000:30344/TCP,10080:32651/TCP 0s

==> v1/ServiceAccount
NAME SECRETS AGE
demo-discovery 1 0s
demo-monitor 1 0s

==> v1alpha1/TidbCluster
NAME AGE
demo 0s


==> v1beta1/Deployment
NAME READY UP-TO-DATE AVAILABLE AGE
demo-discovery 0/1 1 0 0s
demo-monitor 0/1 1 0 0s

==> v1beta1/Role
NAME AGE
demo-discovery 0s
demo-monitor 0s

==> v1beta1/RoleBinding
NAME AGE
demo-discovery 0s
demo-monitor 0s


NOTES:
1. Watch tidb-cluster up and running
watch kubectl get pods --namespace tidb -l app.kubernetes.io/instance=tidb-cluster -o wide
2. List services in the tidb-cluster
kubectl get services --namespace tidb -l app.kubernetes.io/instance=tidb-cluster
3. Wait until tidb-initializer pod becomes completed
watch kubectl get po --namespace tidb -l app.kubernetes.io/component=tidb-initializer
4. Get the TiDB password
PASSWORD=$(kubectl get secret -n tidb demo-tidb -o jsonpath="{.data.password}" | base64 --decode | awk '{print $6}')
echo ${PASSWORD}
5. Access tidb-cluster using the MySQL client
kubectl port-forward -n tidb svc/demo-tidb 4000:4000 &
mysql -h 127.0.0.1 -P 4000 -u root -D test -p
6. View monitor dashboard for TiDB cluster
kubectl port-forward -n tidb svc/demo-grafana 3000:3000
Open browser at http://localhost:3000. The default username and password is admin/admin.


[root@kube01 tidb-operator]# kubectl get tidbcluster -n tidb
NAME AGE
demo 1m

[root@kube01 tidb-operator]# kubectl get statefulset -n tidb
NAME READY AGE
demo-pd 3/3 5m41s
demo-tidb 2/2 3m16s
demo-tikv 3/3 5m

[root@kube01 tidb-operator]# kubectl get service -n tidb
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
demo-discovery ClusterIP 172.30.25.248 <none> 10261/TCP 6m
demo-grafana NodePort 172.30.103.60 <none> 3000:31597/TCP 6m
demo-pd ClusterIP 172.30.122.100 <none> 2379/TCP 6m
demo-pd-peer ClusterIP None <none> 2380/TCP 6m
demo-prometheus NodePort 172.30.147.201 <none> 9090:30429/TCP 6m
demo-tidb NodePort 172.30.244.61 <none> 4000:30512/TCP,10080:31051/TCP 6m
demo-tidb-peer ClusterIP None <none> 10080/TCP 3m34s
demo-tikv-peer ClusterIP None <none> 20160/TCP 5m18s


[root@kube01 tidb-operator]# kubectl get configmap -n tidb
NAME DATA AGE
demo-monitor 3 6m20s
demo-pd 2 6m20s
demo-tidb 2 6m20s
demo-tikv 2 6m20s


[root@kube01 tidb-operator]# kubectl get pod -n tidb
NAME READY STATUS RESTARTS AGE
demo-discovery-5468c7c556-l9xl7 1/1 Running 0 6m41s
demo-monitor-84446b7957-4zsnd 2/2 Running 0 6m41s
demo-monitor-configurator-rnklq 0/1 Completed 0 6m41s
demo-pd-0 1/1 Running 0 6m40s
demo-pd-1 1/1 Running 0 6m40s
demo-pd-2 1/1 Running 1 6m40s
demo-tidb-0 1/1 Running 0 4m15s
demo-tidb-1 1/1 Running 0 4m15s
demo-tidb-initializer-nkng4 0/1 Completed 0 6m41s
demo-tikv-0 2/2 Running 0 5m59s
demo-tikv-1 2/2 Running 0 5m59s
demo-tikv-2 2/2 Running 0 5m59s

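It is also worth checking that each PD and TiKV pod claimed one of the local PVs prepared earlier; a sketch:

kubectl get pvc -n tidb
kubectl get pv | grep local-storage
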
Verify with the MySQL client

[root@kube01 tidb-operator]# kubectl port-forward svc/demo-tidb 4000:4000 --namespace=tidb
Forwarding from 127.0.0.1:4000 -> 4000


[root@kube01 ~]# PASSWORD=$(kubectl get secret -n tidb demo-tidb -ojsonpath="{.data.password}" | base64 --decode | awk '{print $6}')
[root@kube01 ~]# echo ${PASSWORD}
'IwDSSjpq89'
[root@kube01 ~]# mysql -h 127.0.0.1 -P 4000 -u root -p
Enter password:
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MySQL connection id is 3
Server version: 5.7.10-TiDB-v2.1.4 MySQL Community Server (Apache License 2.0)

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MySQL [(none)]> show databases;
+--------------------+
| Database |
+--------------------+
| INFORMATION_SCHEMA |
| PERFORMANCE_SCHEMA |
| mysql |
| test |
+--------------------+
4 rows in set (0.00 sec)

MySQL [(none)]>

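As a quick write test through the same port-forward, a sketch (the table name t1 is arbitrary):

mysql -h 127.0.0.1 -P 4000 -u root -p -e "CREATE TABLE test.t1 (id INT); INSERT INTO test.t1 VALUES (1); SELECT * FROM test.t1;"
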
References

  • https://github.com/pingcap/tidb-operator/blob/master/docs/local-dind-tutorial.md
  • https://github.com/pingcap/tidb-operator/blob/master/docs/setup.md

Using an erasure-coded pool for Ceph object storage

Posted on 2019-03-13

Overview

This post verifies Ceph object storage running on an erasure-coded pool.
The erasure code profile used here is K+M = 4+2, which in theory tolerates the failure of M OSDs.

Configuration

Create an erasure-code profile and CRUSH rule

[root@ceph04 ~]# ceph osd erasure-code-profile set rgw_ec_profile k=4 m=2 crush-root=root_rgw plugin=isa crush-failure-domain=host
[root@ceph04 ~]# ceph osd erasure-code-profile get rgw_ec_profile
crush-device-class=
crush-failure-domain=host
crush-root=root_rgw
k=4
m=2
plugin=isa
technique=reed_sol_van
[root@ceph04 ~]# ceph osd crush rule create-erasure rgw_ec_rule rgw_ec_profile
created rule rgw_ec_rule at 2
[root@ceph04 ~]# ceph osd crush rule dump
[
{
"rule_id": 0,
"rule_name": "replicated_rule",
"ruleset": 0,
"type": 1,
"min_size": 1,
"max_size": 10,
"steps": [
{
"op": "take",
"item": -1,
"item_name": "default"
},
{
"op": "chooseleaf_firstn",
"num": 0,
"type": "host"
},
{
"op": "emit"
}
]
},
{
"rule_id": 1,
"rule_name": "rule_rgw",
"ruleset": 1,
"type": 1,
"min_size": 1,
"max_size": 10,
"steps": [
{
"op": "take",
"item": -13,
"item_name": "root_rgw"
},
{
"op": "chooseleaf_firstn",
"num": 0,
"type": "host"
},
{
"op": "emit"
}
]
},
{
"rule_id": 2,
"rule_name": "rgw_ec_rule",
"ruleset": 2,
"type": 3,
"min_size": 3,
"max_size": 6,
"steps": [
{
"op": "set_chooseleaf_tries",
"num": 5
},
{
"op": "set_choose_tries",
"num": 100
},
{
"op": "take",
"item": -13,
"item_name": "root_rgw"
},
{
"op": "chooseleaf_indep",
"num": 0,
"type": "host"
},
{
"op": "emit"
}
]
}
]

Since the test environment has only three hosts, the CRUSH rule needs adjusting: first choose 3 hosts, then choose 2 OSDs on each host.

ceph osd getcrushmap -o crushmap

crushtool -d crushmap -o crushmap.txt
rule rgw_ec_rule {
id 2
type erasure
min_size 3
max_size 6
step set_chooseleaf_tries 5
step set_choose_tries 100
step take root_rgw
step choose indep 3 type host
step choose indep 2 type osd
step emit
}
crushtool -c crushmap.txt -o crushmap
ceph osd setcrushmap -i crushmap

Since the environment holds no data yet, stop the RGW first, delete the default default.rgw.buckets.data pool, and then recreate default.rgw.buckets.data as an erasure-coded pool.

[root@ceph04 ~]# ceph osd pool create default.rgw.buckets.data 64 64 erasure rgw_ec_profile rgw_ec_rule
pool 'default.rgw.buckets.data' created

[root@ceph04 ~]# ceph osd pool application enable default.rgw.buckets.data rgw
enabled application 'rgw' on pool 'default.rgw.buckets.data'

[root@ceph04 ~]# ceph -s
cluster:
id: 57df615e-6dec-4f69-84f0-72a2ba76d4d7
health: HEALTH_WARN
noout flag(s) set
too few PGs per OSD (21 < min 30)
clock skew detected on mon.ceph04

services:
mon: 3 daemons, quorum ceph06,ceph05,ceph04
mgr: ceph04(active), standbys: ceph06, ceph05
osd: 18 osds: 18 up, 18 in
flags noout

data:
pools: 1 pools, 64 pgs
objects: 0 objects, 0 bytes
usage: 19351 MB used, 339 GB / 358 GB avail
pgs: 64 active+clean

The newly created pool has size k+m=6 and, by default, min_size=k+1=5. When fewer shards than min_size are available, PGs become incomplete, so we also lower the pool's min_size to 4; the pool can then tolerate two failed OSDs.

[root@ceph04 ~]# ceph osd pool ls detail
pool 33 'default.rgw.buckets.data' erasure size 6 min_size 5 crush_rule 2 object_hash rjenkins pg_num 64 pgp_num 64 last_change 466 flags hashpspool stripe_width 16384 application rgw
pool 34 '.rgw.root' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 405 flags hashpspool stripe_width 0 application rgw
pool 35 'default.rgw.meta' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 407 flags hashpspool stripe_width 0 application rgw
pool 36 'default.rgw.log' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 409 flags hashpspool stripe_width 0 application rgw
pool 37 'default.rgw.buckets.index' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 412 flags hashpspool stripe_width 0 application rgw


[root@ceph04 ~]# ceph osd pool set default.rgw.buckets.data min_size 4
set pool 33 min_size to 4
[root@ceph04 ~]# ceph osd pool ls detail
pool 33 'default.rgw.buckets.data' erasure size 6 min_size 4 crush_rule 2 object_hash rjenkins pg_num 64 pgp_num 64 last_change 483 flags hashpspool stripe_width 16384 application rgw
pool 34 '.rgw.root' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 405 flags hashpspool stripe_width 0 application rgw
pool 35 'default.rgw.meta' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 407 flags hashpspool stripe_width 0 application rgw
pool 36 'default.rgw.log' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 409 flags hashpspool stripe_width 0 application rgw
pool 37 'default.rgw.buckets.index' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 412 flags hashpspool stripe_width 0 application rgw

Verification

Create an object storage user and verify with s3cmd

[root@ceph04 ~]# radosgw-admin user create --uid=test --display-name=test
{
"user_id": "test",
"display_name": "test",
"email": "",
"suspended": 0,
"max_buckets": 1000,
"auid": 0,
"subusers": [],
"keys": [
{
"user": "test",
"access_key": "DE8EBP0W6WSO9SGYGV66",
"secret_key": "AG6ufkpWuO4pUtJLOKKimfZNvVmwVsMkeXZmmBgi"
}
],
"swift_keys": [],
"caps": [],
"op_mask": "read, write, delete",
"default_placement": "",
"placement_tags": [],
"bucket_quota": {
"enabled": false,
"check_on_raw": false,
"max_size": -1,
"max_size_kb": 0,
"max_objects": -1
},
"user_quota": {
"enabled": false,
"check_on_raw": false,
"max_size": -1,
"max_size_kb": 0,
"max_objects": -1
},
"temp_url_keys": [],
"type": "rgw"
}
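
s3cmd has to be pointed at the local RGW with the keys just created; a minimal sketch that writes ~/.s3cfg (the host_base/host_bucket endpoint is an assumption about this environment):

cat > ~/.s3cfg <<'EOF'
[default]
access_key = DE8EBP0W6WSO9SGYGV66
secret_key = AG6ufkpWuO4pUtJLOKKimfZNvVmwVsMkeXZmmBgi
# host_base/host_bucket assume the RGW runs on ceph04 on the default civetweb port 7480
host_base = ceph04:7480
host_bucket = ceph04:7480
use_https = False
EOF
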
[root@ceph04 ~]# s3cmd ls s3://
[root@ceph04 ~]# s3cmd mb s3://test
Bucket 's3://test/' created

With two OSDs stopped, no PG becomes incomplete, and uploads and downloads through s3cmd continue to work.

[root@ceph04 ~]# systemctl stop ceph-osd@0
[root@ceph04 ~]# systemctl stop ceph-osd@1
[root@ceph04 ~]# ceph -s
cluster:
id: 57df615e-6dec-4f69-84f0-72a2ba76d4d7
health: HEALTH_WARN
noout flag(s) set
2 osds down
Degraded data redundancy: 471/3570 objects degraded (13.193%), 12 pgs degraded
too few PGs per OSD (26 < min 30)
clock skew detected on mon.ceph05, mon.ceph04

services:
mon: 3 daemons, quorum ceph06,ceph05,ceph04
mgr: ceph04(active), standbys: ceph06, ceph05
osd: 18 osds: 16 up, 18 in
flags noout
rgw: 1 daemon active

data:
pools: 5 pools, 96 pgs
objects: 1184 objects, 15056 kB
usage: 19554 MB used, 339 GB / 358 GB avail
pgs: 471/3570 objects degraded (13.193%)
43 active+undersized
41 active+clean
12 active+undersized+degraded

io:
client: 170 B/s rd, 0 B/s wr, 0 op/s rd, 0 op/s wr

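A quick way to confirm that I/O still works with the two OSDs down; a sketch (the object name is arbitrary):

s3cmd put /etc/hosts s3://test/hosts
s3cmd get s3://test/hosts /tmp/hosts.check
md5sum /etc/hosts /tmp/hosts.check
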
References

  • http://www.zphj1987.com/2018/06/12/ceph-erasure-default-min-size/

Rook

Posted on 2019-03-12

Overview

This post describes how to deploy a Ceph cluster on Kubernetes with Rook.
The test Kubernetes cluster has three nodes:

[root@kube01 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
kube01 Ready master 3h3m v1.13.4
kube02 Ready master 175m v1.13.4
kube03 Ready master 172m v1.13.4

Rook deployment

Clone the Rook repository

git clone https://github.com/rook/rook.git
cd rook/
git checkout v0.9.3

Run the rook-ceph operator with kubectl

[root@kube01 rook]# kubectl apply -f cluster/examples/kubernetes/ceph/operator.yaml
namespace/rook-ceph-system created
customresourcedefinition.apiextensions.k8s.io/cephclusters.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephfilesystems.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephobjectstores.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephobjectstoreusers.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephblockpools.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/volumes.rook.io created
clusterrole.rbac.authorization.k8s.io/rook-ceph-cluster-mgmt created
role.rbac.authorization.k8s.io/rook-ceph-system created
clusterrole.rbac.authorization.k8s.io/rook-ceph-global created
clusterrole.rbac.authorization.k8s.io/rook-ceph-mgr-cluster created
serviceaccount/rook-ceph-system created
rolebinding.rbac.authorization.k8s.io/rook-ceph-system created
clusterrolebinding.rbac.authorization.k8s.io/rook-ceph-global created
deployment.apps/rook-ceph-operator created

Wait until all pods are Running.

[root@kube01 rook]# kubectl get pods -n rook-ceph-system -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
rook-ceph-agent-7zgcv 1/1 Running 0 2m15s 192.168.10.181 kube01 <none> <none>
rook-ceph-agent-ww4sk 1/1 Running 0 2m15s 192.168.10.129 kube02 <none> <none>
rook-ceph-agent-xjgm4 1/1 Running 0 2m15s 192.168.10.44 kube03 <none> <none>
rook-ceph-operator-5f4ff4d57d-fm8s5 1/1 Running 0 3m41s 10.244.1.2 kube02 <none> <none>
rook-discover-2mwpn 1/1 Running 0 2m15s 10.244.0.10 kube01 <none> <none>
rook-discover-nqhr8 1/1 Running 0 2m15s 10.244.1.3 kube02 <none> <none>
rook-discover-wgpng 1/1 Running 0 2m15s 10.244.2.2 kube03 <none> <none>

Ceph will use each node's vdb disk as an OSD, so edit cluster/examples/kubernetes/ceph/cluster.yaml accordingly:

storage: # cluster level storage configuration and selection
useAllNodes: true
useAllDevices: false
deviceFilter: "^vdb"
location:
config:
# The default and recommended storeType is dynamically set to bluestore for devices and filestore for directories.
# Set the storeType explicitly only if it is required not to use the default.
storeType: bluestore
#databaseSizeMB: "1024" # this value can be removed for environments with normal sized disks (100 GB or larger)
#journalSizeMB: "1024" # this value can be removed for environments with normal sized disks (20 GB or larger)
osdsPerDevice: "1" # this value can be overridden at the node or device level

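Before applying cluster.yaml it is worth confirming that vdb on each node is present and carries no filesystem or partitions, since Rook only consumes clean devices; a sketch:

lsblk -f /dev/vdb
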
Deploy the Ceph cluster with kubectl

[root@kube01 rook]# kubectl apply -f cluster/examples/kubernetes/ceph/cluster.yaml
namespace/rook-ceph created
serviceaccount/rook-ceph-osd created
serviceaccount/rook-ceph-mgr created
role.rbac.authorization.k8s.io/rook-ceph-osd created
role.rbac.authorization.k8s.io/rook-ceph-mgr-system created
role.rbac.authorization.k8s.io/rook-ceph-mgr created
rolebinding.rbac.authorization.k8s.io/rook-ceph-cluster-mgmt created
rolebinding.rbac.authorization.k8s.io/rook-ceph-osd created
rolebinding.rbac.authorization.k8s.io/rook-ceph-mgr created
rolebinding.rbac.authorization.k8s.io/rook-ceph-mgr-system created
rolebinding.rbac.authorization.k8s.io/rook-ceph-mgr-cluster created
cephcluster.ceph.rook.io/rook-ceph created

Once the Ceph deployment finishes, there are three mons, one mgr, and three OSDs.

[root@kube01 rook]# kubectl get pods -n rook-ceph
NAME READY STATUS RESTARTS AGE
rook-ceph-mgr-a-66db78887f-lmhcf 1/1 Running 0 4m13s
rook-ceph-mon-a-b6556df54-t7cd4 1/1 Running 0 4m53s
rook-ceph-mon-b-7f84c6d4b-nhjj6 1/1 Running 0 4m42s
rook-ceph-mon-c-868c5b476b-ghfqs 1/1 Running 0 4m32s
rook-ceph-osd-0-6fdf57bcb7-bf6f2 1/1 Running 0 54s
rook-ceph-osd-1-8c99b7447-h4mfd 1/1 Running 0 39s
rook-ceph-osd-2-66b76d6944-5df9j 1/1 Running 0 27s
rook-ceph-osd-prepare-kube01-dfrp2 0/2 Completed 0 3m50s
rook-ceph-osd-prepare-kube02-pnbsc 0/2 Completed 0 3m49s
rook-ceph-osd-prepare-kube03-4ztl9 0/2 Completed 0 3m48s

Install the ceph toolbox; from inside the ceph-tools pod you can run Ceph commands and check the cluster status.

[root@kube01 rook]# kubectl apply -f cluster/examples/kubernetes/ceph/toolbox.yaml
deployment.apps/rook-ceph-tools created
[root@kube01 rook]# kubectl get pods -n rook-ceph
NAME READY STATUS RESTARTS AGE
rook-ceph-mgr-a-66db78887f-lmhcf 1/1 Running 0 5m10s
rook-ceph-mon-a-b6556df54-t7cd4 1/1 Running 0 5m50s
rook-ceph-mon-b-7f84c6d4b-nhjj6 1/1 Running 0 5m39s
rook-ceph-mon-c-868c5b476b-ghfqs 1/1 Running 0 5m29s
rook-ceph-osd-0-6fdf57bcb7-bf6f2 1/1 Running 0 111s
rook-ceph-osd-1-8c99b7447-h4mfd 1/1 Running 0 96s
rook-ceph-osd-2-66b76d6944-5df9j 1/1 Running 0 84s
rook-ceph-osd-prepare-kube01-d2rrt 1/2 Running 0 49s
rook-ceph-osd-prepare-kube02-jxl6g 1/2 Running 0 47s
rook-ceph-osd-prepare-kube03-fz276 1/2 Running 0 45s
rook-ceph-tools-544fb656d-tddrx 1/1 Running 0 3s

Log in to the ceph-tools pod

[root@kube01 rook]# kubectl exec -it rook-ceph-tools-544fb656d-tddrx bash -n rook-ceph
bash: warning: setlocale: LC_CTYPE: cannot change locale (en_US.UTF-8): No such file or directory
bash: warning: setlocale: LC_COLLATE: cannot change locale (en_US.UTF-8): No such file or directory
bash: warning: setlocale: LC_MESSAGES: cannot change locale (en_US.UTF-8): No such file or directory
bash: warning: setlocale: LC_NUMERIC: cannot change locale (en_US.UTF-8): No such file or directory
bash: warning: setlocale: LC_TIME: cannot change locale (en_US.UTF-8): No such file or directory
[root@kube03 /]# ceph -s
cluster:
id: f9609ec9-62c7-4462-a4f2-35c4137c25ef
health: HEALTH_OK

services:
mon: 3 daemons, quorum c,a,b
mgr: a(active)
osd: 3 osds: 3 up, 3 in

data:
pools: 0 pools, 0 pgs
objects: 0 objects, 0 B
usage: 3.0 GiB used, 57 GiB / 60 GiB avail
pgs:

[root@kube03 /]# ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 0.05846 root default
-7 0.01949 host kube01
2 hdd 0.01949 osd.2 up 1.00000 1.00000
-3 0.01949 host kube02
0 hdd 0.01949 osd.0 up 1.00000 1.00000
-5 0.01949 host kube03
1 hdd 0.01949 osd.1 up 1.00000 1.00000

The deployed Ceph cluster does not yet include an RGW service; add one as follows:

[root@kube01 rook]# kubectl apply -f cluster/examples/kubernetes/ceph/object.yaml
cephobjectstore.ceph.rook.io/my-store created

[root@kube01 rook]# kubectl get pods -n rook-ceph
NAME READY STATUS RESTARTS AGE
rook-ceph-mgr-a-66db78887f-lmhcf 1/1 Running 0 11m
rook-ceph-mon-a-b6556df54-t7cd4 1/1 Running 0 11m
rook-ceph-mon-b-7f84c6d4b-nhjj6 1/1 Running 0 11m
rook-ceph-mon-c-868c5b476b-ghfqs 1/1 Running 0 11m
rook-ceph-osd-0-f986cc57d-6xclh 1/1 Running 0 6m
rook-ceph-osd-1-765556c558-jwv2n 1/1 Running 0 5m40s
rook-ceph-osd-2-766db888c7-j7z8f 1/1 Running 0 5m48s
rook-ceph-osd-prepare-kube01-d2rrt 0/2 Completed 0 6m57s
rook-ceph-osd-prepare-kube02-jxl6g 0/2 Completed 0 6m55s
rook-ceph-osd-prepare-kube03-fz276 0/2 Completed 0 6m53s
rook-ceph-rgw-my-store-5b68744bc6-pc7g7 1/1 Running 0 21s
rook-ceph-tools-544fb656d-tddrx 1/1 Running 0 6m11s

[root@kube01 rook]# kubectl exec -it rook-ceph-tools-544fb656d-tddrx bash -n rook-ceph
bash: warning: setlocale: LC_CTYPE: cannot change locale (en_US.UTF-8): No such file or directory
bash: warning: setlocale: LC_COLLATE: cannot change locale (en_US.UTF-8): No such file or directory
bash: warning: setlocale: LC_MESSAGES: cannot change locale (en_US.UTF-8): No such file or directory
bash: warning: setlocale: LC_NUMERIC: cannot change locale (en_US.UTF-8): No such file or directory
bash: warning: setlocale: LC_TIME: cannot change locale (en_US.UTF-8): No such file or directory
[root@kube03 /]# ceph -s
cluster:
id: f9609ec9-62c7-4462-a4f2-35c4137c25ef
health: HEALTH_OK

services:
mon: 3 daemons, quorum c,a,b
mgr: a(active)
osd: 3 osds: 3 up, 3 in
rgw: 1 daemon active

data:
pools: 6 pools, 600 pgs
objects: 201 objects, 3.7 KiB
usage: 3.0 GiB used, 57 GiB / 60 GiB avail
pgs: 600 active+clean

The RGW service can be exposed externally as a NodePort:

[root@kube01 rook]# kubectl apply -f cluster/examples/kubernetes/ceph/rgw-external.yaml
service/rook-ceph-rgw-my-store-external created

[root@kube01 rook]# kubectl get services -n rook-ceph
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
rook-ceph-mgr ClusterIP 172.30.117.80 <none> 9283/TCP 12m
rook-ceph-mgr-dashboard ClusterIP 172.30.225.42 <none> 8443/TCP 12m
rook-ceph-mon-a ClusterIP 172.30.82.3 <none> 6790/TCP 13m
rook-ceph-mon-b ClusterIP 172.30.122.28 <none> 6790/TCP 13m
rook-ceph-mon-c ClusterIP 172.30.62.26 <none> 6790/TCP 13m
rook-ceph-rgw-my-store ClusterIP 172.30.107.175 <none> 80/TCP 2m47s
rook-ceph-rgw-my-store-external NodePort 172.30.131.185 <none> 80:32185/TCP 4m2s

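With the NodePort in place the RGW is reachable from outside the cluster on any node IP; a sketch (192.168.10.181 is kube01's address from the agent pod listing above, 32185 is the NodePort shown above):

curl http://192.168.10.181:32185
# An anonymous request should return a ListAllMyBuckets XML response from the RGW.
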
Next, install the Ceph MDS:

[root@kube01 rook]# kubectl apply -f cluster/examples/kubernetes/ceph/filesystem.yaml
cephfilesystem.ceph.rook.io/myfs created

[root@kube01 rook]# kubectl get pods -n rook-ceph
NAME READY STATUS RESTARTS AGE
rook-ceph-mds-myfs-a-6bbc59cbc8-fp5px 1/1 Running 0 11s
rook-ceph-mds-myfs-b-658fc8fd66-5cd9g 1/1 Running 0 11s
rook-ceph-mgr-a-66db78887f-lmhcf 1/1 Running 0 23m
rook-ceph-mon-a-b6556df54-t7cd4 1/1 Running 0 24m
rook-ceph-mon-b-7f84c6d4b-nhjj6 1/1 Running 0 24m
rook-ceph-mon-c-868c5b476b-ghfqs 1/1 Running 0 23m
rook-ceph-osd-0-f986cc57d-6xclh 1/1 Running 0 18m
rook-ceph-osd-1-765556c558-jwv2n 1/1 Running 0 17m
rook-ceph-osd-2-766db888c7-j7z8f 1/1 Running 0 18m
rook-ceph-osd-prepare-kube01-d2rrt 0/2 Completed 0 19m
rook-ceph-osd-prepare-kube02-jxl6g 0/2 Completed 0 19m
rook-ceph-osd-prepare-kube03-fz276 0/2 Completed 0 19m
rook-ceph-rgw-my-store-5b68744bc6-pc7g7 1/1 Running 0 12m
rook-ceph-tools-544fb656d-tddrx 1/1 Running 0 18m

[root@kube01 rook]# kubectl exec -it rook-ceph-tools-544fb656d-tddrx bash -n rook-ceph
bash: warning: setlocale: LC_CTYPE: cannot change locale (en_US.UTF-8): No such file or directory
bash: warning: setlocale: LC_COLLATE: cannot change locale (en_US.UTF-8): No such file or directory
bash: warning: setlocale: LC_MESSAGES: cannot change locale (en_US.UTF-8): No such file or directory
bash: warning: setlocale: LC_NUMERIC: cannot change locale (en_US.UTF-8): No such file or directory
bash: warning: setlocale: LC_TIME: cannot change locale (en_US.UTF-8): No such file or directory
[root@kube03 /]# ceph -s
cluster:
id: f9609ec9-62c7-4462-a4f2-35c4137c25ef
health: HEALTH_OK

services:
mon: 3 daemons, quorum c,a,b
mgr: a(active)
mds: myfs-1/1/1 up {0=myfs-b=up:active}, 1 up:standby-replay
osd: 3 osds: 3 up, 3 in
rgw: 1 daemon active

data:
pools: 8 pools, 800 pgs
objects: 224 objects, 5.9 KiB
usage: 3.0 GiB used, 57 GiB / 60 GiB avail
pgs: 800 active+clean

io:
client: 852 B/s rd, 1 op/s rd, 0 op/s wr

The cluster/examples/kubernetes/ceph/ directory contains other YAML files for further operations on the Ceph cluster, such as enabling the mgr dashboard or installing Prometheus monitoring; see the sketch after the listing below.

-rwxr-xr-x 1 root root  8139 Mar 15 15:20 cluster.yaml
-rw-r--r-- 1 root root 363 Mar 15 14:47 dashboard-external-https.yaml
-rw-r--r-- 1 root root 362 Mar 15 14:47 dashboard-external-http.yaml
-rw-r--r-- 1 root root 1487 Mar 15 14:47 ec-filesystem.yaml
-rw-r--r-- 1 root root 1538 Mar 15 14:47 ec-storageclass.yaml
-rw-r--r-- 1 root root 1375 Mar 15 14:47 filesystem.yaml
-rw-r--r-- 1 root root 1923 Mar 15 14:48 kube-registry.yaml
drwxr-xr-x 2 root root 85 Mar 15 16:07 monitoring
-rw-r--r-- 1 root root 160 Mar 15 14:47 object-user.yaml
-rw-r--r-- 1 root root 1813 Mar 15 14:47 object.yaml
-rwxr-xr-x 1 root root 12690 Mar 15 14:48 operator.yaml
-rw-r--r-- 1 root root 742 Mar 15 14:47 pool.yaml
-rw-r--r-- 1 root root 410 Mar 15 14:47 rgw-external.yaml
-rw-r--r-- 1 root root 1216 Mar 15 14:47 scc.yaml
-rw-r--r-- 1 root root 991 Mar 15 14:47 storageclass.yaml
-rw-r--r-- 1 root root 1544 Mar 15 14:48 toolbox.yaml
-rw-r--r-- 1 root root 6492 Mar 15 14:47 upgrade-from-v0.8-create.yaml
-rw-r--r-- 1 root root 874 Mar 15 14:47 upgrade-from-v0.8-replace.yaml

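For example, the mgr dashboard can be exposed with one of the dashboard-external manifests listed above; a sketch (the resulting service name is whatever the manifest defines):

kubectl apply -f cluster/examples/kubernetes/ceph/dashboard-external-https.yaml
kubectl get svc -n rook-ceph | grep dashboard
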
References

  • http://www.yangguanjun.com/2018/12/22/rook-ceph-practice-part1/
  • http://www.yangguanjun.com/2018/12/28/rook-ceph-practice-part2/

Ceph RGW multi-site configuration

Posted on 2019-03-09

Overview

This post describes how to configure Ceph RGW asynchronous replication, which enables cross-datacenter disaster recovery.
RGW active-active runs between multiple zones in the same zonegroup: the data in all zones of a zonegroup is kept identical, and users can read and write the same data through any zone. Metadata operations, such as creating buckets or users, can still only be performed in the master zone, while data operations, such as creating or reading objects in a bucket, can be handled in any zone.

Environment

The test environment consists of two Ceph clusters:
Cluster ceph101

[root@ceph101 ~]# ceph -s
cluster:
id: d6e42188-9871-471b-9db0-957f47893902
health: HEALTH_OK

services:
mon: 1 daemons, quorum ceph101
mgr: ceph101(active)
osd: 3 osds: 3 up, 3 in
rgw: 1 daemon active

data:
pools: 4 pools, 32 pgs
objects: 187 objects, 1.09KiB
usage: 3.01GiB used, 56.7GiB / 59.7GiB avail
pgs: 32 active+clean

Cluster ceph102

[root@ceph102 ~]# ceph -s
cluster:
id: 2e80de18-e95f-463f-9eb0-531fd3254f0b
health: HEALTH_OK

services:
mon: 1 daemons, quorum ceph102
mgr: ceph102(active)
osd: 3 osds: 3 up, 3 in
rgw: 1 daemon active

data:
pools: 4 pools, 32 pgs
objects: 187 objects, 1.09KiB
usage: 3.01GiB used, 56.7GiB / 59.7GiB avail
pgs: 32 active+clean

Each of the two clusters runs one RGW service; this experiment uses them to verify the RGW multi-site configuration and its behaviour.
The first cluster (ceph101) serves as the master and ceph102 as the secondary.

Multi-site configuration

Create a realm named realm100 on the master cluster

[root@ceph101 ~]# radosgw-admin realm create --rgw-realm=realm100 --default
{
"id": "337cd1c3-1ad0-4975-b220-e021a7f2b3eb",
"name": "realm100",
"current_period": "bd6ecbd6-3a28-46d7-a806-22e9ea001ca3",
"epoch": 1
}

Create the master zonegroup

[root@ceph101 ~]# radosgw-admin zonegroup create --rgw-zonegroup=cn --endpoints=http://172.16.143.201:8080 --rgw-realm=realm100 --master --default
{
"id": "2b3f1ea0-73e7-4f17-b4f9-1acf5d5c6033",
"name": "cn",
"api_name": "cn",
"is_master": "true",
"endpoints": [
"http://172.16.143.201:8080"
],
"hostnames": [],
"hostnames_s3website": [],
"master_zone": "",
"zones": [],
"placement_targets": [],
"default_placement": "",
"realm_id": "ba638e8a-8a33-4607-8fe3-13aa69dd1758"
}

Create the master zone

[root@ceph101 ~]# radosgw-admin zone create --rgw-zonegroup=cn --rgw-zone=shanghai --master --default --endpoints=http://172.16.143.201:8080
{
"id": "9c655173-6346-47e7-9759-5e5d32aa017d",
"name": "shanghai",
"domain_root": "shanghai.rgw.meta:root",
"control_pool": "shanghai.rgw.control",
"gc_pool": "shanghai.rgw.log:gc",
"lc_pool": "shanghai.rgw.log:lc",
"log_pool": "shanghai.rgw.log",
"intent_log_pool": "shanghai.rgw.log:intent",
"usage_log_pool": "shanghai.rgw.log:usage",
"reshard_pool": "shanghai.rgw.log:reshard",
"user_keys_pool": "shanghai.rgw.meta:users.keys",
"user_email_pool": "shanghai.rgw.meta:users.email",
"user_swift_pool": "shanghai.rgw.meta:users.swift",
"user_uid_pool": "shanghai.rgw.meta:users.uid",
"system_key": {
"access_key": "",
"secret_key": ""
},
"placement_pools": [
{
"key": "default-placement",
"val": {
"index_pool": "shanghai.rgw.buckets.index",
"data_pool": "shanghai.rgw.buckets.data",
"data_extra_pool": "shanghai.rgw.buckets.non-ec",
"index_type": 0,
"compression": ""
}
}
],
"metadata_heap": "",
"tier_config": [],
"realm_id": "ba638e8a-8a33-4607-8fe3-13aa69dd1758"
}

Update the period

[root@ceph101 ~]# radosgw-admin period update --commit
{
"id": "064c5b5e-eb87-462c-aa37-c420ddd68b23",
"epoch": 1,
"predecessor_uuid": "1abfe2d0-7453-4c01-881b-3db7f53f15c1",
"sync_status": [],
"period_map": {
"id": "064c5b5e-eb87-462c-aa37-c420ddd68b23",
"zonegroups": [
{
"id": "2b3f1ea0-73e7-4f17-b4f9-1acf5d5c6033",
"name": "cn",
"api_name": "cn",
"is_master": "true",
"endpoints": [
"http://172.16.143.201:8080"
],
"hostnames": [],
"hostnames_s3website": [],
"master_zone": "9c655173-6346-47e7-9759-5e5d32aa017d",
"zones": [
{
"id": "9c655173-6346-47e7-9759-5e5d32aa017d",
"name": "shanghai",
"endpoints": [
"http://172.16.143.201:8080"
],
"log_meta": "false",
"log_data": "false",
"bucket_index_max_shards": 0,
"read_only": "false",
"tier_type": "",
"sync_from_all": "true",
"sync_from": []
}
],
"placement_targets": [
{
"name": "default-placement",
"tags": []
}
],
"default_placement": "default-placement",
"realm_id": "ba638e8a-8a33-4607-8fe3-13aa69dd1758"
}
],
"short_zone_ids": [
{
"key": "9c655173-6346-47e7-9759-5e5d32aa017d",
"val": 1556250220
}
]
},
"master_zonegroup": "2b3f1ea0-73e7-4f17-b4f9-1acf5d5c6033",
"master_zone": "9c655173-6346-47e7-9759-5e5d32aa017d",
"period_config": {
"bucket_quota": {
"enabled": false,
"check_on_raw": false,
"max_size": -1,
"max_size_kb": 0,
"max_objects": -1
},
"user_quota": {
"enabled": false,
"check_on_raw": false,
"max_size": -1,
"max_size_kb": 0,
"max_objects": -1
}
},
"realm_id": "ba638e8a-8a33-4607-8fe3-13aa69dd1758",
"realm_name": "realm100",
"realm_epoch": 2
}

Create the synchronization user

[root@ceph101 ~]# radosgw-admin user create --uid="syncuser" --display-name="Synchronization User" --system
{
"user_id": "syncuser",
"display_name": "Synchronization User",
"email": "",
"suspended": 0,
"max_buckets": 1000,
"auid": 0,
"subusers": [],
"keys": [
{
"user": "syncuser",
"access_key": "LPTHGKYO5ULI48Q88AWF",
"secret_key": "jGFoORTVt72frRYsmcnPOpGXnz652Dl3C2IeBLN8"
}
],
"swift_keys": [],
"caps": [],
"op_mask": "read, write, delete",
"system": "true",
"default_placement": "",
"placement_tags": [],
"bucket_quota": {
"enabled": false,
"check_on_raw": false,
"max_size": -1,
"max_size_kb": 0,
"max_objects": -1
},
"user_quota": {
"enabled": false,
"check_on_raw": false,
"max_size": -1,
"max_size_kb": 0,
"max_objects": -1
},
"temp_url_keys": [],
"type": "rgw"
}

Set the zone's system keys and update the period

[root@ceph101 ~]# radosgw-admin zone modify --rgw-zone=shanghai --access-key=LPTHGKYO5ULI48Q88AWF --secret=jGFoORTVt72frRYsmcnPOpGXnz652Dl3C2IeBLN8

[root@ceph101 ~]# radosgw-admin period update --commit
{
"id": "064c5b5e-eb87-462c-aa37-c420ddd68b23",
"epoch": 2,
"predecessor_uuid": "1abfe2d0-7453-4c01-881b-3db7f53f15c1",
"sync_status": [],
"period_map": {
"id": "064c5b5e-eb87-462c-aa37-c420ddd68b23",
"zonegroups": [
{
"id": "2b3f1ea0-73e7-4f17-b4f9-1acf5d5c6033",
"name": "cn",
"api_name": "cn",
"is_master": "true",
"endpoints": [
"http://172.16.143.201:8080"
],
"hostnames": [],
"hostnames_s3website": [],
"master_zone": "9c655173-6346-47e7-9759-5e5d32aa017d",
"zones": [
{
"id": "9c655173-6346-47e7-9759-5e5d32aa017d",
"name": "shanghai",
"endpoints": [
"http://172.16.143.201:8080"
],
"log_meta": "false",
"log_data": "false",
"bucket_index_max_shards": 0,
"read_only": "false",
"tier_type": "",
"sync_from_all": "true",
"sync_from": []
}
],
"placement_targets": [
{
"name": "default-placement",
"tags": []
}
],
"default_placement": "default-placement",
"realm_id": "ba638e8a-8a33-4607-8fe3-13aa69dd1758"
}
],
"short_zone_ids": [
{
"key": "9c655173-6346-47e7-9759-5e5d32aa017d",
"val": 1556250220
}
]
},
"master_zonegroup": "2b3f1ea0-73e7-4f17-b4f9-1acf5d5c6033",
"master_zone": "9c655173-6346-47e7-9759-5e5d32aa017d",
"period_config": {
"bucket_quota": {
"enabled": false,
"check_on_raw": false,
"max_size": -1,
"max_size_kb": 0,
"max_objects": -1
},
"user_quota": {
"enabled": false,
"check_on_raw": false,
"max_size": -1,
"max_size_kb": 0,
"max_objects": -1
}
},
"realm_id": "ba638e8a-8a33-4607-8fe3-13aa69dd1758",
"realm_name": "realm100",
"realm_epoch": 2
}

Delete the default zone and zonegroup

[root@ceph101 ~]# radosgw-admin zonegroup remove --rgw-zonegroup=default --rgw-zone=default
[root@ceph101 ~]# radosgw-admin period update --commit
[root@ceph101 ~]# radosgw-admin zone delete --rgw-zone=default
[root@ceph101 ~]# radosgw-admin period update --commit
[root@ceph101 ~]# radosgw-admin zonegroup delete --rgw-zonegroup=default
[root@ceph101 ~]# radosgw-admin period update --commit

Delete the default pools

[root@ceph101 ~]# ceph osd pool delete default.rgw.control default.rgw.control --yes-i-really-really-mean-it
pool 'default.rgw.control' removed
[root@ceph101 ~]# ceph osd pool delete default.rgw.meta default.rgw.meta --yes-i-really-really-mean-it
pool 'default.rgw.meta' removed
[root@ceph101 ~]# ceph osd pool delete default.rgw.log default.rgw.log --yes-i-really-really-mean-it
pool 'default.rgw.log' removed

Edit the RGW configuration and add rgw_zone = shanghai

[root@ceph101 ~]# vim /etc/ceph/ceph.conf

[global]
fsid = d6e42188-9871-471b-9db0-957f47893902



mon initial members = ceph101
mon host = 172.16.143.201

public network = 172.16.143.0/24
cluster network = 172.16.140.0/24


mon allow pool delete = true


[client.rgw.ceph101]
host = ceph101
keyring = /var/lib/ceph/radosgw/ceph-rgw.ceph101/keyring
log file = /var/log/ceph/ceph-rgw-ceph101.log
rgw frontends = civetweb port=172.16.143.201:8080 num_threads=100
rgw_zone = shanghai

Restart the RGW and check that the new pools were created

[root@ceph101 ~]# systemctl restart ceph-radosgw@rgw.ceph101
[root@ceph101 ~]# ceph osd pool ls
.rgw.root
shanghai.rgw.control
shanghai.rgw.meta
shanghai.rgw.log

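A quick check that the master zone's RGW answers on its endpoint before configuring the secondary; a sketch:

curl http://172.16.143.201:8080
# An anonymous request should return a ListAllMyBuckets XML response.
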
On the secondary zone node, perform the following configuration.

Pull the realm and set realm100 as the default realm

[root@ceph102 ~]# radosgw-admin realm pull --url=http://172.16.143.201:8080 --access-key=LPTHGKYO5ULI48Q88AWF --secret=jGFoORTVt72frRYsmcnPOpGXnz652Dl3C2IeBLN8
2019-03-09 21:25:10.377485 7fa55aca2dc0 1 found existing latest_epoch 2 >= given epoch 2, returning r=-17
{
"id": "ba638e8a-8a33-4607-8fe3-13aa69dd1758",
"name": "realm100",
"current_period": "064c5b5e-eb87-462c-aa37-c420ddd68b23",
"epoch": 2
}

[root@ceph102 ~]# radosgw-admin realm default --rgw-realm=realm100

Update the period

[root@ceph102 ~]# radosgw-admin period pull --url=http://172.16.143.201:8080 --access-key=LPTHGKYO5ULI48Q88AWF --secret=jGFoORTVt72frRYsmcnPOpGXnz652Dl3C2IeBLN8
{
"id": "064c5b5e-eb87-462c-aa37-c420ddd68b23",
"epoch": 2,
"predecessor_uuid": "1abfe2d0-7453-4c01-881b-3db7f53f15c1",
"sync_status": [],
"period_map": {
"id": "064c5b5e-eb87-462c-aa37-c420ddd68b23",
"zonegroups": [
{
"id": "2b3f1ea0-73e7-4f17-b4f9-1acf5d5c6033",
"name": "cn",
"api_name": "cn",
"is_master": "true",
"endpoints": [
"http://172.16.143.201:8080"
],
"hostnames": [],
"hostnames_s3website": [],
"master_zone": "9c655173-6346-47e7-9759-5e5d32aa017d",
"zones": [
{
"id": "9c655173-6346-47e7-9759-5e5d32aa017d",
"name": "shanghai",
"endpoints": [
"http://172.16.143.201:8080"
],
"log_meta": "false",
"log_data": "false",
"bucket_index_max_shards": 0,
"read_only": "false",
"tier_type": "",
"sync_from_all": "true",
"sync_from": []
}
],
"placement_targets": [
{
"name": "default-placement",
"tags": []
}
],
"default_placement": "default-placement",
"realm_id": "ba638e8a-8a33-4607-8fe3-13aa69dd1758"
}
],
"short_zone_ids": [
{
"key": "9c655173-6346-47e7-9759-5e5d32aa017d",
"val": 1556250220
}
]
},
"master_zonegroup": "2b3f1ea0-73e7-4f17-b4f9-1acf5d5c6033",
"master_zone": "9c655173-6346-47e7-9759-5e5d32aa017d",
"period_config": {
"bucket_quota": {
"enabled": false,
"check_on_raw": false,
"max_size": -1,
"max_size_kb": 0,
"max_objects": -1
},
"user_quota": {
"enabled": false,
"check_on_raw": false,
"max_size": -1,
"max_size_kb": 0,
"max_objects": -1
}
},
"realm_id": "ba638e8a-8a33-4607-8fe3-13aa69dd1758",
"realm_name": "realm100",
"realm_epoch": 2
}

Create the secondary zone

[root@ceph102 ~]# radosgw-admin zone create --rgw-zonegroup=cn --rgw-zone=beijing --endpoints=http://172.16.143.202:8080 --access-key=LPTHGKYO5ULI48Q88AWF --secret=jGFoORTVt72frRYsmcnPOpGXnz652Dl3C2IeBLN8
2019-03-09 21:30:24.323527 7f2441e45dc0 0 failed reading obj info from .rgw.root:zone_info.9c655173-6346-47e7-9759-5e5d32aa017d: (2) No such file or directory
2019-03-09 21:30:24.323577 7f2441e45dc0 0 WARNING: could not read zone params for zone id=9c655173-6346-47e7-9759-5e5d32aa017d name=shanghai
{
"id": "bf59d999-1561-4f7e-a874-9de718d4c31b",
"name": "beijing",
"domain_root": "beijing.rgw.meta:root",
"control_pool": "beijing.rgw.control",
"gc_pool": "beijing.rgw.log:gc",
"lc_pool": "beijing.rgw.log:lc",
"log_pool": "beijing.rgw.log",
"intent_log_pool": "beijing.rgw.log:intent",
"usage_log_pool": "beijing.rgw.log:usage",
"reshard_pool": "beijing.rgw.log:reshard",
"user_keys_pool": "beijing.rgw.meta:users.keys",
"user_email_pool": "beijing.rgw.meta:users.email",
"user_swift_pool": "beijing.rgw.meta:users.swift",
"user_uid_pool": "beijing.rgw.meta:users.uid",
"system_key": {
"access_key": "LPTHGKYO5ULI48Q88AWF",
"secret_key": "jGFoORTVt72frRYsmcnPOpGXnz652Dl3C2IeBLN8"
},
"placement_pools": [
{
"key": "default-placement",
"val": {
"index_pool": "beijing.rgw.buckets.index",
"data_pool": "beijing.rgw.buckets.data",
"data_extra_pool": "beijing.rgw.buckets.non-ec",
"index_type": 0,
"compression": ""
}
}
],
"metadata_heap": "",
"tier_config": [],
"realm_id": "ba638e8a-8a33-4607-8fe3-13aa69dd1758"
}

Delete the default zone, the default zonegroup, and the default pools

[root@ceph102 ~]# radosgw-admin zone delete --rgw-zone=default
2019-03-09 21:31:23.577581 7f2e6c528dc0 0 zone id ce963a6e-7d58-426d-b07a-a2af2983379a is not a part of zonegroup cn
[root@ceph102 ~]# radosgw-admin zonegroup list
{
"default_info": "2b3f1ea0-73e7-4f17-b4f9-1acf5d5c6033",
"zonegroups": [
"cn",
"default"
]
}

[root@ceph102 ~]# radosgw-admin zonegroup delete --rgw-zonegroup=default

[root@ceph102 ~]# ceph osd pool delete default.rgw.control default.rgw.control --yes-i-really-really-mean-it
pool 'default.rgw.control' removed
[root@ceph102 ~]# ceph osd pool delete default.rgw.meta default.rgw.meta --yes-i-really-really-mean-it
pool 'default.rgw.meta' removed
[root@ceph102 ~]# ceph osd pool delete default.rgw.log default.rgw.log --yes-i-really-really-mean-it
pool 'default.rgw.log' removed

Edit the RGW configuration and add rgw_zone = beijing

[root@ceph102 ~]# vim /etc/ceph/ceph.conf

[client.rgw.ceph102]
host = ceph102
keyring = /var/lib/ceph/radosgw/ceph-rgw.ceph102/keyring
log file = /var/log/ceph/ceph-rgw-ceph102.log
rgw frontends = civetweb port=172.16.143.202:8080 num_threads=100
rgw_zone = beijing

Restart the RGW and check that the new pools were created

[root@ceph102 ~]# systemctl restart ceph-radosgw@rgw.ceph102
[root@ceph102 ~]# ceph osd pool ls
.rgw.root
beijing.rgw.control
beijing.rgw.meta
beijing.rgw.log

Update the period

[root@ceph102 ~]# radosgw-admin period update --commit
2019-03-09 21:36:44.462809 7f23c756bdc0 1 Cannot find zone id=bf59d999-1561-4f7e-a874-9de718d4c31b (name=beijing), switching to local zonegroup configuration
Sending period to new master zone 9c655173-6346-47e7-9759-5e5d32aa017d
{
"id": "064c5b5e-eb87-462c-aa37-c420ddd68b23",
"epoch": 3,
"predecessor_uuid": "1abfe2d0-7453-4c01-881b-3db7f53f15c1",
"sync_status": [],
"period_map": {
"id": "064c5b5e-eb87-462c-aa37-c420ddd68b23",
"zonegroups": [
{
"id": "2b3f1ea0-73e7-4f17-b4f9-1acf5d5c6033",
"name": "cn",
"api_name": "cn",
"is_master": "true",
"endpoints": [
"http://172.16.143.201:8080"
],
"hostnames": [],
"hostnames_s3website": [],
"master_zone": "9c655173-6346-47e7-9759-5e5d32aa017d",
"zones": [
{
"id": "9c655173-6346-47e7-9759-5e5d32aa017d",
"name": "shanghai",
"endpoints": [
"http://172.16.143.201:8080"
],
"log_meta": "false",
"log_data": "true",
"bucket_index_max_shards": 0,
"read_only": "false",
"tier_type": "",
"sync_from_all": "true",
"sync_from": []
},
{
"id": "bf59d999-1561-4f7e-a874-9de718d4c31b",
"name": "beijing",
"endpoints": [],
"log_meta": "false",
"log_data": "true",
"bucket_index_max_shards": 0,
"read_only": "false",
"tier_type": "",
"sync_from_all": "true",
"sync_from": []
}
],
"placement_targets": [
{
"name": "default-placement",
"tags": []
}
],
"default_placement": "default-placement",
"realm_id": "ba638e8a-8a33-4607-8fe3-13aa69dd1758"
}
],
"short_zone_ids": [
{
"key": "9c655173-6346-47e7-9759-5e5d32aa017d",
"val": 1556250220
},
{
"key": "bf59d999-1561-4f7e-a874-9de718d4c31b",
"val": 3557185953
}
]
},
"master_zonegroup": "2b3f1ea0-73e7-4f17-b4f9-1acf5d5c6033",
"master_zone": "9c655173-6346-47e7-9759-5e5d32aa017d",
"period_config": {
"bucket_quota": {
"enabled": false,
"check_on_raw": false,
"max_size": -1,
"max_size_kb": 0,
"max_objects": -1
},
"user_quota": {
"enabled": false,
"check_on_raw": false,
"max_size": -1,
"max_size_kb": 0,
"max_objects": -1
}
},
"realm_id": "ba638e8a-8a33-4607-8fe3-13aa69dd1758",
"realm_name": "realm100",
"realm_epoch": 2
}

Check the sync status

[root@ceph101 ~]# radosgw-admin sync status
realm ba638e8a-8a33-4607-8fe3-13aa69dd1758 (realm100)
zonegroup 2b3f1ea0-73e7-4f17-b4f9-1acf5d5c6033 (cn)
zone 9c655173-6346-47e7-9759-5e5d32aa017d (shanghai)
metadata sync no sync (zone is master)
data sync source: 0a2f706f-cc81-44f3-adf6-0d79c3e362ac (beijing)
syncing
full sync: 0/128 shards
incremental sync: 128/128 shards
data is caught up with source
[root@ceph102 ~]# radosgw-admin sync status
realm ba638e8a-8a33-4607-8fe3-13aa69dd1758 (realm100)
zonegroup 2b3f1ea0-73e7-4f17-b4f9-1acf5d5c6033 (cn)
zone 0a2f706f-cc81-44f3-adf6-0d79c3e362ac (beijing)
metadata sync syncing
full sync: 0/64 shards
incremental sync: 64/64 shards
metadata is caught up with master
data sync source: 9c655173-6346-47e7-9759-5e5d32aa017d (shanghai)
syncing
full sync: 0/128 shards
incremental sync: 128/128 shards
data is caught up with source

Verification

Create a test user in the master zone, then check it from the secondary zone

[root@ceph101 ~]# radosgw-admin user create --uid test --display-name="test user"
2019-03-09 22:16:27.795146 7f022820cdc0 0 WARNING: can't generate connection for zone 0a2f706f-cc81-44f3-adf6-0d79c3e362ac id beijing: no endpoints defined
{
"user_id": "test",
"display_name": "test user",
"email": "",
"suspended": 0,
"max_buckets": 1000,
"auid": 0,
"subusers": [],
"keys": [
{
"user": "test",
"access_key": "9R9YENBPPDZ88TQGP32D",
"secret_key": "B5wtnLKNloIVxmSDISut6scU8MClv52dfInb3Omh"
}
],
"swift_keys": [],
"caps": [],
"op_mask": "read, write, delete",
"default_placement": "",
"placement_tags": [],
"bucket_quota": {
"enabled": false,
"check_on_raw": false,
"max_size": -1,
"max_size_kb": 0,
"max_objects": -1
},
"user_quota": {
"enabled": false,
"check_on_raw": false,
"max_size": -1,
"max_size_kb": 0,
"max_objects": -1
},
"temp_url_keys": [],
"type": "rgw"
}

List users in the secondary zone

[root@ceph102 ~]# radosgw-admin user list
[
"syncuser",
"test"
]

Create a test2 user in the secondary zone, then check it from the master zone

[root@ceph102 ~]# radosgw-admin user create --uid test@2 --display-name="test2 user"
{
"user_id": "test@2",
"display_name": "test2 user",
"email": "",
"suspended": 0,
"max_buckets": 1000,
"auid": 0,
"subusers": [],
"keys": [
{
"user": "test@2",
"access_key": "9413B8F517W6LTEKN0H6",
"secret_key": "lg9vzVhpS2gYdXDnEOEUv3uCwkz0wkyuuBSnh8dl"
}
],
"swift_keys": [],
"caps": [],
"op_mask": "read, write, delete",
"default_placement": "",
"placement_tags": [],
"bucket_quota": {
"enabled": false,
"check_on_raw": false,
"max_size": -1,
"max_size_kb": 0,
"max_objects": -1
},
"user_quota": {
"enabled": false,
"check_on_raw": false,
"max_size": -1,
"max_size_kb": 0,
"max_objects": -1
},
"temp_url_keys": [],
"type": "rgw"
}

[root@ceph102 ~]# radosgw-admin user list
[
"syncuser",
"test@2",
"test"
]

Check from the master zone

[root@ceph101 ~]# radosgw-admin user list
[
"syncuser",
"test"
]

Users created in the master zone are also visible in the secondary zone,
while users created in the secondary zone are not visible in the master zone.

Using the test user, create a bucket named bucket1 in the master zone

[root@ceph101 ~]# s3cmd mb s3://bucket1
Bucket 's3://bucket1/' created
[root@ceph101 ~]# radosgw-admin bucket list
[
"bucket1"
]

Check from the secondary zone

[root@ceph102 ~]# radosgw-admin bucket list
[
"bucket1"
]

Using the test user, create a bucket named bucket2 in the secondary zone

[root@ceph102 ~]# s3cmd mb s3://bucket2
Bucket 's3://bucket2/' created
[root@ceph102 ~]# radosgw-admin bucket list
[
"bucket1",
"bucket2"
]

Check from the master zone

[root@ceph101 ~]# radosgw-admin bucket list
[
"bucket1",
"bucket2"
]

Upload a file in the master zone

[root@ceph101 ~]# s3cmd put Python-3.4.9.tgz s3://bucket1/python3.4.9.tgz
upload: 'Python-3.4.9.tgz' -> 's3://bucket1/python3.4.9.tgz' [part 1 of 2, 15MB] [1 of 1]
15728640 of 15728640 100% in 2s 6.26 MB/s done
upload: 'Python-3.4.9.tgz' -> 's3://bucket1/python3.4.9.tgz' [part 2 of 2, 3MB] [1 of 1]
3955465 of 3955465 100% in 0s 32.73 MB/s done

[root@ceph101 ~]# md5sum Python-3.4.9.tgz
c706902881ef95e27e59f13fabbcdcac Python-3.4.9.tgz

Check from the secondary zone

[root@ceph102 ~]# s3cmd ls s3://bucket1
2019-03-09 15:34 19684105 s3://bucket1/python3.4.9.tgz
[root@ceph102 ~]# s3cmd get s3://bucket1/python3.4.9.tgz Python3.4.9.tgz
download: 's3://bucket1/python3.4.9.tgz' -> 'Python3.4.9.tgz' [1 of 1]
19684105 of 19684105 100% in 0s 51.81 MB/s done
[root@ceph102 ~]# md5sum Python3.4.9.tgz
c706902881ef95e27e59f13fabbcdcac Python3.4.9.tgz

Upload a file in the secondary zone

[root@ceph102 ~]# s3cmd put anaconda-ks.cfg s3://bucket2/anaconda-ks.cfg
upload: 'anaconda-ks.cfg' -> 's3://bucket2/anaconda-ks.cfg' [1 of 1]
1030 of 1030 100% in 0s 8.28 kB/s done
[root@ceph102 ~]# md5sum anaconda-ks.cfg
1627b1ce985ce5befdd0d1cb0e6164ae anaconda-ks.cfg

Check from the master zone

[root@ceph101 tmp]# s3cmd ls s3://bucket2
2019-03-09 15:37 1030 s3://bucket2/anaconda-ks.cfg
[root@ceph101 tmp]# s3cmd get s3://bucket2/anaconda-ks.cfg anaconda-ks.cfg
download: 's3://bucket2/anaconda-ks.cfg' -> 'anaconda-ks.cfg' [1 of 1]
1030 of 1030 100% in 0s 23.32 kB/s done
[root@ceph101 tmp]# md5sum anaconda-ks.cfg
1627b1ce985ce5befdd0d1cb0e6164ae anaconda-ks.cfg

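After these uploads, the sync status on the secondary should again report that data is caught up with the source; a sketch (the data sync status subcommand with --source-zone is an assumption about the radosgw-admin options available in this release):

radosgw-admin sync status
radosgw-admin data sync status --source-zone=shanghai
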
Stop the master zone's RGW, then try to create a bucket in the secondary zone

[root@ceph101 tmp]# systemctl stop ceph-radosgw@rgw.ceph101
[root@ceph102 ~]# s3cmd mb s3://bucket4
WARNING: Retrying failed request: / (503 (ServiceUnavailable))
WARNING: Waiting 3 sec...
WARNING: Retrying failed request: / (503 (ServiceUnavailable))
WARNING: Waiting 6 sec...
WARNING: Retrying failed request: / (503 (ServiceUnavailable))
WARNING: Waiting 9 sec...

As shown, a bucket cannot be created in the secondary zone while the master is down; when buckets were created in the secondary zone earlier, those requests were in fact forwarded to the master zone.

Conversely, with the secondary zone's RGW stopped, buckets can still be created in the master zone.

[root@ceph102 ~]# systemctl stop ceph-radosgw@rgw.ceph102
[root@ceph101 tmp]# s3cmd mb s3://bucket5
Bucket 's3://bucket5/' created

References

  • http://docs.ceph.com/docs/luminous/radosgw/multisite/
  • https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/2/html/object_gateway_guide_for_red_hat_enterprise_linux/multi_site
  • https://www.jianshu.com/p/31a6f8df9a8f
  • http://stor.51cto.com/art/201807/578337.htm

CentOS disk expansion

Posted on 2019-01-30

Overview

This post describes how to expand a CentOS 7 virtual machine's disk in a VMware environment.
CentOS 7 manages its disks with LVM by default, and the root filesystem is XFS on an LV.

Before expansion

The state before expansion:

The VM has one 40G disk; the root filesystem sits on the centos-root LV, which is 37.5G.

Expansion

Shut down the VM and grow the virtual disk through VMware.

After this step the disk has grown to 50G.

With the disk now at 50G, use parted to turn the remaining 10G into an LVM partition.


Turn the new partition into a PV and add it to the VG.

Extend the LV.

Use xfs_growfs to grow the XFS filesystem online.

The root filesystem ends up 10G larger; the full command sequence is sketched below.
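
As a sketch of the whole sequence (the device, partition number, and VG/LV names are assumptions based on a default CentOS 7 install):

# Create an LVM partition from the new 10G at the end of the disk
parted -s /dev/sda mkpart primary 40GiB 100%
parted -s /dev/sda set 3 lvm on
partprobe /dev/sda

# Turn the new partition into a PV and add it to the existing VG
pvcreate /dev/sda3
vgextend centos /dev/sda3

# Give all free space to the root LV and grow the XFS filesystem online
lvextend -l +100%FREE /dev/centos/root
xfs_growfs /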

Enabling the Ceph mgr restful plugin

Posted on 2018-12-10

Overview

This post shows how to enable the Ceph mgr restful plugin and fetch Ceph data through its RESTful API.

The environment is as follows:

[root@ceph11 ~]# ceph -s
cluster:
id: f8b6141c-5039-464d-be1e-61816208a006
health: HEALTH_OK

services:
mon: 3 daemons, quorum ceph12,ceph13,ceph14
mgr: ceph14(active), standbys: ceph12, ceph13
mds: cephfs-1/1/1 up {0=umstor12=up:active}
osd: 7 osds: 7 up, 7 in
rgw: 3 daemons active
rgw-nfs: 3 daemons active

data:
pools: 9 pools, 762 pgs
objects: 8835 objects, 5352 MB
usage: 24008 MB used, 6496 GB / 6519 GB avail
pgs: 762 active+clean

io:
client: 127 B/s wr, 0 op/s rd, 1 op/s wr
recovery: 71 B/s, 0 objects/s

Enable the plugin

ceph mgr module enable restful

The restful service does not start right away and port 8003 is not listening; to start it, an SSL certificate has to be configured first.

The following command generates a self-signed certificate:

ceph restful create-self-signed-cert

Now, on the active mgr node (ceph14), the restful service is listening:

[root@ceph14 ~]# netstat -nltp | grep 8003
tcp6 0 0 :::8003 :::* LISTEN 3551433/ceph-mgr

By default, the currently active ceph-mgr daemon binds port 8003 on any available IPv4 or IPv6 address of the host.

Specify the IP and port

ceph config-key set mgr/restful/server_addr $IP
ceph config-key set mgr/restful/server_port $PORT

If no IP is configured, restful listens on all addresses.
If no port is configured, restful listens on port 8003.

The settings above apply to all mgr daemons; to configure a specific mgr, include that mgr's hostname in the key:

ceph config-key set mgr/restful/$name/server_addr $IP
ceph config-key set mgr/restful/$name/server_port $PORT

Create a user

[root@ceph14 ~]# ceph restful create-key admin01
144276ee-1fdc-48ca-a358-0fb59bbb689f

This user and key are used later when calling the restful API.

Verification

With the restful plugin enabled, it can be accessed and verified from a browser.

https://192.168.180.138:8003/

{
"api_version": 1,
"auth": "Use \"ceph restful create-key <key>\" to create a key pair, pass it as HTTP Basic auth to authenticate",
"doc": "See /doc endpoint",
"info": "Ceph Manager RESTful API server"
}

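The same API can be queried from the command line with curl, passing the key created above as HTTP basic auth; a sketch (-k skips verification of the self-signed certificate):

curl -k -u admin01:144276ee-1fdc-48ca-a358-0fb59bbb689f https://192.168.180.138:8003/
curl -k -u admin01:144276ee-1fdc-48ca-a358-0fb59bbb689f https://192.168.180.138:8003/pool
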
Get information about all pools

https://192.168.180.138:8003/pool

[
{
"application_metadata": {
"rgw": {}
},
"auid": 0,
"cache_min_evict_age": 0,
"cache_min_flush_age": 0,
"cache_mode": "none",
"cache_target_dirty_high_ratio_micro": 600000,
"cache_target_dirty_ratio_micro": 400000,
"cache_target_full_ratio_micro": 800000,
"crash_replay_interval": 0,
"crush_rule": 1,
"erasure_code_profile": "",
"expected_num_objects": 0,
"fast_read": false,
"flags": 1,
"flags_names": "hashpspool",
"grade_table": [],
"hit_set_count": 0,
"hit_set_grade_decay_rate": 0,
"hit_set_params": {
"type": "none"
},
"hit_set_period": 0,
"hit_set_search_last_n": 0,
"last_change": "61",
"last_force_op_resend": "0",
"last_force_op_resend_preluminous": "0",
"min_read_recency_for_promote": 0,
"min_size": 2,
"min_write_recency_for_promote": 0,
"object_hash": 2,
"options": {},
"pg_num": 8,
"pgp_num": 8,
"pool": 1,
"pool_name": ".rgw.root",
"pool_snaps": [],
"quota_max_bytes": 0,
"quota_max_objects": 0,
"read_tier": -1,
"removed_snaps": "[]",
"size": 3,
"snap_epoch": 0,
"snap_mode": "selfmanaged",
"snap_seq": 0,
"stripe_width": 0,
"target_max_bytes": 0,
"target_max_objects": 0,
"tier_of": -1,
"tiers": [],
"type": 1,
"use_gmt_hitset": false,
"write_tier": -1
},
{
"application_metadata": {
"rgw": {}
},
"auid": -1,
"cache_min_evict_age": 0,
"cache_min_flush_age": 0,
"cache_mode": "none",
"cache_target_dirty_high_ratio_micro": 600000,
"cache_target_dirty_ratio_micro": 400000,
"cache_target_full_ratio_micro": 800000,
"crash_replay_interval": 0,
"crush_rule": 1,
"erasure_code_profile": "",
"expected_num_objects": 0,
"fast_read": false,
"flags": 1,
"flags_names": "hashpspool",
"grade_table": [],
"hit_set_count": 0,
"hit_set_grade_decay_rate": 0,
"hit_set_params": {
"type": "none"
},
"hit_set_period": 0,
"hit_set_search_last_n": 0,
"last_change": "64",
"last_force_op_resend": "0",
"last_force_op_resend_preluminous": "0",
"min_read_recency_for_promote": 0,
"min_size": 2,
"min_write_recency_for_promote": 0,
"object_hash": 2,
"options": {},
"pg_num": 8,
"pgp_num": 8,
"pool": 2,
"pool_name": "default.rgw.control",
"pool_snaps": [],
"quota_max_bytes": 0,
"quota_max_objects": 0,
"read_tier": -1,
"removed_snaps": "[]",
"size": 3,
"snap_epoch": 0,
"snap_mode": "selfmanaged",
"snap_seq": 0,
"stripe_width": 0,
"target_max_bytes": 0,
"target_max_objects": 0,
"tier_of": -1,
"tiers": [],
"type": 1,
"use_gmt_hitset": true,
"write_tier": -1
},
{
"application_metadata": {
"rgw": {}
},
"auid": -1,
"cache_min_evict_age": 0,
"cache_min_flush_age": 0,
"cache_mode": "none",
"cache_target_dirty_high_ratio_micro": 600000,
"cache_target_dirty_ratio_micro": 400000,
"cache_target_full_ratio_micro": 800000,
"crash_replay_interval": 0,
"crush_rule": 1,
"erasure_code_profile": "",
"expected_num_objects": 0,
"fast_read": false,
"flags": 1,
"flags_names": "hashpspool",
"grade_table": [],
"hit_set_count": 0,
"hit_set_grade_decay_rate": 0,
"hit_set_params": {
"type": "none"
},
"hit_set_period": 0,
"hit_set_search_last_n": 0,
"last_change": "67",
"last_force_op_resend": "0",
"last_force_op_resend_preluminous": "0",
"min_read_recency_for_promote": 0,
"min_size": 2,
"min_write_recency_for_promote": 0,
"object_hash": 2,
"options": {},
"pg_num": 8,
"pgp_num": 8,
"pool": 3,
"pool_name": "default.rgw.meta",
"pool_snaps": [],
"quota_max_bytes": 0,
"quota_max_objects": 0,
"read_tier": -1,
"removed_snaps": "[]",
"size": 3,
"snap_epoch": 0,
"snap_mode": "selfmanaged",
"snap_seq": 0,
"stripe_width": 0,
"target_max_bytes": 0,
"target_max_objects": 0,
"tier_of": -1,
"tiers": [],
"type": 1,
"use_gmt_hitset": true,
"write_tier": -1
},
{
"application_metadata": {
"rgw": {}
},
"auid": -1,
"cache_min_evict_age": 0,
"cache_min_flush_age": 0,
"cache_mode": "none",
"cache_target_dirty_high_ratio_micro": 600000,
"cache_target_dirty_ratio_micro": 400000,
"cache_target_full_ratio_micro": 800000,
"crash_replay_interval": 0,
"crush_rule": 1,
"erasure_code_profile": "",
"expected_num_objects": 0,
"fast_read": false,
"flags": 1,
"flags_names": "hashpspool",
"grade_table": [],
"hit_set_count": 0,
"hit_set_grade_decay_rate": 0,
"hit_set_params": {
"type": "none"
},
"hit_set_period": 0,
"hit_set_search_last_n": 0,
"last_change": "70",
"last_force_op_resend": "0",
"last_force_op_resend_preluminous": "0",
"min_read_recency_for_promote": 0,
"min_size": 2,
"min_write_recency_for_promote": 0,
"object_hash": 2,
"options": {},
"pg_num": 8,
"pgp_num": 8,
"pool": 4,
"pool_name": "default.rgw.log",
"pool_snaps": [],
"quota_max_bytes": 0,
"quota_max_objects": 0,
"read_tier": -1,
"removed_snaps": "[]",
"size": 3,
"snap_epoch": 0,
"snap_mode": "selfmanaged",
"snap_seq": 0,
"stripe_width": 0,
"target_max_bytes": 0,
"target_max_objects": 0,
"tier_of": -1,
"tiers": [],
"type": 1,
"use_gmt_hitset": true,
"write_tier": -1
},
{
"application_metadata": {
"rgw": {}
},
"auid": 0,
"cache_min_evict_age": 0,
"cache_min_flush_age": 0,
"cache_mode": "none",
"cache_target_dirty_high_ratio_micro": 600000,
"cache_target_dirty_ratio_micro": 400000,
"cache_target_full_ratio_micro": 800000,
"crash_replay_interval": 0,
"crush_rule": 1,
"erasure_code_profile": "rule_rgw",
"expected_num_objects": 0,
"fast_read": false,
"flags": 1,
"flags_names": "hashpspool",
"grade_table": [],
"hit_set_count": 0,
"hit_set_grade_decay_rate": 0,
"hit_set_params": {
"type": "none"
},
"hit_set_period": 0,
"hit_set_search_last_n": 0,
"last_change": "57",
"last_force_op_resend": "0",
"last_force_op_resend_preluminous": "0",
"min_read_recency_for_promote": 0,
"min_size": 2,
"min_write_recency_for_promote": 0,
"object_hash": 2,
"options": {},
"pg_num": 16,
"pgp_num": 16,
"pool": 5,
"pool_name": "default.rgw.buckets.index",
"pool_snaps": [],
"quota_max_bytes": 0,
"quota_max_objects": 0,
"read_tier": -1,
"removed_snaps": "[]",
"size": 3,
"snap_epoch": 0,
"snap_mode": "selfmanaged",
"snap_seq": 0,
"stripe_width": 0,
"target_max_bytes": 0,
"target_max_objects": 0,
"tier_of": -1,
"tiers": [],
"type": 1,
"use_gmt_hitset": true,
"write_tier": -1
},
{
"application_metadata": {
"rgw": {}
},
"auid": 0,
"cache_min_evict_age": 0,
"cache_min_flush_age": 0,
"cache_mode": "none",
"cache_target_dirty_high_ratio_micro": 600000,
"cache_target_dirty_ratio_micro": 400000,
"cache_target_full_ratio_micro": 800000,
"crash_replay_interval": 0,
"crush_rule": 1,
"erasure_code_profile": "rule_rgw",
"expected_num_objects": 0,
"fast_read": false,
"flags": 1,
"flags_names": "hashpspool",
"grade_table": [],
"hit_set_count": 0,
"hit_set_grade_decay_rate": 0,
"hit_set_params": {
"type": "none"
},
"hit_set_period": 0,
"hit_set_search_last_n": 0,
"last_change": "790",
"last_force_op_resend": "0",
"last_force_op_resend_preluminous": "788",
"min_read_recency_for_promote": 0,
"min_size": 2,
"min_write_recency_for_promote": 0,
"object_hash": 2,
"options": {},
"pg_num": 514,
"pgp_num": 514,
"pool": 6,
"pool_name": "default.rgw.buckets.data",
"pool_snaps": [],
"quota_max_bytes": 0,
"quota_max_objects": 0,
"read_tier": -1,
"removed_snaps": "[]",
"size": 3,
"snap_epoch": 0,
"snap_mode": "selfmanaged",
"snap_seq": 0,
"stripe_width": 0,
"target_max_bytes": 0,
"target_max_objects": 0,
"tier_of": -1,
"tiers": [],
"type": 1,
"use_gmt_hitset": true,
"write_tier": -1
},
{
"application_metadata": {
"cephfs": {}
},
"auid": 0,
"cache_min_evict_age": 0,
"cache_min_flush_age": 0,
"cache_mode": "none",
"cache_target_dirty_high_ratio_micro": 600000,
"cache_target_dirty_ratio_micro": 400000,
"cache_target_full_ratio_micro": 800000,
"crash_replay_interval": 0,
"crush_rule": 0,
"erasure_code_profile": "",
"expected_num_objects": 0,
"fast_read": false,
"flags": 1,
"flags_names": "hashpspool",
"grade_table": [],
"hit_set_count": 0,
"hit_set_grade_decay_rate": 0,
"hit_set_params": {
"type": "none"
},
"hit_set_period": 0,
"hit_set_search_last_n": 0,
"last_change": "136",
"last_force_op_resend": "0",
"last_force_op_resend_preluminous": "0",
"min_read_recency_for_promote": 0,
"min_size": 2,
"min_write_recency_for_promote": 0,
"object_hash": 2,
"options": {},
"pg_num": 128,
"pgp_num": 128,
"pool": 7,
"pool_name": "fs_data",
"pool_snaps": [],
"quota_max_bytes": 0,
"quota_max_objects": 0,
"read_tier": -1,
"removed_snaps": "[]",
"size": 3,
"snap_epoch": 0,
"snap_mode": "selfmanaged",
"snap_seq": 0,
"stripe_width": 0,
"target_max_bytes": 0,
"target_max_objects": 0,
"tier_of": -1,
"tiers": [],
"type": 1,
"use_gmt_hitset": true,
"write_tier": -1
},
{
"application_metadata": {
"cephfs": {}
},
"auid": 0,
"cache_min_evict_age": 0,
"cache_min_flush_age": 0,
"cache_mode": "none",
"cache_target_dirty_high_ratio_micro": 600000,
"cache_target_dirty_ratio_micro": 400000,
"cache_target_full_ratio_micro": 800000,
"crash_replay_interval": 0,
"crush_rule": 0,
"erasure_code_profile": "",
"expected_num_objects": 0,
"fast_read": false,
"flags": 1,
"flags_names": "hashpspool",
"grade_table": [],
"hit_set_count": 0,
"hit_set_grade_decay_rate": 0,
"hit_set_params": {
"type": "none"
},
"hit_set_period": 0,
"hit_set_search_last_n": 0,
"last_change": "136",
"last_force_op_resend": "0",
"last_force_op_resend_preluminous": "0",
"min_read_recency_for_promote": 0,
"min_size": 2,
"min_write_recency_for_promote": 0,
"object_hash": 2,
"options": {},
"pg_num": 64,
"pgp_num": 64,
"pool": 8,
"pool_name": "fs_metadata",
"pool_snaps": [],
"quota_max_bytes": 0,
"quota_max_objects": 0,
"read_tier": -1,
"removed_snaps": "[]",
"size": 3,
"snap_epoch": 0,
"snap_mode": "selfmanaged",
"snap_seq": 0,
"stripe_width": 0,
"target_max_bytes": 0,
"target_max_objects": 0,
"tier_of": -1,
"tiers": [],
"type": 1,
"use_gmt_hitset": true,
"write_tier": -1
},
{
"application_metadata": {
"rgw": {}
},
"auid": 0,
"cache_min_evict_age": 0,
"cache_min_flush_age": 0,
"cache_mode": "none",
"cache_target_dirty_high_ratio_micro": 600000,
"cache_target_dirty_ratio_micro": 400000,
"cache_target_full_ratio_micro": 800000,
"crash_replay_interval": 0,
"crush_rule": 0,
"erasure_code_profile": "",
"expected_num_objects": 0,
"fast_read": false,
"flags": 1,
"flags_names": "hashpspool",
"grade_table": [],
"hit_set_count": 0,
"hit_set_grade_decay_rate": 0,
"hit_set_params": {
"type": "none"
},
"hit_set_period": 0,
"hit_set_search_last_n": 0,
"last_change": "139",
"last_force_op_resend": "0",
"last_force_op_resend_preluminous": "0",
"min_read_recency_for_promote": 0,
"min_size": 2,
"min_write_recency_for_promote": 0,
"object_hash": 2,
"options": {},
"pg_num": 8,
"pgp_num": 8,
"pool": 9,
"pool_name": "default.rgw.buckets.non-ec",
"pool_snaps": [],
"quota_max_bytes": 0,
"quota_max_objects": 0,
"read_tier": -1,
"removed_snaps": "[]",
"size": 3,
"snap_epoch": 0,
"snap_mode": "selfmanaged",
"snap_seq": 0,
"stripe_width": 0,
"target_max_bytes": 0,
"target_max_objects": 0,
"tier_of": -1,
"tiers": [],
"type": 1,
"use_gmt_hitset": true,
"write_tier": -1
}
]

Calling the API from Python

The ceph mgr restful API can also be called with the requests library; the Python script below retrieves the information for all pools.

#!/usr/bin/env python3

import requests
import urllib3

# The endpoint uses a self-signed certificate: skip verification and silence the warning.
urllib3.disable_warnings()

# HTTP Basic auth with a username/key pair created via "ceph restful create-key"
r = requests.get('https://192.168.180.138:8003/pool', verify=False,
                 auth=('admin', '839df177-560e-421a-95fc-9f6a1c08236e'))
print(r.json())

References

  • https://lnsyyj.github.io/2017/11/27/CEPH-MANAGER-DAEMON-RESTful-plugin/

SQL Join

Posted on 2018-12-09 | | Visitors:

We have two tables, user and class, with the following contents:

MariaDB [jointest]> select * from user;
+------+------+----------+
| id | name | class_id |
+------+------+----------+
| 1 | aaa | 1 |
| 2 | bbb | 1 |
| 3 | ccc | 1 |
| 4 | ddd | 2 |
| 5 | eee | 2 |
| 6 | fff | 3 |
| 7 | ggg | 7 |
| 8 | hhh | 9 |
+------+------+----------+
8 rows in set (0.00 sec)

MariaDB [jointest]> select * from class;
+------+--------+
| id | name |
+------+--------+
| 1 | class1 |
| 2 | class2 |
| 3 | class3 |
| 4 | class4 |
| 5 | class5 |
+------+--------+
5 rows in set (0.00 sec)

inner join

MariaDB [jointest]> select * from user inner join class on user.class_id=class.id;
+------+------+----------+------+--------+
| id | name | class_id | id | name |
+------+------+----------+------+--------+
| 1 | aaa | 1 | 1 | class1 |
| 2 | bbb | 1 | 1 | class1 |
| 3 | ccc | 1 | 1 | class1 |
| 4 | ddd | 2 | 2 | class2 |
| 5 | eee | 2 | 2 | class2 |
| 6 | fff | 3 | 3 | class3 |
+------+------+----------+------+--------+
6 rows in set (0.00 sec)

left join

MariaDB [jointest]> select * from user left join class on user.class_id=class.id;
+------+------+----------+------+--------+
| id | name | class_id | id | name |
+------+------+----------+------+--------+
| 1 | aaa | 1 | 1 | class1 |
| 2 | bbb | 1 | 1 | class1 |
| 3 | ccc | 1 | 1 | class1 |
| 4 | ddd | 2 | 2 | class2 |
| 5 | eee | 2 | 2 | class2 |
| 6 | fff | 3 | 3 | class3 |
| 7 | ggg | 7 | NULL | NULL |
| 8 | hhh | 9 | NULL | NULL |
+------+------+----------+------+--------+
8 rows in set (0.00 sec)

right join

MariaDB [jointest]> select * from user right join class on user.class_id=class.id;
+------+------+----------+------+--------+
| id | name | class_id | id | name |
+------+------+----------+------+--------+
| 1 | aaa | 1 | 1 | class1 |
| 2 | bbb | 1 | 1 | class1 |
| 3 | ccc | 1 | 1 | class1 |
| 4 | ddd | 2 | 2 | class2 |
| 5 | eee | 2 | 2 | class2 |
| 6 | fff | 3 | 3 | class3 |
| NULL | NULL | NULL | 4 | class4 |
| NULL | NULL | NULL | 5 | class5 |
+------+------+----------+------+--------+
8 rows in set (0.00 sec)

full join

MySQL does not support FULL JOIN, but it can be emulated by taking the UNION of a LEFT JOIN and a RIGHT JOIN.

MariaDB [jointest]> select * from user left join class on user.class_id=class.id
-> union
-> select * from user right join class on user.class_id=class.id;
+------+------+----------+------+--------+
| id | name | class_id | id | name |
+------+------+----------+------+--------+
| 1 | aaa | 1 | 1 | class1 |
| 2 | bbb | 1 | 1 | class1 |
| 3 | ccc | 1 | 1 | class1 |
| 4 | ddd | 2 | 2 | class2 |
| 5 | eee | 2 | 2 | class2 |
| 6 | fff | 3 | 3 | class3 |
| 7 | ggg | 7 | NULL | NULL |
| 8 | hhh | 9 | NULL | NULL |
| NULL | NULL | NULL | 4 | class4 |
| NULL | NULL | NULL | 5 | class5 |
+------+------+----------+------+--------+
10 rows in set (0.00 sec)

cross join

The user table has 8 rows and the class table has 5 rows, so the cross join returns 8*5=40 rows.

MariaDB [jointest]> select * from user cross join class;
+------+------+----------+------+--------+
| id | name | class_id | id | name |
+------+------+----------+------+--------+
| 1 | aaa | 1 | 1 | class1 |
| 1 | aaa | 1 | 2 | class2 |
| 1 | aaa | 1 | 3 | class3 |
| 1 | aaa | 1 | 4 | class4 |
| 1 | aaa | 1 | 5 | class5 |
| 2 | bbb | 1 | 1 | class1 |
| 2 | bbb | 1 | 2 | class2 |
| 2 | bbb | 1 | 3 | class3 |
| 2 | bbb | 1 | 4 | class4 |
| 2 | bbb | 1 | 5 | class5 |
| 3 | ccc | 1 | 1 | class1 |
| 3 | ccc | 1 | 2 | class2 |
| 3 | ccc | 1 | 3 | class3 |
| 3 | ccc | 1 | 4 | class4 |
| 3 | ccc | 1 | 5 | class5 |
| 4 | ddd | 2 | 1 | class1 |
| 4 | ddd | 2 | 2 | class2 |
| 4 | ddd | 2 | 3 | class3 |
| 4 | ddd | 2 | 4 | class4 |
| 4 | ddd | 2 | 5 | class5 |
| 5 | eee | 2 | 1 | class1 |
| 5 | eee | 2 | 2 | class2 |
| 5 | eee | 2 | 3 | class3 |
| 5 | eee | 2 | 4 | class4 |
| 5 | eee | 2 | 5 | class5 |
| 6 | fff | 3 | 1 | class1 |
| 6 | fff | 3 | 2 | class2 |
| 6 | fff | 3 | 3 | class3 |
| 6 | fff | 3 | 4 | class4 |
| 6 | fff | 3 | 5 | class5 |
| 7 | ggg | 7 | 1 | class1 |
| 7 | ggg | 7 | 2 | class2 |
| 7 | ggg | 7 | 3 | class3 |
| 7 | ggg | 7 | 4 | class4 |
| 7 | ggg | 7 | 5 | class5 |
| 8 | hhh | 9 | 1 | class1 |
| 8 | hhh | 9 | 2 | class2 |
| 8 | hhh | 9 | 3 | class3 |
| 8 | hhh | 9 | 4 | class4 |
| 8 | hhh | 9 | 5 | class5 |
+------+------+----------+------+--------+
40 rows in set (0.01 sec)

mysqldump
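
The dump below was produced with mysqldump; an invocation along the following lines would generate it (a sketch only — the exact flags and credentials are assumptions, not taken from the post):

# dump the jointest database, including its CREATE DATABASE statement
mysqldump -u root -p --databases jointest > jointest.sql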

-- MySQL dump 10.14  Distrib 5.5.60-MariaDB, for Linux (x86_64)
--
-- Host: localhost Database: jointest
-- ------------------------------------------------------
-- Server version 5.5.60-MariaDB

/*!40101 SET @OLD_CHARACTER_SET_CLIENT=@@CHARACTER_SET_CLIENT */;
/*!40101 SET @OLD_CHARACTER_SET_RESULTS=@@CHARACTER_SET_RESULTS */;
/*!40101 SET @OLD_COLLATION_CONNECTION=@@COLLATION_CONNECTION */;
/*!40101 SET NAMES utf8 */;
/*!40103 SET @OLD_TIME_ZONE=@@TIME_ZONE */;
/*!40103 SET TIME_ZONE='+00:00' */;
/*!40014 SET @OLD_UNIQUE_CHECKS=@@UNIQUE_CHECKS, UNIQUE_CHECKS=0 */;
/*!40014 SET @OLD_FOREIGN_KEY_CHECKS=@@FOREIGN_KEY_CHECKS, FOREIGN_KEY_CHECKS=0 */;
/*!40101 SET @OLD_SQL_MODE=@@SQL_MODE, SQL_MODE='NO_AUTO_VALUE_ON_ZERO' */;
/*!40111 SET @OLD_SQL_NOTES=@@SQL_NOTES, SQL_NOTES=0 */;

--
-- Current Database: `jointest`
--

CREATE DATABASE /*!32312 IF NOT EXISTS*/ `jointest` /*!40100 DEFAULT CHARACTER SET latin1 */;

USE `jointest`;

--
-- Table structure for table `class`
--

DROP TABLE IF EXISTS `class`;
/*!40101 SET @saved_cs_client = @@character_set_client */;
/*!40101 SET character_set_client = utf8 */;
CREATE TABLE `class` (
`id` varchar(10) DEFAULT NULL,
`name` varchar(20) DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
/*!40101 SET character_set_client = @saved_cs_client */;

--
-- Dumping data for table `class`
--

LOCK TABLES `class` WRITE;
/*!40000 ALTER TABLE `class` DISABLE KEYS */;
INSERT INTO `class` VALUES ('1','class1'),('2','class2'),('3','class3'),('4','class4'),('5','class5');
/*!40000 ALTER TABLE `class` ENABLE KEYS */;
UNLOCK TABLES;

--
-- Table structure for table `user`
--

DROP TABLE IF EXISTS `user`;
/*!40101 SET @saved_cs_client = @@character_set_client */;
/*!40101 SET character_set_client = utf8 */;
CREATE TABLE `user` (
`id` varchar(10) DEFAULT NULL,
`name` varchar(20) DEFAULT NULL,
`class_id` varchar(10) DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
/*!40101 SET character_set_client = @saved_cs_client */;

--
-- Dumping data for table `user`
--

LOCK TABLES `user` WRITE;
/*!40000 ALTER TABLE `user` DISABLE KEYS */;
INSERT INTO `user` VALUES ('1','aaa','1'),('2','bbb','1'),('3','ccc','1'),('4','ddd','2'),('5','eee','2'),('6','fff','3'),('7','ggg','7'),('8','hhh','9');
/*!40000 ALTER TABLE `user` ENABLE KEYS */;
UNLOCK TABLES;
/*!40103 SET TIME_ZONE=@OLD_TIME_ZONE */;

/*!40101 SET SQL_MODE=@OLD_SQL_MODE */;
/*!40014 SET FOREIGN_KEY_CHECKS=@OLD_FOREIGN_KEY_CHECKS */;
/*!40014 SET UNIQUE_CHECKS=@OLD_UNIQUE_CHECKS */;
/*!40101 SET CHARACTER_SET_CLIENT=@OLD_CHARACTER_SET_CLIENT */;
/*!40101 SET CHARACTER_SET_RESULTS=@OLD_CHARACTER_SET_RESULTS */;
/*!40101 SET COLLATION_CONNECTION=@OLD_COLLATION_CONNECTION */;
/*!40111 SET SQL_NOTES=@OLD_SQL_NOTES */;

-- Dump completed on 2018-12-09 17:54:20

References

  • https://www.oschina.net/translate/mysql-joins-on-vs-using-vs-theta-style
  • https://www.cnblogs.com/BeginMan/p/3754322.html
  • http://blog.bittiger.io/post198/

Setting up a single-node k8s environment with kubeadm

Posted on 2018-12-08 | | Visitors:

In this experiment we use kubeadm to install a single-node k8s environment.

The experiment runs on a virtual machine with the following configuration:

OS: CentOS Linux release 7.6.1810 (Core)
CPU: 2 vCPU
Memory: 2G
IP: 172.16.143.171

Environment preparation

Install docker

[kube@kube ~]$ yum update
[kube@kube ~]$ yum install docker -y
[kube@kube ~]$ systemctl start docker
[kube@kube ~]$ systemctl enable docker

[kube@kube ~]$ docker --version
Docker version 1.13.1, build 07f3374/1.13.1

Disable swap

swapoff -a
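
Note that swapoff -a only disables swap until the next reboot. A common way to keep it off permanently is to also comment out the swap entry in /etc/fstab; a minimal sketch (the exact fstab line depends on the system):

# comment out any swap entries so swap stays disabled after reboot
sed -i '/ swap / s/^/#/' /etc/fstab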

Install k8s

Configure the Aliyun Kubernetes yum repository

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
exclude=kube*
EOF

Disable selinux

setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

Install kubeadm, kubelet and kubectl

yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

systemctl enable kubelet && systemctl start kubelet

Configure iptables bridge settings on CentOS

cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

sysctl --system
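
Depending on the kernel, the net.bridge.* sysctls above only exist once the br_netfilter module is loaded; a minimal sketch of loading it and keeping it loaded across reboots:

# load the bridge netfilter module so the net.bridge.* sysctls are available
modprobe br_netfilter
# optionally load it automatically on boot
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf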

By default kubeadm pulls its images from k8s.gcr.io, which is not reachable from networks inside China, so we pull the corresponding images from the Aliyun registry and retag them so that kubeadm uses the local copies.

First, list which images are required:

[root@kube ~]# kubeadm config images list
I1208 16:30:01.471171 30018 version.go:94] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://storage.googleapis.com/kubernetes-release/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I1208 16:30:01.471304 30018 version.go:95] falling back to the local client version: v1.13.0
k8s.gcr.io/kube-apiserver:v1.13.0
k8s.gcr.io/kube-controller-manager:v1.13.0
k8s.gcr.io/kube-scheduler:v1.13.0
k8s.gcr.io/kube-proxy:v1.13.0
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.2.24
k8s.gcr.io/coredns:1.2.6

The images can then be pulled from the Aliyun registry and retagged as follows:

images=(
kube-apiserver:v1.13.0
kube-controller-manager:v1.13.0
kube-scheduler:v1.13.0
kube-proxy:v1.13.0
pause:3.1
etcd:3.2.24
coredns:1.2.6
)

for imageName in ${images[@]} ; do
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
done

Initialize the cluster

[root@kube ~]# kubeadm init
I1208 16:38:21.289892 31021 version.go:94] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://storage.googleapis.com/kubernetes-release/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I1208 16:38:21.289971 31021 version.go:95] falling back to the local client version: v1.13.0
[init] Using Kubernetes version: v1.13.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kube localhost] and IPs [172.16.143.171 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kube localhost] and IPs [172.16.143.171 127.0.0.1 ::1]
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kube kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.16.143.171]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 24.003474 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "kube" as an annotation
[mark-control-plane] Marking the node kube as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node kube as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 7ywghw.pbq0fkwpz3c5jozk
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

kubeadm join 172.16.143.171:6443 --token 7ywghw.pbq0fkwpz3c5jozk --discovery-token-ca-cert-hash sha256:b30445a8098e1e025ce703e7c1bae82567e0f892039489630d39608e77a41b89

The output above shows that the k8s master was initialized successfully. Kubernetes recommends operating the cluster as a non-root user, so we create a kube user and grant it sudo privileges.

useradd kube
passwd kube

Grant the kube user sudo privileges via visudo:

kube    ALL=(ALL)       ALL

The following steps copy the k8s config into the user's .kube directory:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

To use the cluster, a network plugin must also be installed. Kubernetes supports many network plugins, such as Calico, Flannel and Weave; here we install Weave Net.

Configure Weave Net

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

By default the master node does not run workloads; since this experiment has only one node, we need to allow the master node to run pods.

Allow the master node to run pods

[kube@kube ~]$ kubectl taint nodes --all node-role.kubernetes.io/master-
node/kube untainted

With that, a simple k8s cluster is up and running. The following commands show the nodes in the cluster and the pods currently running:

[kube@kube ~]$ kubectl get node
NAME STATUS ROLES AGE VERSION
kube Ready master 41m v1.13.0
[kube@kube ~]$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-86c58d9df4-89xn5 1/1 Running 0 41m
kube-system coredns-86c58d9df4-mb96l 1/1 Running 0 41m
kube-system etcd-kube 1/1 Running 2 40m
kube-system kube-apiserver-kube 1/1 Running 2 40m
kube-system kube-controller-manager-kube 1/1 Running 1 40m
kube-system kube-proxy-tgk25 1/1 Running 0 41m
kube-system kube-scheduler-kube 1/1 Running 1 40m
kube-system weave-net-csgmh 2/2 Running 0 27m

References

  • https://kubernetes.io/docs/setup/independent/install-kubeadm/
  • https://zhuanlan.zhihu.com/p/46341911

Deploying ceph with ceph-deploy

Posted on 2018-11-27 | | Visitors:

I have previously deployed ceph with ceph-ansible, which is a good fit for large-scale deployments and supports a variety of deployment modes.
The scenario here is different: the cluster starts small and then needs to be adjusted dynamically by adding and removing services over time.
ceph-ansible has no playbooks for adding or removing an individual service, so it does not suit this case, whereas ceph-deploy makes adding and removing services fairly convenient.

The experiment below walks through deploying the various services of a ceph cluster with ceph-deploy.

Environment

The experiment uses four virtual machines:

Hostname OS Public network Cluster network Role
ceph001 CentOS Linux release 7.5.1804 (Core) 172.16.143.151 172.16.140.151 ceph-deploy
ceph002 CentOS Linux release 7.5.1804 (Core) 172.16.143.152 172.16.140.152 mon osd mgr rgw mds
ceph003 CentOS Linux release 7.5.1804 (Core) 172.16.143.153 172.16.140.153 mon osd mgr rgw mds
ceph004 CentOS Linux release 7.5.1804 (Core) 172.16.143.154 172.16.140.154 mon osd mgr rgw mds

Configuration

  • Disable the firewall and SELinux
  • Set up passwordless SSH from ceph001 to ceph002~4
  • Configure a domestic mirror for the ceph repository

The ceph repository configuration is shown below; the ceph version installed in this experiment is luminous.

[ceph_stable]
baseurl = http://mirrors.ustc.edu.cn/ceph/rpm-luminous/el7/$basearch
gpgcheck = 1
gpgkey = http://mirrors.ustc.edu.cn/ceph/keys/release.asc
name = Ceph Stable repo

[ceph_noarch]
name=Ceph noarch packages
baseurl=http://mirrors.ustc.edu.cn/ceph/rpm-luminous/el7/noarch/
gpgcheck=1
gpgkey=http://mirrors.ustc.edu.cn/ceph/keys/release.asc

Installation

Install the packages

Install ceph-deploy on the ceph001 node, then install ceph on the other nodes and create the cluster:

yum install ceph-deploy -y
ceph-deploy install --release luminous ceph002 ceph003 ceph004
ceph-deploy new --cluster-network 172.16.140.0/24 --public-network 172.16.143.0/24 ceph002 ceph003 ceph004

This generates ceph.conf and ceph.mon.keyring in the current directory.

ceph.conf

[global]
fsid = 9ef68d6d-5117-4064-8969-39f51e91557e
public_network = 172.16.143.0/24
cluster_network = 172.16.140.0/24
mon_initial_members = ceph002, ceph003, ceph004
mon_host = 172.16.143.152,172.16.143.153,172.16.143.154
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

Deploy MON

ceph-deploy mon create ceph002 ceph003 ceph004

Gather the keys

ceph-deploy gatherkeys ceph002 ceph003 ceph004

Create OSDs

ceph-deploy osd create ceph002 --data /dev/sdb
ceph-deploy osd create ceph003 --data /dev/sdb
ceph-deploy osd create ceph004 --data /dev/sdb

Deploy MGR

ceph-deploy mgr create ceph002 ceph003 ceph004

Allow administrators to run ceph commands

ceph-deploy admin ceph002 ceph003 ceph004

Deploy RGW

ceph-deploy rgw create ceph002 ceph003 ceph004

Deploy MDS

ceph-deploy mds create ceph002 ceph003 ceph004
Once the MDS daemons are running, create the CephFS pools and the filesystem:
ceph osd pool create cephfs_metadata 8 8
ceph osd pool create cephfs_data 8 8
ceph fs new cephfs_demo cephfs_metadata cephfs_data
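
To confirm that the filesystem was created and an MDS has become active, a couple of standard checks can be run (a sketch; the output will differ per cluster):

# list CephFS filesystems and show MDS status
ceph fs ls
ceph mds stat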


Summary

ceph-deploy is well suited to deploying small clusters and supports adjusting a cluster dynamically.
In addition, the current version of ceph-deploy has replaced ceph-disk with ceph-volume.
