Deploying Ceph on a Kubernetes Cluster with Rook


1. Environment Requirements

  • Kubernetes cluster: v1.16 or later
  • At least 3 worker nodes
  • At least one unused disk (no filesystem or partitions) on each worker node
  • Rook supports deploying Ceph Nautilus and later releases

Rook documentation: https://rook.io/docs/rook/latest-release/Getting-Started/intro/
GitHub: https://github.com/rook/rook
Latest Rook release at the time of writing: v1.13.7
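
Before starting, it is worth confirming that each worker node really does have an unused disk, since Rook will only consume devices that carry no filesystem or partition table. A quick check (run on each worker; device names are environment-specific):

[root@k8s-node01 ~]# lsblk -f     # a disk with an empty FSTYPE column and no mountpoint can be consumed by Rook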

2. Preparation

Check the cluster nodes

[root@k8s-master01 ~]# kubectl get node 
NAME           STATUS   ROLES                       AGE   VERSION
k8s-master01   Ready    control-plane,etcd,master   42h   v1.24.14+rke2r1
k8s-master02   Ready    control-plane,etcd,master   42h   v1.24.14+rke2r1
k8s-master03   Ready    control-plane,etcd,master   42h   v1.24.14+rke2r1
k8s-node01     Ready    <none>                      42h   v1.24.14+rke2r1
k8s-node02     Ready    <none>                      42h   v1.24.14+rke2r1
k8s-node03     Ready    <none>                      42h   v1.24.14+rke2r1

If the lab environment has only one master and two workers, the master must also be schedulable to guarantee at least three usable nodes, so any taint on the master node needs to be removed.

Check whether any taints exist

[root@k8s-master01 ~]# kubectl describe nodes | grep Taints:
Taints:             <none>
Taints:             <none>
Taints:             <none>
Taints:             <none>
Taints:             <none>
Taints:             <none>
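
If a master node does carry a taint, it can be removed with a command like the following (a sketch; the exact taint key depends on the distribution, e.g. node-role.kubernetes.io/master or node-role.kubernetes.io/control-plane):

[root@k8s-master01 ~]# kubectl taint nodes k8s-master01 node-role.kubernetes.io/control-plane:NoSchedule-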

Clone the Rook source

[root@k8s-master01 ~]# git clone https://github.com/rook/rook.git
Cloning into 'rook'...
remote: Enumerating objects: 127490, done.
remote: Counting objects: 100% (6499/6499), done.
remote: Compressing objects: 100% (355/355), done.
remote: Total 127490 (delta 6282), reused 6180 (delta 6144), pack-reused 120991
Receiving objects: 100% (127490/127490), 59.90 MiB | 2.26 MiB/s, done.
Resolving deltas: 100% (88478/88478), done.
[root@k8s-master01 ~]# ls
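
To pin the examples to the release noted above instead of the moving master branch, the clone can target a tag:

[root@k8s-master01 ~]# git clone --single-branch --branch v1.13.7 https://github.com/rook/rook.git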

3. Deployment

3.1. Deploying Rook and Ceph

Enter the examples directory

[root@k8s-master01 ~]# cd rook/deploy/examples/
[root@k8s-master01 examples]# ls
bucket-notification-endpoint.yaml           csi                            object-bucket-claim-a.yaml             pool-builtin-mgr.yaml
bucket-notification.yaml                    csi-ceph-conf-override.yaml    object-bucket-claim-delete.yaml        pool-ec.yaml
bucket-topic.yaml                           dashboard-external-https.yaml  object-bucket-claim-notification.yaml  pool-mirrored.yaml
ceph-client.yaml                            dashboard-external-http.yaml   object-bucket-claim-retain.yaml        pool-test.yaml
cluster-external-management.yaml            dashboard-ingress-https.yaml   object-b.yaml                          pool.yaml
cluster-external.yaml                       dashboard-loadbalancer.yaml    object-ec.yaml                         psp.yaml
cluster-multus-test.yaml                    direct-mount.yaml              object-external.yaml                   radosnamespace.yaml
cluster-on-local-pvc.yaml                   filesystem-ec.yaml             object-multisite-pull-realm-test.yaml  rbdmirror.yaml
cluster-on-pvc-minikube.yaml                filesystem-mirror.yaml         object-multisite-pull-realm.yaml       README.md
cluster-on-pvc.yaml                         filesystem-test.yaml           object-multisite-test.yaml             rgw-external.yaml
cluster-stretched-aws.yaml                  filesystem.yaml                object-multisite.yaml                  sqlitevfs-client.yaml
cluster-stretched.yaml                      images.txt                     object-openshift.yaml                  storageclass-bucket-a.yaml
cluster-test.yaml                           import-external-cluster.sh     object-shared-pools-test.yaml          storageclass-bucket-delete.yaml
cluster.yaml                                kustomization.yaml             object-shared-pools.yaml               storageclass-bucket-retain.yaml
common-external.yaml                        monitoring                     object-test.yaml                       subvolumegroup.yaml
common-second-cluster.yaml                  multus-validation.yaml         object-user.yaml                       toolbox-job.yaml
common.yaml                                 mysql.yaml                     object.yaml                            toolbox-operator-image.yaml
cosi                                        nfs-load-balancer.yaml         operator-openshift.yaml                toolbox.yaml
crds.yaml                                   nfs-test.yaml                  operator.yaml                          volume-replication-class.yaml
create-external-cluster-resources.py        nfs.yaml                       osd-env-override.yaml                  volume-replication.yaml
create-external-cluster-resources-tests.py  object-a.yaml                  osd-purge.yaml                         wordpress.yaml

Create the Rook operator

[root@k8s-master01 examples]# kubectl apply -f crds.yaml -f common.yaml -f operator.yaml
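
Before creating the cluster, wait until the operator pod is Running (the app label here is taken from the example manifests):

[root@k8s-master01 examples]# kubectl -n rook-ceph get pod -l app=rook-ceph-operator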

Deploy the Ceph cluster

[root@k8s-master01 examples]# kubectl apply -f cluster.yaml

Deploy the Rook Ceph toolbox

[root@k8s-master01 examples]# kubectl apply  -f toolbox.yaml

Expose the Ceph dashboard

[root@k8s-master01 examples]# kubectl apply  -f dashboard-external-https.yaml
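
All of the Ceph components are now pulled and started in the rook-ceph namespace; progress can be followed with:

[root@k8s-master01 ~]# kubectl -n rook-ceph get pod -w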

Pulling all the images takes a long time. Once everything is up, the pods look like this:

[root@k8s-master01 ~]# kubectl get pod -n rook-ceph 
NAME                                                     READY   STATUS      RESTARTS       AGE
csi-cephfsplugin-9wxt8                                   2/2     Running     1 (85m ago)    86m
csi-cephfsplugin-gzmqh                                   2/2     Running     1 (127m ago)   128m
csi-cephfsplugin-mggt9                                   2/2     Running     0              128m
csi-cephfsplugin-provisioner-6fcc66b9c4-m2tfq            5/5     Running     4 (125m ago)   128m
csi-cephfsplugin-provisioner-6fcc66b9c4-wbtnt            5/5     Running     3 (123m ago)   128m
csi-cephfsplugin-rs2qn                                   2/2     Running     1 (127m ago)   128m
csi-cephfsplugin-rth25                                   2/2     Running     1 (127m ago)   128m
csi-cephfsplugin-ztf5j                                   2/2     Running     1 (127m ago)   128m
csi-rbdplugin-4qdsc                                      2/2     Running     0              128m
csi-rbdplugin-8gps6                                      2/2     Running     2 (113m ago)   128m
csi-rbdplugin-bdx57                                      2/2     Running     1 (127m ago)   128m
csi-rbdplugin-n8k65                                      2/2     Running     1 (127m ago)   128m
csi-rbdplugin-provisioner-789598df9c-b9vh7               5/5     Running     4 (125m ago)   128m
csi-rbdplugin-provisioner-789598df9c-k7zl7               5/5     Running     4 (123m ago)   128m
csi-rbdplugin-vqb4z                                      2/2     Running     1 (127m ago)   128m
csi-rbdplugin-w22rz                                      2/2     Running     1 (127m ago)   128m
rook-ceph-crashcollector-k8s-master01-6c6cfbddb5-4qqss   1/1     Running     0              126m
rook-ceph-crashcollector-k8s-master02-ddb7b55bd-c9qdn    1/1     Running     0              75m
rook-ceph-crashcollector-k8s-master03-5c99f48b5f-mfrws   1/1     Running     0              125m
rook-ceph-crashcollector-k8s-node01-ddd6dc5d7-bfbwp      1/1     Running     0              130m
rook-ceph-crashcollector-k8s-node02-749d5cb5bd-4x87q     1/1     Running     0              131m
rook-ceph-crashcollector-k8s-node03-7c45c7884c-4vk6q     1/1     Running     0              75m
rook-ceph-exporter-k8s-master01-688ddb8bb4-vl9gh         1/1     Running     0              126m
rook-ceph-exporter-k8s-master02-6dcf649765-dqxzp         1/1     Running     0              75m
rook-ceph-exporter-k8s-master03-7f95bf6746-mz26h         1/1     Running     0              125m
rook-ceph-exporter-k8s-node01-bd8d4ccd-v6llg             1/1     Running     0              130m
rook-ceph-exporter-k8s-node02-65b7cb9dbf-d8k6k           1/1     Running     0              131m
rook-ceph-exporter-k8s-node03-5c6f74b56b-cszsc           1/1     Running     0              75m
rook-ceph-mds-myfs-a-dfcc7b477-vkssn                     2/2     Running     0              75m
rook-ceph-mds-myfs-b-7bd464db6c-5psck                    2/2     Running     0              75m
rook-ceph-mgr-a-6dcddbdf9b-v878r                         3/3     Running     0              131m
rook-ceph-mgr-b-7c886b669f-xcnjr                         3/3     Running     0              131m
rook-ceph-mon-a-79bcb7c9d4-l5lqm                         2/2     Running     0              132m
rook-ceph-mon-b-589c9df79b-q496v                         2/2     Running     0              131m
rook-ceph-mon-c-66567476fc-gq4mf                         2/2     Running     0              131m
rook-ceph-operator-58b85db8cc-g4xkz                      1/1     Running     0              139m
rook-ceph-osd-0-65d64666c6-mqsj7                         2/2     Running     0              131m
rook-ceph-osd-1-cf674ff67-hm9d2                          2/2     Running     0              130m
rook-ceph-osd-2-647646ddc8-6kwr5                         2/2     Running     0              130m
rook-ceph-osd-3-6b7dc687f6-rz4pd                         2/2     Running     0              126m
rook-ceph-osd-4-6b7cd56c94-mkpsf                         2/2     Running     0              125m
rook-ceph-osd-5-64b48cc58b-wx4tz                         2/2     Running     0              125m
rook-ceph-osd-prepare-k8s-master01-lbrbz                 0/1     Completed   0              124m
rook-ceph-osd-prepare-k8s-master02-8lrbg                 0/1     Completed   0              124m
rook-ceph-osd-prepare-k8s-master03-lxrn5                 0/1     Completed   0              124m
rook-ceph-osd-prepare-k8s-node01-c7zlf                   0/1     Completed   0              124m
rook-ceph-osd-prepare-k8s-node02-httvc                   0/1     Completed   0              124m
rook-ceph-osd-prepare-k8s-node03-dcx55                   0/1     Completed   0              125m
rook-ceph-tools-5464d6745c-dmr4g                         1/1     Running     0              137m
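
The overall state can also be read from the CephCluster resource itself; the PHASE and HEALTH columns should eventually show Ready and HEALTH_OK:

[root@k8s-master01 ~]# kubectl -n rook-ceph get cephcluster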

Log in to the dashboard: retrieve the password

[root@k8s-master01 ~]# kubectl get svc -n rook-ceph 
NAME                                     TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
rook-ceph-exporter                       ClusterIP   10.43.252.218   <none>        9926/TCP            26m
rook-ceph-mgr                            ClusterIP   10.43.86.246    <none>        9283/TCP            26m
rook-ceph-mgr-dashboard                  ClusterIP   10.43.15.17     <none>        8443/TCP            26m
rook-ceph-mgr-dashboard-external-https   NodePort    10.43.83.138    <none>        8443:30163/TCP      30m
rook-ceph-mon-a                          ClusterIP   10.43.225.12    <none>        6789/TCP,3300/TCP   27m
rook-ceph-mon-b                          ClusterIP   10.43.20.196    <none>        6789/TCP,3300/TCP   26m
rook-ceph-mon-c                          ClusterIP   10.43.196.174   <none>        6789/TCP,3300/TCP   26m

  • The service listing shows that the dashboard is exposed through a NodePort mapped to port 30163, so it is reachable at https://<any-node-ip>:30163.

  • Retrieve the dashboard password

# List the secrets; the dashboard password is stored in rook-ceph-dashboard-password
[root@k8s-master01 examples]# kubectl get secrets -n rook-ceph 
NAME                                TYPE                 DATA   AGE
cluster-peer-token-rook-ceph        kubernetes.io/rook   2      21m
rook-ceph-admin-keyring             kubernetes.io/rook   1      22m
rook-ceph-config                    kubernetes.io/rook   2      22m
rook-ceph-crash-collector-keyring   kubernetes.io/rook   1      21m
rook-ceph-dashboard-password        kubernetes.io/rook   1      21m
rook-ceph-exporter-keyring          kubernetes.io/rook   1      21m
rook-ceph-mgr-a-keyring             kubernetes.io/rook   1      21m
rook-ceph-mgr-b-keyring             kubernetes.io/rook   1      21m
rook-ceph-mon                       kubernetes.io/rook   4      22m
rook-ceph-mons-keyring              kubernetes.io/rook   1      22m
rook-csi-cephfs-node                kubernetes.io/rook   2      21m
rook-csi-cephfs-provisioner         kubernetes.io/rook   2      21m
rook-csi-rbd-node                   kubernetes.io/rook   2      21m
rook-csi-rbd-provisioner            kubernetes.io/rook   2      21m

# Print the dashboard secret as YAML
[root@k8s-master01 examples]# kubectl get secrets -n rook-ceph rook-ceph-dashboard-password -o yaml
apiVersion: v1
data:
  password: PDI3eCxCW198VU0pWT0vVF5URm8=			# this is the password, base64-encoded (not encrypted)
kind: Secret
metadata:
  creationTimestamp: "2024-03-18T08:09:10Z"
  name: rook-ceph-dashboard-password
  namespace: rook-ceph
  ownerReferences:
  - apiVersion: ceph.rook.io/v1
    blockOwnerDeletion: true
    controller: true
    kind: CephCluster
    name: rook-ceph
    uid: 7b0001c3-cf5c-4180-a293-72339216a532
  resourceVersion: "699877"
  uid: 56520bec-ce15-49c5-abb5-fca6aa14c51d
type: kubernetes.io/rook

# base64-decode PDI3eCxCW198VU0pWT0vVF5URm8=
[root@k8s-master01 examples]# echo 'PDI3eCxCW198VU0pWT0vVF5URm8=' | base64 -d
<27x,B[_|UM)Y=/T^TFo[root@k8s-master01 examples]#

# The resulting password: <27x,B[_|UM)Y=/T^TFo
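
# The two steps above can be collapsed into a single jsonpath query:
[root@k8s-master01 examples]# kubectl -n rook-ceph get secret rook-ceph-dashboard-password -o jsonpath="{.data.password}" | base64 -d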

Log in to the dashboard and change the password

Enter the toolbox environment

[root@k8s-master01 ~]# kubectl -n rook-ceph exec -it rook-ceph-tools-5464d6745c-dmr4g -- /bin/bash
bash-4.4$
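
Besides the web UI, the dashboard password can also be changed from inside the toolbox (a sketch; assumes the default admin account, and that this Ceph release reads the new password from a file via -i):

bash-4.4$ echo 'MyNewPassw0rd!' > /tmp/dashboard-password    # hypothetical replacement password
bash-4.4$ ceph dashboard ac-user-set-password admin -i /tmp/dashboard-password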

Check the Ceph cluster

bash-4.4$ ceph -s
  cluster:
    id:     d1d41650-16b0-4a52-92d7-07fecdda45c0
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum a,b,c (age 48m)
    mgr: a(active, since 41m), standbys: b
    osd: 6 osds: 6 up (since 41m), 6 in (since 42m)
 
  data:
    pools:   1 pools, 1 pgs
    objects: 2 objects, 449 KiB
    usage:   560 MiB used, 1.2 TiB / 1.2 TiB avail
    pgs:     1 active+clean
 
bash-4.4$ ceph osd tree
ID   CLASS  WEIGHT   TYPE NAME              STATUS  REWEIGHT  PRI-AFF
 -1         1.17178  root default                                    
 -9         0.19530      host k8s-master01                           
  3    hdd  0.19530          osd.3              up   1.00000  1.00000
-13         0.19530      host k8s-master02                           
  5    hdd  0.19530          osd.5              up   1.00000  1.00000
-11         0.19530      host k8s-master03                           
  4    hdd  0.19530          osd.4              up   1.00000  1.00000
 -7         0.19530      host k8s-node01                             
  2    hdd  0.19530          osd.2              up   1.00000  1.00000
 -5         0.19530      host k8s-node02                             
  1    hdd  0.19530          osd.1              up   1.00000  1.00000
 -3         0.19530      host k8s-node03                             
  0    hdd  0.19530          osd.0              up   1.00000  1.00000

bash-4.4$ ceph df 
--- RAW STORAGE ---
CLASS     SIZE    AVAIL     USED  RAW USED  %RAW USED
hdd    1.2 TiB  1.2 TiB  560 MiB   560 MiB       0.05
TOTAL  1.2 TiB  1.2 TiB  560 MiB   560 MiB       0.05
 
--- POOLS ---
POOL  ID  PGS   STORED  OBJECTS     USED  %USED  MAX AVAIL
.mgr   1    1  449 KiB        2  1.3 MiB      0    379 GiB

bash-4.4$ ceph osd pool ls
.mgr

3.2. Deploying RBD and CephFS

Deploy the RBD storage class

[root@k8s-master01 examples]# kubectl apply -f csi/rbd/storageclass.yaml

Deploy the CephFS filesystem and storage class

[root@k8s-master01 examples]# kubectl apply -f filesystem.yaml
[root@k8s-master01 examples]# kubectl apply -f csi/cephfs/storageclass.yaml

Verify the storage classes

[root@k8s-master01 examples]# kubectl get sc
NAME              PROVISIONER                     RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
rook-ceph-block   rook-ceph.rbd.csi.ceph.com      Delete          Immediate           true                   86s
rook-cephfs       rook-ceph.cephfs.csi.ceph.com   Delete          Immediate           true                   33s
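
Optionally, one of the classes can be annotated as the cluster default so that PVCs without an explicit storageClassName use it:

[root@k8s-master01 examples]# kubectl patch storageclass rook-ceph-block -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'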

Create a custom PVC manifest

[root@k8s-master01 examples]# vim pvc.yaml
[root@k8s-master01 examples]# cat pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  namespace: default
  labels:
    app: wordpress
spec:
  storageClassName: rook-ceph-block
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi


[root@k8s-master01 examples]# kubectl apply -f pvc.yaml 
persistentvolumeclaim/mysql-pv-claim created


[root@k8s-master01 examples]# kubectl get pvc
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
mysql-pv-claim   Bound    pvc-a30c7b76-6905-4efe-8449-60be4d794bef   20Gi       RWO            rook-ceph-block   25s
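
The CephFS class can be exercised the same way; because CephFS is a shared filesystem it also supports ReadWriteMany claims. A minimal sketch (the claim name is hypothetical):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-test-claim        # hypothetical name for illustration
  namespace: default
spec:
  storageClassName: rook-cephfs
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 5Gi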

3.3. Testing with WordPress

Apply the MySQL and WordPress manifests

[root@k8s-master01 examples]# kubectl apply -f mysql.yaml -f wordpress.yaml

Check the PVCs

[root@k8s-master01 examples]# kubectl get pvc -l app=wordpress
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
mysql-pv-claim   Bound    pvc-a30c7b76-6905-4efe-8449-60be4d794bef   20Gi       RWO            rook-ceph-block   5m12s
wp-pv-claim      Bound    pvc-36609820-4f42-4ccc-b0e9-1826f6bbb2de   20Gi       RWO            rook-ceph-block   39s

Check the services

[root@k8s-master01 examples]# kubectl get svc
NAME              TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)        AGE
kubernetes        ClusterIP      10.43.0.1       <none>         443/TCP        44h
wordpress         LoadBalancer   10.43.152.165   192.168.0.17   80:32758/TCP   19m
wordpress-mysql   ClusterIP      None            <none>         3306/TCP       19m

# wordpress is a LoadBalancer service; its external IP is 192.168.0.17

Access WordPress in a browser at http://192.168.0.17/.

Inspect the Ceph pools and objects

[root@k8s-master01 examples]# kubectl -n rook-ceph exec -it rook-ceph-tools-5464d6745c-dmr4g -- /bin/bash

bash-4.4$ ceph osd pool ls
.mgr
replicapool
myfs-metadata
myfs-replicated

bash-4.4$ rados ls --pool replicapool
rbd_data.1755cd10886a.0000000000000ff8
rbd_data.1755be2b9325.0000000000000fa0
rbd_data.1755be2b9325.0000000000000001
rbd_data.1755cd10886a.0000000000000a00
rbd_data.1755cd10886a.0000000000000c23
rbd_data.1755cd10886a.0000000000000feb
rbd_data.1755be2b9325.0000000000000600
rbd_data.1755cd10886a.0000000000000a23
rbd_data.1755cd10886a.0000000000000fed
rbd_data.1755cd10886a.0000000000000fee
rbd_data.1755be2b9325.0000000000001008
rbd_data.1755cd10886a.0000000000000ff2
rbd_data.1755be2b9325.00000000000000e0
rbd_data.1755cd10886a.0000000000000ff1
csi.volume.47c4fc60-c93c-4aa0-bbbe-8ae5b0479ff8
rbd_data.1755be2b9325.0000000000000320
rbd_data.1755cd10886a.0000000000000ffe
rbd_data.1755cd10886a.0000000000000fea
rbd_data.1755cd10886a.0000000000000fe5
rbd_data.1755be2b9325.0000000000000c08
rbd_data.1755cd10886a.0000000000000be7
rbd_data.1755be2b9325.0000000000000fe1
rbd_data.1755cd10886a.0000000000000fe3
rbd_data.1755be2b9325.0000000000000808
rbd_data.1755cd10886a.0000000000000c22
rbd_data.1755cd10886a.0000000000000a20
rbd_data.1755be2b9325.0000000000000e00
rbd_data.1755be2b9325.0000000000000024
rbd_data.1755be2b9325.0000000000000820
rbd_data.1755be2b9325.0000000000000023
rbd_data.1755be2b9325.0000000000001022
rbd_data.1755cd10886a.0000000000000be5
rbd_directory
rbd_data.1755be2b9325.0000000000000a20
rbd_data.1755cd10886a.0000000000000fe1
rbd_data.1755cd10886a.0000000000000e00
rbd_data.1755be2b9325.0000000000000009
rbd_data.1755cd10886a.0000000000000fe6
rbd_data.1755cd10886a.0000000000000400
rbd_data.1755be2b9325.0000000000000020
rbd_data.1755cd10886a.0000000000000be4
rbd_id.csi-vol-47c4fc60-c93c-4aa0-bbbe-8ae5b0479ff8
rbd_info
rbd_data.1755cd10886a.0000000000000a21
rbd_data.1755be2b9325.0000000000000fe0
rbd_data.1755be2b9325.0000000000000021
rbd_data.1755be2b9325.0000000000000400
rbd_data.1755cd10886a.0000000000000bec
rbd_header.1755be2b9325
rbd_data.1755be2b9325.0000000000000c21
rbd_data.1755be2b9325.00000000000000a0
rbd_data.1755cd10886a.0000000000000be8
rbd_data.1755cd10886a.0000000000000fef
rbd_data.1755cd10886a.0000000000000fe0
rbd_data.1755cd10886a.0000000000000320
rbd_data.1755cd10886a.0000000000000ff9
rbd_data.1755cd10886a.0000000000000be0
rbd_data.1755cd10886a.0000000000000ff6
rbd_data.1755cd10886a.0000000000000ff3
rbd_data.1755cd10886a.0000000000000bea
rbd_data.1755cd10886a.0000000000000c24
rbd_data.1755be2b9325.0000000000000200
rbd_data.1755be2b9325.0000000000000a00
rbd_data.1755be2b9325.0000000000000025
rbd_data.1755cd10886a.0000000000000be2
rbd_data.1755cd10886a.00000000000000a0
rbd_data.1755cd10886a.0000000000000be1
rbd_data.1755cd10886a.0000000000000ffd
rbd_data.1755cd10886a.0000000000000c20
rbd_data.1755cd10886a.0000000000000fe2
rbd_data.1755be2b9325.0000000000001021
rbd_data.1755cd10886a.0000000000000620
rbd_data.1755be2b9325.0000000000000822
rbd_data.1755cd10886a.0000000000000020
rbd_data.1755be2b9325.0000000000000000
rbd_data.1755be2b9325.0000000000000c22
rbd_data.1755cd10886a.0000000000000be9
rbd_id.csi-vol-9a3007f6-47dd-4e71-9c52-8e7974e7a96b
rbd_data.1755cd10886a.0000000000000600
rbd_data.1755cd10886a.0000000000000001
rbd_data.1755cd10886a.0000000000000c21
rbd_data.1755be2b9325.0000000000000800
rbd_data.1755be2b9325.0000000000000060
rbd_data.1755cd10886a.0000000000000be3
rbd_data.1755be2b9325.0000000000001000
rbd_data.1755cd10886a.0000000000000c08
rbd_header.1755cd10886a
csi.volume.9a3007f6-47dd-4e71-9c52-8e7974e7a96b
rbd_data.1755be2b9325.0000000000000620
rbd_data.1755cd10886a.0000000000001008
rbd_data.1755cd10886a.0000000000000beb
rbd_data.1755be2b9325.0000000000000823
rbd_data.1755be2b9325.0000000000000120
csi.volumes.default
rbd_data.1755cd10886a.0000000000000120
rbd_data.1755cd10886a.0000000000000fe4
rbd_data.1755cd10886a.0000000000000200
rbd_data.1755cd10886a.0000000000000fec
rbd_data.1755cd10886a.0000000000000060
rbd_data.1755cd10886a.0000000000000009
rbd_data.1755cd10886a.0000000000000fa0
rbd_data.1755be2b9325.0000000000000022
rbd_data.1755be2b9325.0000000000001020
rbd_data.1755cd10886a.0000000000000ff7
rbd_data.1755cd10886a.0000000000001208
rbd_data.1755cd10886a.0000000000001000
rbd_data.1755cd10886a.0000000000001200
rbd_data.1755cd10886a.0000000000000ffa
rbd_data.1755cd10886a.0000000000000fe9
rbd_data.1755cd10886a.0000000000000360
rbd_data.1755cd10886a.00000000000000e0
rbd_data.1755cd10886a.0000000000000ff5
rbd_data.1755cd10886a.0000000000000208
rbd_data.1755cd10886a.0000000000000000
rbd_data.1755cd10886a.0000000000000800
rbd_data.1755cd10886a.0000000000000ffb
rbd_data.1755be2b9325.0000000000000c20
rbd_data.1755cd10886a.0000000000000fe7
rbd_data.1755cd10886a.0000000000000fe8
rbd_data.1755cd10886a.0000000000000a22
rbd_data.1755cd10886a.0000000000000ff0
rbd_data.1755be2b9325.0000000000001200
rbd_data.1755cd10886a.0000000000000be6
rbd_data.1755cd10886a.0000000000000ff4
rbd_data.1755be2b9325.0000000000001023
rbd_data.1755be2b9325.0000000000000821
rbd_data.1755cd10886a.0000000000000c00
rbd_data.1755be2b9325.0000000000000a21
rbd_data.1755cd10886a.0000000000000ffc
rbd_data.1755be2b9325.0000000000000c00
rbd_data.1755be2b9325.0000000000000360
rbd_data.1755cd10886a.0000000000000bee
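
The two rbd_id.csi-vol-* objects above identify the RBD images backing the two PVCs; the images themselves can be listed from the toolbox:

bash-4.4$ rbd ls --pool replicapool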