
Deploying Rook Ceph on Kubernetes

Posted: 2022-05-11 10:25:55


Environment: CentOS 7.6, Kubernetes 1.15.3, Rook 1.3.4

Deploying Rook Ceph

1. Deploy Rook Ceph

Download Rook from the official site and unpack it, then cd rook-1.3.4/cluster/examples/kubernetes/ceph. (Note: the kb command used throughout is a shell alias for kubectl.)

Deploy the CRDs:

kb apply -f common.yaml

Deploy the operator:

kb apply -f operator.yaml
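
As a quick sanity check (not part of the original walkthrough), the operator Pod should reach Running before cluster.yaml is applied; the label selector below follows the Rook examples:

kb -n rook-ceph get pod -l app=rook-ceph-operator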

Edit cluster.yaml. The key changes are setting useAllNodes: false and useAllDevices: false, then listing the OSD nodes and their devices under nodes:

...
storage: # cluster level storage configuration and selection
  useAllNodes: false
  useAllDevices: false
  ... # cluster level config
  # storeType: filestore
  # - name: "172.17.4.301"
  # deviceFilter: "^sd."
  nodes:
  - name: "10.1.1.160"
    devices:
    - name: "sdc" # disk names used by the Ceph OSDs
    - name: "sdd"
    - name: "sde"
    - name: "sdf"
    - name: "sdg"
    - name: "sdh"
    resources:
      limits:
        cpu: "5000m"
        memory: "12288Mi"
      requests:
        cpu: "3000m"
        memory: "6144Mi"
    config:
      metadataDevice: "sdb" # SSD used as the OSD metadata device; optional
  - name: "10.1.1.161"
    devices:
    - name: "sdc"
    - name: "sdd"
    - name: "sde"
    - name: "sdf"
    - name: "sdg"
    - name: "sdh"
    resources:
      limits:
        cpu: "5000m"
        memory: "12288Mi"
      requests:
        cpu: "3000m"
        memory: "6144Mi"
    config:
      metadataDevice: "sdb"
...

Deploy the CephCluster:

kb apply -f cluster.yaml

[root@k8sGUPMaster01 ceph]# kb get po -n rook-ceph
NAME                                            READY   STATUS    RESTARTS   AGE
csi-cephfsplugin-829m7                          3/3     Running   0          3h37m
csi-cephfsplugin-lv9dv                          3/3     Running   0          3h37m
csi-cephfsplugin-provisioner-6ddffd9ddd-hs4kj   5/5     Running   0          3h37m
...
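
Besides the Pod list, another way to confirm the cluster converged is to watch the CephCluster resource itself, which reports the cluster health:

kb -n rook-ceph get cephcluster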

If you need to redeploy over a previous Rook installation, run the cleanup steps below first:

2. Delete the Pods that use PVCs, PVs, or CephFS, then delete the corresponding PVCs and PVs. Also delete the ConfigMaps and Secrets created by RGW (they may become undeletable later).

3. Remove the Rook deployment from the Kubernetes cluster

Note: adjust the path for your environment: cd /root/rook-master/cluster/examples/kubernetes/ceph

[root@k8sGUPMaster01 ceph]# cat /root/rook-master/cluster/examples/kubernetes/ceph/rook-destroy.sh
#!/bin/sh
cd /root/rook-master/cluster/examples/kubernetes/ceph
kubectl delete -n rook-ceph cephblockpool replicapool
kubectl delete storageclass rook-ceph-block
kubectl delete -f csi/cephfs/kube-registry.yaml
kubectl delete storageclass csi-cephfs
kubectl -n rook-ceph delete cephcluster rook-ceph
kubectl delete -f operator.yaml
kubectl delete -f common.yaml

kubectl delete -f common.yaml may hang near the end; you can Ctrl-C and re-run it. This worked in a quick test, but side effects are unknown.
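
A common cause of this hang is a leftover finalizer on the CephCluster object. If re-running the delete does not help, the Rook cleanup documentation suggests removing the finalizer manually; a sketch, to be used with care:

kubectl -n rook-ceph patch cephclusters.ceph.rook.io rook-ceph -p '{"metadata":{"finalizers":[]}}' --type=merge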

4. Run the cleanup on every OSD node; Ansible can be used for this

Copy the script to each OSD node:

ansible nodes -i inventory/*** -m copy -a "src=zap-disk.sh dest=/tmp/zap-disk.sh"

Run sh /tmp/zap-disk.sh on every OSD node to wipe the disks and configuration files:

ansible nodes -i inventory/*** -m shell -a "sh /tmp/zap-disk.sh" --become

The contents of /tmp/zap-disk.sh:

[root@GPU01 ~]# cat /tmp/zap-disk.sh
#!/usr/bin/env bash
i=1
while [ $i -lt 8 ]  # loop over the storage disks /dev/sdb .. /dev/sdh
do
    j=`echo $i|awk '{printf "%c",97+$i}'`
    #echo $j
    DISK="/dev/sd$j"
    sgdisk --zap-all $DISK
    dd if=/dev/zero of="$DISK" bs=1M count=100 oflag=direct,dsync
    i=$(($i+1))
done
# These steps only have to be run once on each node
# If rook sets up osds using ceph-volume, teardown leaves some devices mapped that lock the disks.
ls /dev/mapper/ceph-* | xargs -I% -- dmsetup remove %
# ceph-volume setup can leave ceph-<UUID> directories in /dev (unnecessary clutter)
rm -rf /dev/ceph-*
rm -rf /var/lib/rook
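
To verify the wipe on a node, a quick check (not in the original script) is that lsblk no longer lists ceph-* LVM volumes on the data disks:

lsblk -f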

Notes:

Network problems in the Kubernetes cluster will cause the Rook installation to fail.

Reference:

Cleaning up a Cluster

Deploy the RBD StorageClass

For Kubernetes >= 1.13 (CSI driver):

[root@k8s01 ceph]# cat storageclass.yaml
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph
spec:
  failureDomain: host
  replicated:
    size: 3
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
# Change "rook-ceph" provisioner prefix to match the operator namespace if needed
provisioner: rook-ceph.rbd.csi.ceph.com
parameters:
  # clusterID is the namespace where the rook cluster is running
  clusterID: rook-ceph
  # Ceph pool into which the RBD image shall be created
  pool: replicapool
  # RBD image format. Defaults to "2".
  imageFormat: "2"
  # RBD image features. Available for imageFormat: "2". CSI RBD currently supports only `layering` feature.
  imageFeatures: layering
  # The secrets contain Ceph admin credentials.
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
  # Specify the filesystem type of the volume. If not specified, csi-provisioner
  # will set default as `ext4`. Note that `xfs` is not recommended due to potential deadlock
  # in hyperconverged settings where the volume is mounted on the same node as the osds.
  csi.storage.k8s.io/fstype: ext4
# Delete the rbd volume when a PVC is deleted
reclaimPolicy: Delete

Create the StorageClass:

cd /root/rook-1.3.4/cluster/examples/kubernetes/ceph/csi/rbd
kb create -f storageclass.yaml

For Kubernetes <= 1.12 (flex driver):

[root@k8s01 ceph]# cat /root/rook-1.3.4/cluster/examples/kubernetes/ceph/flex/storageclass.yaml
#################################################################################################
# Create a storage class with a pool that sets replication for a production environment.
# A minimum of 3 nodes with OSDs are required in this example since the default failureDomain is host.
# kubectl create -f storageclass.yaml
#################################################################################################
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph
spec:
  replicated:
    size: 3
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
provisioner: ceph.rook.io/block
# Works for Kubernetes 1.14+
allowVolumeExpansion: true
parameters:
  blockPool: replicapool
  # Specify the namespace of the rook cluster from which to create volumes.
  # If not specified, it will use `rook` as the default namespace of the cluster.
  # This is also the namespace where the cluster will be
  clusterNamespace: rook-ceph
  # Specify the filesystem type of the volume. If not specified, it will use `ext4`.
  fstype: xfs
  # (Optional) Specify an existing Ceph user that will be used for mounting storage with this StorageClass.
  #mountUser: user1
  # (Optional) Specify an existing Kubernetes secret name containing just one key holding the Ceph user secret.
  # The secret must exist in each namespace(s) where the storage will be consumed.
  #mountSecret: ceph-user1-secret

Create the StorageClass:

cd /root/rook-1.3.4/cluster/examples/kubernetes/ceph/flex/
kb create -f storageclass.yaml

Test the RBD StorageClass
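
Before the full MySQL/WordPress test below, a minimal PVC can confirm that dynamic provisioning works. This manifest is a hypothetical example (the name test-pvc is not from the Rook examples); it should show STATUS Bound shortly after kb apply:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  storageClassName: rook-ceph-block
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi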

Deploy MySQL and WordPress:

cd /root/rook-1.3.4/cluster/examples/kubernetes
kb apply -f mysql.yaml
kb apply -f wordpress.yaml
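
If the StorageClass is healthy, the PVCs created by the two manifests bind automatically; a quick check (output varies per cluster):

kb get pvc
kb get pv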

Clean up the environment

Deleting the RBD StorageClass must be done with care; make sure the data is safe first.

kubectl delete -f wordpress.yaml
kubectl delete -f mysql.yaml
kubectl delete -n rook-ceph cephblockpools.ceph.rook.io replicapool
kubectl delete storageclass rook-ceph-block

Deploy the Ceph object storage gateway (RGW)

1. Edit object.yaml, setting name: gpu-store:

apiVersion: ceph.rook.io/v1
kind: CephObjectStore
metadata:
  name: gpu-store
  namespace: rook-ceph
spec:
  metadataPool:
    failureDomain: host
    replicated:
      size: 3
  dataPool:
    failureDomain: host
    erasureCoded:
      dataChunks: 2
      codingChunks: 1
  preservePoolsOnDelete: true
  gateway:
    type: s3
    sslCertificateRef:
    port: 80
    securePort:
    instances: 1

kubectl create -f object.yaml

After the CephObjectStore is applied, the operator creates the necessary pools and other resources to bring up the RGW service.
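
To confirm the gateway started, check for the RGW Pod (label selector per the Rook docs):

kb -n rook-ceph get pod -l app=rook-ceph-rgw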

2. Create a bucket (optional)

Edit storageclass-bucket-delete.yaml:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-bucket
provisioner: ceph.rook.io/bucket
reclaimPolicy: Delete
parameters:
  objectStoreName: gpu-store
  objectStoreNamespace: rook-ceph
  region: us-east-1

kubectl create -f storageclass-bucket-delete.yaml

Edit object-bucket-claim-delete.yaml to create an ObjectBucketClaim (OBC); for every OBC created, Ceph creates a new bucket:

apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: ceph-bucket
spec:
  generateBucketName: ceph-bkt
  storageClassName: rook-ceph-bucket

kubectl create -f object-bucket-claim-delete.yaml
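
Once the OBC is provisioned, a ConfigMap and a Secret named after the claim (ceph-bucket) appear in the claim's namespace; the s3cmd test in the next step reads its connection details from them. A quick check:

kubectl -n default get cm ceph-bucket
kubectl -n default get secret ceph-bucket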

3. Test the object store with s3cmd

export AWS_HOST=$(kubectl -n default get cm ceph-bucket -o yaml | grep BUCKET_HOST | awk '{print $2}')
export AWS_ACCESS_KEY_ID=$(kubectl -n default get secret ceph-bucket -o yaml | grep AWS_ACCESS_KEY_ID | awk '{print $2}' | base64 --decode)
export AWS_SECRET_ACCESS_KEY=$(kubectl -n default get secret ceph-bucket -o yaml | grep AWS_SECRET_ACCESS_KEY | awk '{print $2}' | base64 --decode)
export AWS_ENDPOINT=$AWS_HOST:80

Upload an object:

echo "Hello Rook" > /tmp/rookObjs3cmd put /tmp/rookObj --no-ssl --host=${AWS_HOST} --host-bucket= s3://rookbucket

Download the object:

s3cmd get s3://rookbucket/rookObj /tmp/rookObj-download --no-ssl --host=${AWS_HOST} --host-bucket=
cat /tmp/rookObj-download

Note:

If a resource in the Kubernetes cluster cannot be deleted, do not delete it directly from etcd.

Reference:

Object Storage

Using the object store from outside the cluster

Edit rgw-external.yaml, setting rook_object_store: gpu-store:

[root@k8s01 ceph]# cat rgw-external.yaml
apiVersion: v1
kind: Service
metadata:
  name: rook-ceph-rgw-my-store-external
  namespace: rook-ceph
  labels:
    app: rook-ceph-rgw
    rook_cluster: rook-ceph
    rook_object_store: gpu-store
spec:
  ports:
  - name: rgw
    port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: rook-ceph-rgw
    rook_cluster: rook-ceph
    rook_object_store: gpu-store
  sessionAffinity: None
  type: NodePort

Deploy the NodePort Service:

kb create -f rgw-external.yaml
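
The NodePort assigned to the Service can then be read back; external clients reach the RGW at any node IP on that port:

kb -n rook-ceph get svc rook-ceph-rgw-my-store-external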

Create an object store user

Edit object-user.yaml, setting store: gpu-store:

[root@k8s01 ceph]# cat object-user.yaml
#################################################################################################
# Create an object store user for access to the s3 endpoint.
# kubectl create -f object-user.yaml
#################################################################################################
apiVersion: ceph.rook.io/v1
kind: CephObjectStoreUser
metadata:
  name: uat-user
  namespace: rook-ceph
spec:
  store: gpu-store
  displayName: "auth by uat"

Deploy the CephObjectStoreUser; once it is created, the Rook operator automatically creates the corresponding RGW user in the CephObjectStore:

kubectl create -f object-user.yaml

Retrieve the RGW user's AccessKey and SecretKey:

kubectl -n rook-ceph get secret rook-ceph-object-user-gpu-store-uat-user -o yaml | grep AccessKey | awk '{print $2}' | base64 --decode
kubectl -n rook-ceph get secret rook-ceph-object-user-gpu-store-uat-user -o yaml | grep SecretKey | awk '{print $2}' | base64 --decode

Log in to the toolbox to run ceph commands

Deploy the toolbox Pod:

[root@k8s01 ceph]# kb apply -f toolbox.yaml
deployment.apps/rook-ceph-tools created
[root@k8s01 ceph]# pwd
/root/rook-1.3.4/cluster/examples/kubernetes/ceph

Log in to the toolbox:

[root@k8s01 ceph]# kb -n rook-ceph exec -it $(kubectl -n rook-ceph get po -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}') bash
bash: warning: setlocale: LC_CTYPE: cannot change locale (en_US.UTF-8): No such file or directory
bash: warning: setlocale: LC_COLLATE: cannot change locale (en_US.UTF-8): No such file or directory
bash: warning: setlocale: LC_MESSAGES: cannot change locale (en_US.UTF-8): No such file or directory
bash: warning: setlocale: LC_NUMERIC: cannot change locale (en_US.UTF-8): No such file or directory
bash: warning: setlocale: LC_TIME: cannot change locale (en_US.UTF-8): No such file or directory
[root@rook-ceph-tools-68b66b77db-jtb4q /]# ceph -s
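
Inside the toolbox the usual Ceph status commands are available, for example:

ceph status
ceph osd status
ceph df
rados df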

Ceph dashboard

Enable the NodePort:

cd /root/rook-1.3.4/cluster/examples/kubernetes/ceph
kb apply -f dashboard-external-https.yaml

Retrieve the login password:

kb get secret rook-ceph-dashboard-password -n rook-ceph -o jsonpath='{.data.password}' | base64 -d
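
The dashboard is then reachable over HTTPS at any node IP on the NodePort assigned by the manifest above; assuming the Service name from the stock dashboard-external-https.yaml:

kb -n rook-ceph get svc rook-ceph-mgr-dashboard-external-https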

Showing the object storage gateway in the dashboard

This could not be achieved by following the official Rook 1.3.4 documentation.
