6. Using Ceph Storage in K8S

1. Overview of PV and PVC

Managing storage is a distinct problem from managing compute. The PersistentVolume subsystem provides an API for users and administrators that abstracts the details of how storage is provided from how it is consumed. To that end, two API resources were introduced: PersistentVolume and PersistentVolumeClaim.

A PersistentVolume (PV) is a piece of network storage in the cluster that has been provisioned by an administrator. It is a resource in the cluster, just as a node is a cluster resource. PVs are volume plugins, like Volumes, but have a lifecycle independent of any individual pod that uses the PV. This API object captures the details of the storage implementation, be it NFS, iSCSI, or a cloud-provider-specific storage system.

A PersistentVolumeClaim (PVC) is a request for storage by a user. It is similar to a pod: pods consume node resources, and PVCs consume PV resources. Pods can request specific levels of resources (CPU and memory); claims can request specific sizes and access modes.

While PersistentVolumeClaims let users consume abstract storage resources, it is common for users to need PersistentVolumes with varying properties, such as performance, for different problems. Cluster administrators need to be able to offer a variety of PersistentVolumes that differ in more than just size and access mode, without exposing users to the details of how those volumes are implemented. For these needs there is the StorageClass resource.

A StorageClass gives administrators a way to describe the "classes" of storage they offer. Different classes might map to quality-of-service levels, to backup policies, or to arbitrary policies determined by the cluster administrators. Kubernetes itself is unopinionated about what classes represent. This concept is sometimes called a "profile" in other storage systems.
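
As a rough sketch of the relationship (the names, size, and NFS server below are purely illustrative and not part of the setup described later): an administrator publishes a PV, and a user claims it with a PVC whose size and access mode the PV can satisfy.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-demo                 # illustrative PV published by an administrator
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  nfs:                          # the implementation detail lives in the PV, not in the PVC
    server: 192.168.171.100     # hypothetical NFS server
    path: /exports/demo
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-demo                # illustrative claim made by a user
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi              # the claim only asks for a size and an access mode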

2. Dynamic Provisioning for Pods

Dynamic provisioning automatically creates PVs for you: however much space is requested, a PV of that size is created. Kubernetes creates the PV on your behalf; when a PVC is created, the API calls the storage class to provision a matching PV.

With static provisioning, we have to create PVs by hand, and if resources are insufficient and no suitable PV can be found, the pod sits in the Pending state. Dynamic provisioning is implemented mainly through the StorageClass object: it declares which storage backend to use, handles the connection to it, and then creates PVs for you automatically.
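
A minimal sketch of that flow (the class name and provisioner here are placeholders; the concrete RBD and CephFS classes are built step by step in the sections below): the PVC names a StorageClass, and a matching PV is created automatically when the claim is submitted.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-class                   # placeholder class
provisioner: example.com/provisioner    # external provisioner that creates PVs on demand
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: example-class       # ties the claim to the class; no PV is pre-created
  resources:
    requests:
      storage: 1Gi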

3. Using RBD as a Persistent Data Volume for Pods

3.1. Installation and Configuration

RBD supports two access modes: ReadWriteOnce and ReadOnlyMany.

3.1.1. Deploying rbd-provisioner

# The following extra steps are required on clusters deployed with kubeadm.
# When using dynamic storage, controller-manager needs the rbd command to create images,
# but the official controller-manager image does not ship the rbd binary.
# Without the workaround below, PVC creation will fail.
# Related issue: https://github.com/kubernetes/kubernetes/issues/38923

cat >external-storage-rbd-provisioner.yaml<<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: rbd-provisioner
  namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-provisioner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["kube-dns"]
    verbs: ["list", "get"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-provisioner
subjects:
  - kind: ServiceAccount
    name: rbd-provisioner
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: rbd-provisioner
  apiGroup: rbac.authorization.k8s.io

---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: rbd-provisioner
  namespace: kube-system
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: rbd-provisioner
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: rbd-provisioner
subjects:
- kind: ServiceAccount
  name: rbd-provisioner
  namespace: kube-system

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rbd-provisioner
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: rbd-provisioner
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: rbd-provisioner
    spec:
      containers:
      - name: rbd-provisioner
        image: "quay.io/external_storage/rbd-provisioner:v2.0.0-k8s1.11"
        env:
        - name: PROVISIONER_NAME
          value: ceph.com/rbd
      serviceAccount: rbd-provisioner
EOF

# kubectl apply -f external-storage-rbd-provisioner.yaml

# Check the status; wait until the pod is Running before proceeding
# kubectl get pod -n kube-system

3.1.2. Preparing Ceph Access (client, pool, user, secrets)

1. When a pod is created, kubelet uses the rbd command to detect, map, and mount the Ceph image backing the PV, so the Ceph client (ceph-common) must be installed on all worker nodes. Also copy Ceph's ceph.client.admin.keyring and ceph.conf files into /etc/ceph on the master:

yum -y install ceph-common

If the K8S nodes do not yet have the Ceph client configuration files, copy them over from the Ceph cluster:
scp /etc/ceph/ceph.c* root@192.168.171.11:/etc/ceph/
scp /etc/ceph/ceph.c* root@192.168.171.12:/etc/ceph/
scp /etc/ceph/ceph.c* root@192.168.171.13:/etc/ceph/

Now the Ceph cluster status can be queried from the K8S nodes.
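
For example, a quick sanity check from a K8S node after the files have been copied (hostnames and output will differ in your environment):

ceph -s            # overall cluster health, as seen from the K8S node
ceph osd pool ls   # pools visible with the admin keyring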

2. Create an OSD pool (on a Ceph mon or admin node):
ceph osd pool create kube 128 128 
ceph osd pool ls

3. Create the user that K8S will use to access Ceph (on a Ceph mon or admin node):
ceph auth get-or-create client.kube mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=kube' -o ceph.client.kube.keyring

4. Show the keys (on a Ceph mon or admin node):
ceph auth get-key client.admin
ceph auth get-key client.kube

5. Create the admin secret. Replace the key value below with the key obtained for client.admin:
kubectl create secret generic ceph-secret --type="kubernetes.io/rbd" \
--from-literal=key=AQCeEwpeo+I8HRAAnBphr8lyGc6+JBT7jU7rgA== \
--namespace=kube-system

6. In the default namespace, create the secret that PVCs will use to access Ceph. Replace the key value below with the key obtained for client.kube:
kubectl create secret generic ceph-user-secret --type="kubernetes.io/rbd" \
--from-literal=key=AQC3OhNeYrGQLRAA8Xd/e1NUto/fXnGEk6hVMg== \
--namespace=default
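
If the machine running kubectl also has Ceph admin access, an equivalent variant (this assumes such a combined host, which is not required by this guide) is to substitute the keys inline instead of pasting them:

kubectl create secret generic ceph-secret --type="kubernetes.io/rbd" \
  --from-literal=key=$(ceph auth get-key client.admin) \
  --namespace=kube-system
kubectl create secret generic ceph-user-secret --type="kubernetes.io/rbd" \
  --from-literal=key=$(ceph auth get-key client.kube) \
  --namespace=default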


# View the secrets
kubectl get secret ceph-user-secret -o yaml
kubectl get secret ceph-secret -n kube-system -o yaml

3.1.3. Configuring the StorageClass

cat >storageclass-ceph-rdb.yaml<<EOF
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: dynamic-ceph-rdb
provisioner: ceph.com/rbd
parameters:
  monitors: 192.168.171.135:6789,192.168.171.136:6789,192.168.171.137:6789
  adminId: admin
  adminSecretName: ceph-secret
  adminSecretNamespace: kube-system
  pool: kube
  userId: kube
  userSecretName: ceph-user-secret
  fsType: ext4
  imageFormat: "2"
  imageFeatures: "layering"
EOF
3.1.4. Apply the YAML
kubectl apply -f storageclass-ceph-rdb.yaml
3.1.5. View the StorageClass
kubectl get sc

4. Testing

1. Create a PVC for testing:
cat >ceph-rdb-pvc-test.yaml<<EOF
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ceph-rdb-claim
spec:
  accessModes:     
    - ReadWriteOnce
  storageClassName: dynamic-ceph-rdb
  resources:
    requests:
      storage: 2Gi
EOF

kubectl apply -f ceph-rdb-pvc-test.yaml
2. Check:
[root@k8s-master1 ~]# kubectl get pvc
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS       AGE
ceph-rdb-claim   Bound    pvc-e5f13194-67db-4d98-b69c-5a4272c2498d   2Gi        RWO            dynamic-ceph-rdb   7m10s

[root@k8s-master1 ~]# kubectl get pv

NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS        CLAIM                                      STORAGECLASS          REASON   AGE
pvc-e5f13194-67db-4d98-b69c-5a4272c2498d   2Gi        RWO            Delete           Bound         default/ceph-rdb-claim                     dynamic-ceph-rdb       48s
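
Optionally, confirm on the Ceph side that the provisioner really created an RBD image in the kube pool (run on a mon or admin node; image names will differ in your cluster):

rbd ls -p kube                               # one image per dynamically provisioned PV
rbd info kube/$(rbd ls -p kube | head -1)    # size, features, etc. of the first image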
3. Create an nginx pod to test the mount:
cat >nginx-pod.yaml<<EOF
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod1
  labels:
    name: nginx-pod1
spec:
  containers:
  - name: nginx-pod1
    image: nginx:alpine
    ports:
    - name: web
      containerPort: 80
    volumeMounts:
    - name: ceph-rdb
      mountPath: /usr/share/nginx/html
  volumes:
  - name: ceph-rdb
    persistentVolumeClaim:
      claimName: ceph-rdb-claim
EOF

kubectl apply -f nginx-pod.yaml
4. Check:
kubectl get pods -o wide
5. Modify the file content:
kubectl exec -ti nginx-pod1 -- /bin/sh -c 'echo Hello World from Ceph RBD!!! > /usr/share/nginx/html/index.html'   # content for the access test
6. Access test:
POD_ID=$(kubectl get pods -o wide | grep nginx-pod1 | awk '{print $6}')

curl http://$POD_ID   # test access
7. Clean up:
kubectl delete -f nginx-pod.yaml
kubectl delete -f ceph-rdb-pvc-test.yaml


5. Using CephFS as a Persistent Data Volume for Pods

CephFS supports all three K8S PV access modes: ReadWriteOnce, ReadOnlyMany, and ReadWriteMany.
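
Because CephFS supports ReadWriteMany, a single claim can be mounted read-write by several pods at once, which RBD cannot do. A minimal sketch of such a claim, assuming the dynamic-cephfs StorageClass created later in this section (the test below uses ReadWriteOnce; this RWX claim is only illustrative):

cat >cephfs-rwx-pvc.yaml<<EOF
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: cephfs-rwx-claim           # illustrative name, not used in the steps below
spec:
  accessModes:
    - ReadWriteMany                # shared read-write access across multiple pods
  storageClassName: dynamic-cephfs
  resources:
    requests:
      storage: 1Gi
EOF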

5.1. Creating the CephFS Pools on the Ceph Side

1. Perform the following on a Ceph mon or admin node. CephFS needs two pools, one for data and one for metadata:

ceph osd pool create fs_data 128
ceph osd pool create fs_metadata 128
ceph osd lspools
2. Create a CephFS:
ceph fs new cephfs fs_metadata fs_data
3. Check:
ceph fs ls
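
A CephFS filesystem only becomes usable once at least one MDS daemon is running; if mounts hang later on, it is worth checking the MDS state (assuming the filesystem name cephfs used above):

ceph mds stat            # MDS daemons and their states
ceph fs status cephfs    # pools, MDS, and usage for the new filesystem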

5.2. Deploying cephfs-provisioner

1. Use the community-provided cephfs-provisioner:

cat >external-storage-cephfs-provisioner.yaml<<EOF
apiVersion: v1
kind: Namespace
metadata:
   name: cephfs
   labels:
     name: cephfs
 
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cephfs-provisioner
  namespace: cephfs
 
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cephfs-provisioner
  namespace: cephfs
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["create", "get", "delete"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
 
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cephfs-provisioner
  namespace: cephfs
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["kube-dns","coredns"]
    verbs: ["list", "get"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "create", "delete"]
  - apiGroups: ["policy"]
    resourceNames: ["cephfs-provisioner"]
    resources: ["podsecuritypolicies"]
    verbs: ["use"]
 
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cephfs-provisioner
  namespace: cephfs
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: cephfs-provisioner
subjects:
- kind: ServiceAccount
  name: cephfs-provisioner
  namespace: cephfs
 
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cephfs-provisioner
subjects:
  - kind: ServiceAccount
    name: cephfs-provisioner
    namespace: cephfs
roleRef:
  kind: ClusterRole
  name: cephfs-provisioner
  apiGroup: rbac.authorization.k8s.io
 
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cephfs-provisioner
  namespace: cephfs
spec:
  selector:
    matchLabels:
      app: cephfs-provisioner
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: cephfs-provisioner
    spec:
      containers:
      - name: cephfs-provisioner
        image: "quay.io/external_storage/cephfs-provisioner:latest"
        env:
        - name: PROVISIONER_NAME
          value: ceph.com/cephfs
        command:
        - "/usr/local/bin/cephfs-provisioner"
        args:
        - "-id=cephfs-provisioner-1"
        - "-disable-ceph-namespace-isolation=true"
      serviceAccount: cephfs-provisioner
EOF

kubectl apply -f external-storage-cephfs-provisioner.yaml
2. Check the status; wait until the pod is Running before proceeding:
kubectl get pod -n cephfs

5.3. Configuring the StorageClass

1. Show the key (on a Ceph mon or admin node):

ceph auth get-key client.admin
2. Create the admin secret. Replace the key value below with the key obtained for client.admin. If it was already created while testing the Ceph RBD setup, skip this step:
kubectl create secret generic ceph-secret --type="kubernetes.io/rbd" \
--from-literal=key=AQCeEwpeo+I8HRAAnBphr8lyGc6+JBT7jU7rgA== \
--namespace=kube-system
3. View the secret:
kubectl get secret ceph-secret -n kube-system -o yaml
4. Configure the StorageClass:
cat >storageclass-cephfs.yaml<<EOF
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: dynamic-cephfs
provisioner: ceph.com/cephfs
parameters:
    monitors: 192.168.171.135:6789,192.168.171.136:6789,192.168.171.137:6789
    adminId: admin
    adminSecretName: ceph-secret
    adminSecretNamespace: "kube-system"
    claimRoot: /volumes/kubernetes
EOF
5. Create it:
kubectl apply -f storageclass-cephfs.yaml
6. Check:
kubectl get sc

5.4. Testing

1. Create a PVC for testing:
cat >cephfs-pvc-test.yaml<<EOF
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: cephfs-claim
spec:
  accessModes:     
    - ReadWriteOnce
  storageClassName: dynamic-cephfs
  resources:
    requests:
      storage: 2Gi
EOF

kubectl apply -f cephfs-pvc-test.yaml
2. Check:
[root@k8s-master1 ceph-all]# kubectl get pvc
NAME           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS     AGE
cephfs-claim   Bound    pvc-50ebdaab-c6ad-47ad-86cb-149327481a67   2Gi        RWO            dynamic-cephfs   4s
[root@k8s-master1 ceph-all]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM                    STORAGECLASS       REASON   AGE
pvc-50ebdaab-c6ad-47ad-86cb-149327481a67   2Gi        RWO            Delete           Bound      default/cephfs-claim     dynamic-cephfs              6s
3. Create an nginx pod to test the mount:
cat >nginx-pod.yaml<<EOF
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod2
  labels:
    name: nginx-pod2
spec:
  containers:
  - name: nginx-pod2
    image: nginx:alpine
    ports:
    - name: web
      containerPort: 80
    volumeMounts:
    - name: cephfs
      mountPath: /usr/share/nginx/html
  volumes:
  - name: cephfs
    persistentVolumeClaim:
      claimName: cephfs-claim
EOF

kubectl apply -f nginx-pod.yaml
4. Check:
kubectl get pods -o wide
5. Modify the file content:
kubectl exec -ti nginx-pod2 -- /bin/sh -c 'echo Hello World from CephFS!!! > /usr/share/nginx/html/index.html'   # content for the access test
6. Access the pod to test:
POD_ID=$(kubectl get pods -o wide | grep nginx-pod2 | awk '{print $6}')
curl http://$POD_ID
7. Clean up:
kubectl delete -f nginx-pod.yaml
kubectl delete -f cephfs-pvc-test.yaml

