Kubernetes persistent storage using Ceph RBD

How do you use Ceph RBD for dynamic persistent volume provisioning on Kubernetes? Kubernetes (K8s) is an open-source system for automating the deployment, scaling, and management of containerized applications. One of the key requirements for running stateful applications in Kubernetes is data persistence. In this tutorial, we will look at how to create a storage class on Kubernetes that provisions persistent volumes from an external Ceph cluster using RBD (Ceph Block Device).

Ceph block devices are thin-provisioned, resizable, and store data striped across multiple OSDs in the Ceph cluster. Ceph block devices leverage RADOS capabilities such as snapshots, replication, and consistency. Ceph's RADOS Block Device (RBD) interacts with OSDs using kernel modules or the librbd library.
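
Outside of Kubernetes, an RBD image can be created and mapped by hand, which is a handy way to confirm that your cluster and client are working. Below is a minimal sketch, assuming a pool named k8s already exists (we create one in Step 3) and that the rbd client is installed on the node; the image name demo-image is just an example.

# Create a 1 GiB test image (layering only, so older kernel clients can map it)
$ sudo rbd create k8s/demo-image --size 1024 --image-feature layering

# Map it through the kernel RBD module, inspect it, then clean up
$ sudo rbd map k8s/demo-image
$ sudo rbd info k8s/demo-image
$ sudo rbd unmap k8s/demo-image
$ sudo rbd rm k8s/demo-image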

Before starting this exercise, you need a working external Ceph cluster. Most Kubernetes deployments that use Ceph rely on Rook; this guide instead assumes you have deployed a Ceph storage cluster with Ceph Ansible, ceph-deploy, or manually.

How to install Ceph Storage Cluster on Ubuntu 18.04 LTS

Step 1: Deploy Ceph Provisioner on Kubernetes

Log in to your Kubernetes cluster and create a manifest file for deploying the RBD provisioner, an out-of-tree dynamic provisioner for Kubernetes 1.5+.

$ vim ceph-rbd-provisioner.yml

Add the following to the file. Note that our deployment uses RBAC, so we create the cluster role and bindings before creating the service account and deploying the Ceph RBD provisioner.

---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-provisioner
  namespace: kube-system
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["kube-dns","coredns"]
    verbs: ["list", "get"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]

---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-provisioner
  namespace: kube-system
subjects:
  - kind: ServiceAccount
    name: rbd-provisioner
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: rbd-provisioner
  apiGroup: rbac.authorization.k8s.io

---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: rbd-provisioner
  namespace: kube-system
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get"]
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: rbd-provisioner
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: rbd-provisioner
subjects:
- kind: ServiceAccount
  name: rbd-provisioner
  namespace: kube-system

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: rbd-provisioner
  namespace: kube-system

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rbd-provisioner
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rbd-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: rbd-provisioner
    spec:
      containers:
      - name: rbd-provisioner
        image: "quay.io/external_storage/rbd-provisioner:latest"
        env:
        - name: PROVISIONER_NAME
          value: ceph.com/rbd
      serviceAccount: rbd-provisioner

Apply the file to create the resources:

$ kubectl apply -f ceph-rbd-provisioner.yml
clusterrole.rbac.authorization.k8s.io/rbd-provisioner created
clusterrolebinding.rbac.authorization.k8s.io/rbd-provisioner created
role.rbac.authorization.k8s.io/rbd-provisioner created
rolebinding.rbac.authorization.k8s.io/rbd-provisioner created
serviceaccount/rbd-provisioner created
deployment.apps/rbd-provisioner created

Confirm that the RBD volume provisioner pod is running.

$ kubectl get pods -l app=rbd-provisioner -n kube-system
NAME                               READY   STATUS    RESTARTS   AGE
rbd-provisioner-75b85f85bd-p9b8c   1/1     Running   0          3m45s
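
If the pod is not running, or when you need to troubleshoot provisioning later on, the provisioner logs are the first place to look. A quick check using the label from the Deployment above:

$ kubectl -n kube-system logs -l app=rbd-provisioner --tail=20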

Step 2: Obtain the Ceph Administrator Key and Create Secret on Kubernetes

Log in to your Ceph cluster and obtain the admin key for use by the RBD provisioner.

$ sudo ceph auth get-key client.admin

Save the value of the admin user key printed out by the above command. We will add the key as a secret in Kubernetes.

$ kubectl create secret generic ceph-admin-secret \
    --type="kubernetes.io/rbd" \
    --from-literal=key='<admin-key>' \
    --namespace=kube-system

Where <admin-key> is your Ceph admin key. You can confirm the creation with the following command.

$ kubectl get secrets ceph-admin-secret -n kube-system 
NAME                TYPE                DATA   AGE
ceph-admin-secret   kubernetes.io/rbd   1      5m
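
To double-check that the key was stored correctly, you can decode it from the secret and compare it with the output of ceph auth get-key. This is a quick sanity check, not strictly required:

$ kubectl -n kube-system get secret ceph-admin-secret -o jsonpath='{.data.key}' | base64 --decode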

Step 3: Create a Ceph pool for Kubernetes and client keys

The next step is to create a new Ceph pool for Kubernetes.

$ sudo ceph osd pool create <pool-name> <pg-number>

# Example
$ sudo ceph osd pool create k8s 100

For more details, please check our guide: Creating a Pool in a Ceph Storage Cluster

Then create a new client key that can access the created pool.

$ sudo ceph auth add client.kube mon 'allow r' osd 'allow rwx pool=<pool-name>'

# Example
$ sudo ceph auth add client.kube mon 'allow r' osd 'allow rwx pool=k8s'

Where k8s is the name of the pool created in Ceph.

You can then associate the pool with the rbd application and initialize it.

$ sudo ceph osd pool application enable <pool-name> rbd
$ sudo rbd pool init <pool-name>
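
At this point you can confirm, on the Ceph side, that the pool is tagged for rbd and that the client capabilities look right (output formats vary slightly between Ceph releases):

$ sudo ceph osd pool ls detail | grep k8s
$ sudo ceph auth get client.kube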

Get the client key on Ceph.

$ sudo ceph auth get-key client.kube

Create the client secret on Kubernetes:

$ kubectl create secret generic ceph-k8s-secret \
  --type="kubernetes.io/rbd" \
  --from-literal=key='<client-key>' \
  --namespace=kube-system

Where <client-key> is your Ceph client key.

Step 4: Create RBD storage class

A StorageClass provides a way to describe the "classes" of storage you offer in Kubernetes. We will create a storage class named ceph-rbd.

$ vim ceph-rbd-sc.yml

Add the following to the file:

---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ceph-rbd
provisioner: ceph.com/rbd
parameters:
  monitors: 10.10.10.11:6789, 10.10.10.12:6789, 10.10.10.13:6789
  pool: k8s-uat
  adminId: admin
  adminSecretNamespace: kube-system
  adminSecretName: ceph-admin-secret
  userId: kube
  userSecretNamespace: kube-system
  userSecretName: ceph-k8s-secret
  imageFormat: "2"
  imageFeatures: layering

where:

  • ceph-rbd is the name of the StorageClass to be created.
  • k8s-uat is the Ceph RBD pool to use. Replace it with the pool you created in Step 3 (k8s in our example).
  • 10.10.10.11, 10.10.10.12 and 10.10.10.13 are the IP addresses of the Ceph Monitors. You can list them with the following command:
$ sudo ceph -s
  cluster:
    id:     7795990b-7c8c-43f4-b648-d284ef2a0aba
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum cephmon01,cephmon02,cephmon03 (age 32h)
    mgr: cephmon01(active, since 30h), standbys: cephmon02
    mds: cephfs:1 {0=cephmon01=up:active} 1 up:standby
    osd: 9 osds: 9 up (since 32h), 9 in (since 32h)
    rgw: 3 daemons active (cephmon01, cephmon02, cephmon03)
 
  data:
    pools:   8 pools, 618 pgs
    objects: 250 objects, 76 KiB
    usage:   9.6 GiB used, 2.6 TiB / 2.6 TiB avail
    pgs:     618 active+clean
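
The ceph -s output lists the monitor names rather than their addresses. If you want the monitor IP:port pairs directly, ceph mon dump prints them (the exact output format depends on your Ceph release):

$ sudo ceph mon dump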

After modifying the file with your monitor addresses and pool name, apply the configuration:

$ kubectl apply -f ceph-rbd-sc.yml
storageclass.storage.k8s.io/ceph-rbd created

List available StorageClasses:

kubectl get sc
NAME       PROVISIONER       RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
ceph-rbd   ceph.com/rbd      Delete          Immediate           false                  17s
cephfs     ceph.com/cephfs   Delete          Immediate           false                  18d
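
Optionally, if you want claims that do not specify a storageClassName to land on ceph-rbd, you can mark it as the default StorageClass. This uses the standard Kubernetes default-class annotation and is shown here only as a sketch:

$ kubectl patch storageclass ceph-rbd \
  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'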

Step 5: Create Test Claims and Pods on Kubernetes

To confirm that everything is working, let's create a test persistent volume claim (PVC).

$ vim ceph-rbd-claim.yml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ceph-rbd-claim1
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ceph-rbd
  resources:
    requests:
      storage: 1Gi

Apply the manifest file to create the claim.

$ kubectl apply -f ceph-rbd-claim.yml
persistentvolumeclaim/ceph-rbd-claim1 created

If binding succeeds, the claim should show a Bound status.

$ kubectl get pvc
NAME              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
ceph-rbd-claim1   Bound    pvc-c6f4399d-43cf-4fc1-ba14-cc22f5c85304   1Gi        RWO            ceph-rbd       43s

Not bad! We can create persistent volume claims dynamically on the Ceph RBD backend, without having to manually create a persistent volume before making a claim. How cool is that?
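
Behind the scenes, the provisioner created a PersistentVolume and a backing RBD image for the claim. You can inspect both sides; the pool name (k8s in our example) and the image name shown by rbd ls depend on your setup:

# On the Kubernetes side
$ kubectl get pv

# On the Ceph cluster
$ sudo rbd ls k8s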

We can then deploy a test pod that uses the claim. First create a manifest file:

$ vim rbd-test-pod.yaml

Add:

---
kind: Pod
apiVersion: v1
metadata:
  name: rbd-test-pod
spec:
  containers:
  - name: rbd-test-pod
    image: busybox
    command:
      - "/bin/sh"
    args:
      - "-c"
      - "touch /mnt/RBD-SUCCESS && exit 0 || exit 1"
    volumeMounts:
      - name: pvc
        mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: pvc
      persistentVolumeClaim:
        claimName: ceph-rbd-claim1 

Create the pod:

$ kubectl apply -f rbd-test-pod.yaml
pod/rbd-test-pod created

If you describe the pod, you can see that the volume was attached successfully.

$ kubectl describe pod rbd-test-pod
.....
Events:
  Type    Reason                  Age        From                     Message
  ----    ------                  ----       ----                     -------
  Normal  Scheduled                 default-scheduler        Successfully assigned default/rbd-test-pod to rke-worker-02
  Normal  SuccessfulAttachVolume  3s         attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-c6f4399d-43cf-4fc1-ba14-cc22f5c85304"

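Because the test pod only touches a file and exits, it will end up in a Completed state rather than keep running. When you are done testing you can clean up; note that since the StorageClass reclaim policy is Delete, removing the claim also removes the provisioned volume and its backing RBD image:

$ kubectl get pod rbd-test-pod

# Clean up the test resources when finished
$ kubectl delete pod rbd-test-pod
$ kubectl delete pvc ceph-rbd-claim1
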
If you have a Ceph dashboard, you can see the new block image created.

Our next guide will show how to use the Ceph file system (CephFS) for dynamic persistent volume provisioning on Kubernetes.

Tags:

  • Use Ceph RBD on Kubernetes
  • Kubernetes dynamic storage configuration using Ceph RBD
  • Use external Ceph on Kubernetes
  • Kubernetes and Ceph RBD

Related guides:

How to configure Kubernetes dynamic volume configuration with Heketi and GlusterFS

Set up GlusterFS storage with Heketi on CentOS 8 / CentOS 7

The best storage solution for Kubernetes and Docker containers
