Ceph persistent storage with Kubernetes using Cephfs

In the previous tutorial, we discussed how to configure Kubernetes persistent storage using Ceph RBD. As promised, this article focuses on configuring Kubernetes to use an external Ceph file system (CephFS) to store persistent data for applications running in a Kubernetes container environment.

If you are not familiar with it, the Ceph File System (CephFS) is a POSIX-compliant file system built on top of Ceph's distributed object store, RADOS. CephFS is designed to provide highly available, multi-purpose, and high-performance file storage for a wide variety of applications.

This tutorial will not delve into Kubernetes and Ceph concepts. It serves as a simple step-by-step guide to configuring Ceph and Kubernetes so that you can use CephFS to dynamically provision persistent volumes on a Ceph backend. Follow the steps below to get started.

Ceph persistent storage with Kubernetes using Cephfs

Before starting this exercise, you should have a working external Ceph cluster. Most Kubernetes deployments using Ceph will involve using Rook. This guide assumes you have a Ceph storage cluster deployed with Ceph Ansible, ceph-deploy, or manually.

How to install Ceph Storage Cluster on Ubuntu 18.04 LTS

We will update this post with links to guides for installing Ceph on other Linux distributions.

Step 1: Deploy Cephfs Provisioner on Kubernetes

Log in to your Kubernetes cluster and create a manifest file for deploying the CephFS provisioner, an out-of-tree dynamic provisioner available for Kubernetes 1.5+.

$ vim cephfs-provisioner.yml

Add the following to the file. Note that our deployment uses RBAC, so we create the cluster role and bindings before creating the service account and deploying the CephFS provisioner.

---
kind: Namespace
apiVersion: v1
metadata:
  name: cephfs

---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cephfs-provisioner
  namespace: cephfs
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["kube-dns","coredns"]
    verbs: ["list", "get"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cephfs-provisioner
  namespace: cephfs
subjects:
  - kind: ServiceAccount
    name: cephfs-provisioner
    namespace: cephfs
roleRef:
  kind: ClusterRole
  name: cephfs-provisioner
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cephfs-provisioner
  namespace: cephfs
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["create", "get", "delete"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cephfs-provisioner
  namespace: cephfs
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: cephfs-provisioner
subjects:
- kind: ServiceAccount
  name: cephfs-provisioner
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cephfs-provisioner
  namespace: cephfs
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cephfs-provisioner
  namespace: cephfs
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cephfs-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: cephfs-provisioner
    spec:
      containers:
      - name: cephfs-provisioner
        image: "quay.io/external_storage/cephfs-provisioner:latest"
        env:
        - name: PROVISIONER_NAME
          value: ceph.com/cephfs
        - name: PROVISIONER_SECRET_NAMESPACE
          value: cephfs
        command:
        - "/usr/local/bin/cephfs-provisioner"
        args:
        - "-id=cephfs-provisioner-1"
      serviceAccount: cephfs-provisioner

Apply the manifest:

$ kubectl apply -f cephfs-provisioner.yml
namespace/cephfs created
clusterrole.rbac.authorization.k8s.io/cephfs-provisioner created
clusterrolebinding.rbac.authorization.k8s.io/cephfs-provisioner created
role.rbac.authorization.k8s.io/cephfs-provisioner created
rolebinding.rbac.authorization.k8s.io/cephfs-provisioner created
serviceaccount/cephfs-provisioner created
deployment.apps/cephfs-provisioner created

Confirm that the CephFS volume provisioner pod is running.

$ kubectl get pods -l app=cephfs-provisioner -n cephfs
NAME                                  READY   STATUS    RESTARTS   AGE
cephfs-provisioner-7b77478cb8-7nnxs   1/1     Running   0          84s
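
If the provisioner pod does not reach the Running state, its logs are the first place to check (a general troubleshooting step, not part of the original walkthrough):

$ kubectl logs -l app=cephfs-provisioner -n cephfs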

Step 2: Obtain the Ceph Administrator Key and Create Secret on Kubernetes

Log in to your Ceph cluster and obtain an admin key for the CephFS provisioner to use.

$ sudo ceph auth get-key client.admin

Save the value of the admin user key printed by the above command. We will add the key as a secret in Kubernetes.

$ kubectl create secret generic ceph-admin-secret \
    --from-literal=key='<key-value>' \
    --namespace=cephfs

Where <key-value> is your Ceph administrator key. You can confirm the creation with the following command.

$ kubectl get secrets ceph-admin-secret -n cephfs
NAME                TYPE     DATA   AGE
ceph-admin-secret   Opaque   1      6s
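
If kubectl and the Ceph admin keyring happen to be available on the same host, you can also create the secret in one step by substituting the key directly (a convenience variant under that assumption; adjust to your environment):

$ kubectl create secret generic ceph-admin-secret \
    --from-literal=key="$(sudo ceph auth get-key client.admin)" \
    --namespace=cephfs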

Step 3: Create a Ceph pool for Kubernetes and client keys

A Ceph file system requires at least two RADOS pools:

  • Data
  • Metadata

The metadata pool will generally hold at most a few gigabytes of data, so a smaller number of placement groups (PGs) is recommended for it; in practice, 64 or 128 PGs are commonly used in large clusters.

Let’s create the Ceph OSD pool for Kubernetes:

$ sudo ceph osd pool create cephfs_data 128 128
$ sudo ceph osd pool create cephfs_metadata 64 64
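
As an optional sanity check, confirm that both pools now exist:

$ sudo ceph osd lspools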

Create the Ceph file system on the pools:

$ sudo ceph fs new <fs_name> <metadata> <data>
$ sudo ceph fs new cephfs cephfs_metadata cephfs_data

Confirm the Ceph file system is created:

$ sudo ceph fs ls
 name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]

You can also confirm this from the Ceph dashboard UI.
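
Another optional check is to verify that an MDS daemon is active and serving the new file system:

$ sudo ceph mds stat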

Step 4: Create Cephfs Storage Class on Kubernetes

A StorageClass provides a way to describe the "classes" of storage you offer in Kubernetes. We will create a StorageClass called cephfs.

$ vim cephfs-sc.yml

Add the following content to the file:

---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: cephfs
  namespace: cephfs
provisioner: ceph.com/cephfs
parameters:
    monitors: 10.10.10.11:6789,10.10.10.12:6789,10.10.10.13:6789
    adminId: admin
    adminSecretName: ceph-admin-secret
    adminSecretNamespace: cephfs
    claimRoot: /pvc-volumes

where:

  • cephfs is the name of the StorageClass to be created.
  • 10.10.10.11, 10.10.10.12, and 10.10.10.13 are the IP addresses of your Ceph monitors. You can list them using:
$ sudo ceph -s
  cluster:
    id:     7795990b-7c8c-43f4-b648-d284ef2a0aba
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum cephmon01,cephmon02,cephmon03 (age 32h)
    mgr: cephmon01(active, since 30h), standbys: cephmon02
    mds: cephfs:1 {0=cephmon01=up:active} 1 up:standby
    osd: 9 osds: 9 up (since 32h), 9 in (since 32h)
    rgw: 3 daemons active (cephmon01, cephmon02, cephmon03)
 
  data:
    pools:   8 pools, 618 pgs
    objects: 250 objects, 76 KiB
    usage:   9.6 GiB used, 2.6 TiB / 2.6 TiB avail
    pgs:     618 active+clean
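
The ceph -s output lists monitors by hostname. If you need their IP:port values for the monitors parameter, you can dump the monitor map instead (an optional helper command):

$ sudo ceph mon dump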

After modifying the file with the correct Ceph monitor values, create a StorageClass using the kubectl command.

$ kubectl apply -f cephfs-sc.yml 
storageclass.storage.k8s.io/cephfs created

List available StorageClasses:

$ kubectl get sc
NAME       PROVISIONER       RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
ceph-rbd   ceph.com/rbd      Delete          Immediate           false                  25h
cephfs     ceph.com/cephfs   Delete          Immediate           false                  2m23s
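
Optionally, you can mark the cephfs StorageClass as the cluster default so that claims without an explicit storageClassName use it (not required for this guide):

$ kubectl patch storageclass cephfs \
    -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'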

Step 5: Create Test Claims and Pods on Kubernetes

To confirm that everything works, let’s create a test persistent volume claim.

$ vim cephfs-claim.yml
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: cephfs-claim1
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: cephfs
  resources:
    requests:
      storage: 1Gi

Apply the manifest file:

$ kubectl  apply -f cephfs-claim.yml
persistentvolumeclaim/cephfs-claim1 created

If the binding was successful, the claim should show a Bound status.

$  kubectl get pvc
NAME              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
ceph-rbd-claim1   Bound    pvc-c6f4399d-43cf-4fc1-ba14-cc22f5c85304   1Gi        RWO            ceph-rbd       25h
cephfs-claim1     Bound    pvc-1bfa81b6-2c0b-47fa-9656-92dc52f69c52   1Gi        RWO            cephfs         87s
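
The provisioner also creates a backing PersistentVolume for each bound claim; if you want to see the volumes it manages, list them with:

$ kubectl get pv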

We can then deploy a test pod using the claim we created. First create the pod manifest file:

$ vim cephfs-test-pod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
  - name: test-pod
    image: gcr.io/google_containers/busybox:1.24
    command:
      - "/bin/sh"
    args:
      - "-c"
      - "touch /mnt/SUCCESS && exit 0 || exit 1"
    volumeMounts:
      - name: pvc
        mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: pvc
      persistentVolumeClaim:
        claimName: cephfs-claim1

Create the pod:

$ kubectl apply -f cephfs-test-pod.yaml
pod/test-pod created

Confirm that the test pod completed successfully:

$ kubectl get  pods test-pod
NAME              READY   STATUS    RESTARTS   AGE
test-pod   0/1     Completed   0          2m28s
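
Once you have confirmed that provisioning works, you can remove the test resources (optional cleanup):

$ kubectl delete -f cephfs-test-pod.yaml
$ kubectl delete -f cephfs-claim.yml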

Enjoy dynamic persistent volume provisioning with CephFS on Kubernetes.

Similar guides:

Kubernetes persistent storage using Ceph RBD

How to configure Kubernetes dynamic volume configuration with Heketi and GlusterFS
