EKS Kubernetes persistent storage using the Amazon EFS storage service


In this tutorial, we will discuss how to configure persistent storage for your EKS Kubernetes cluster using the Amazon EFS service. EFS will serve as the storage backend for the persistent volume claims used by stateful applications. A StorageClass provides a way for administrators to describe the “classes” of storage they offer and enables dynamic provisioning of persistent volumes.

In Kubernetes, a PersistentVolume (PV) is a piece of storage in the cluster, and a PersistentVolumeClaim (PVC) is a request for storage by a user (usually a Pod). You need a working EKS cluster before you can use this guide to set up persistent storage for your containerized workloads.

Prerequisites:


  • EKS cluster: set up with eksctl (a minimal example follows below)
  • AWS CLI
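
If you do not have a cluster yet, eksctl can create one with a single command. This is a minimal sketch; the cluster name and region match the ones used in this guide, while the node count and instance type are example values, not requirements:

# Minimal sketch: create an EKS cluster with eksctl
# (node count and instance type are illustrative)
eksctl create cluster \
  --name prod-eks-cluster \
  --region eu-west-1 \
  --nodes 3 \
  --node-type t3.medium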

This is the name of the EKS cluster I will use in this tutorial.

$ eksctl get cluster
NAME			REGION
prod-eks-cluster	eu-west-1

Save the cluster name as a variable, which will be used in the remaining steps.

EKS_CLUSTER="prod-eks-cluster"

Use the EFS CSI driver to create a persistent volume

The Amazon Elastic File System Container Storage Interface (CSI) driver implements the CSI specification, allowing container orchestrators to manage the lifecycle of Amazon EFS file systems.

Step 1: Create an Amazon EFS file system

The Amazon EFS CSI driver supports Amazon EFS access points, which are application-specific entry points into an EFS file system that make it easier to share a file system among multiple Pods.

You can perform these operations from the AWS console or from a terminal. I will use the AWS CLI for all operations.

Find the VPC ID of your Amazon EKS cluster:

EKS_CLUSTER="prod-eks-cluster"
EKS_VPC_ID=$(aws eks describe-cluster --name $EKS_CLUSTER --query "cluster.resourcesVpcConfig.vpcId" --output text)

Confirm you have a valid VPC ID:

$ echo $EKS_VPC_ID
vpc-019a6458a973ace2b

Find the CIDR range of the VPC of the cluster:

EKS_VPC_CIDR=$(aws ec2 describe-vpcs --vpc-ids $EKS_VPC_ID --query "Vpcs[].CidrBlock" --output text)

Confirm VPC CIDR:

$ echo $EKS_VPC_CIDR
192.168.0.0/16

Create a security group that allows inbound NFS traffic to the Amazon EFS mount point:

aws ec2 create-security-group --group-name efs-nfs-sg --description "Allow NFS traffic for EFS" --vpc-id $EKS_VPC_ID

Write down the security group ID. Mine is:

{
    "GroupId": "sg-0fac73a0d7d943862"
}
# You can check with
$ aws ec2 describe-security-groups --query "SecurityGroups[*].{Name:GroupName,ID:GroupId}"

Add the rule to your security group:

SG_ID="sg-0fac73a0d7d943862"
aws ec2 authorize-security-group-ingress --group-id $SG_ID --protocol tcp --port 2049 --cidr $EKS_VPC_CIDR

To view the changes to the security group, run the describe-security-groups command:

$ aws ec2 describe-security-groups --group-ids $SG_ID
{
    "SecurityGroups": [
        {
            "Description": "Allow NFS traffic for EFS",
            "GroupName": "efs-nfs-sg",
            "IpPermissions": [
                {
                    "FromPort": 2049,
                    "IpProtocol": "tcp",
                    "IpRanges": [
                        {
                            "CidrIp": "192.168.0.0/16"
                        }
                    ],
                    "Ipv6Ranges": [],
                    "PrefixListIds": [],
                    "ToPort": 2049,
                    "UserIdGroupPairs": []
                }
            ],
            "OwnerId": "253859766502",
            "GroupId": "sg-0fac73a0d7d943862",
            "IpPermissionsEgress": [
                {
                    "IpProtocol": "-1",
                    "IpRanges": [
                        {
                            "CidrIp": "0.0.0.0/0"
                        }
                    ],
                    "Ipv6Ranges": [],
                    "PrefixListIds": [],
                    "UserIdGroupPairs": []
                }
            ],
            "VpcId": "vpc-019a6458a973ace2b"
        }
    ]
}

Create an Amazon EFS file system for your Amazon EKS cluster:

# Not encrypted
$ aws efs create-file-system --region eu-west-1

# Encrypted EFS file system
$ aws efs create-file-system --encrypted --region eu-west-1

Note the file system ID:

{
    "OwnerId": "253759766542",
    "CreationToken": "c16c4603-c7ac-408f-ac4a-75a683ed2a29",
    "FileSystemId": "fs-22ac06e8",
    "FileSystemArn": "arn:aws:elasticfilesystem:eu-west-1:253759766542:file-system/fs-22ac06e8",
    "CreationTime": "2020-08-16T15:17:18+03:00",
    "LifeCycleState": "creating",
    "NumberOfMountTargets": 0,
    "SizeInBytes": {
        "Value": 0,
        "ValueInIA": 0,
        "ValueInStandard": 0
    },
    "PerformanceMode": "generalPurpose",
    "Encrypted": true,
    "KmsKeyId": "arn:aws:kms:eu-west-1:253759766542:key/6c9b725f-b86d-41c2-b804-1685ef43f620",
    "ThroughputMode": "bursting",
    "Tags": []
}

The new file system is also visible in the EFS console.
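
The file system starts out in the creating state. Before you create mount targets in the next step, you can poll it until LifeCycleState reports available; a small sketch using the file system ID from above:

EFS_ID="fs-22ac06e8"
# Wait until the file system leaves the "creating" state
while [ "$(aws efs describe-file-systems --file-system-id $EFS_ID \
    --query 'FileSystems[0].LifeCycleState' --output text)" != "available" ]; do
  sleep 5
done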

Step 2: Create EFS mount targets

Get the subnets in the VPC where your EKS worker nodes run. In my case, all nodes run in dedicated private subnets.

EKS_VPC_ID=$(aws eks describe-cluster --name $EKS_CLUSTER --query "cluster.resourcesVpcConfig.vpcId" --output text)
aws ec2 describe-subnets --filter Name=vpc-id,Values=$EKS_VPC_ID --query 'Subnets[?MapPublicIpOnLaunch==`false`].SubnetId'

My output:

[
    "subnet-0977bbaf236bd952f",
    "subnet-0df8523ca39f63938",
    "subnet-0a4a22d25f36c4124"
]

Create a mount target in each subnet:

# File system ID
EFS_ID="fs-22ac06e8"

# Create mount targets for the subnets - Three subnets in my case
for subnet in subnet-0977bbaf236bd952f subnet-0df8523ca39f63938 subnet-0a4a22d25f36c4124; do
  aws efs create-mount-target \
    --file-system-id $EFS_ID \
    --security-groups $SG_ID \
    --subnet-id $subnet \
    --region eu-west-1
done
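
Mount target creation is asynchronous. To confirm that a mount target exists in every subnet and has become available, describe them:

# List each mount target's subnet and lifecycle state
aws efs describe-mount-targets --file-system-id $EFS_ID \
  --query "MountTargets[*].{Subnet:SubnetId,State:LifeCycleState}"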

Step 3: Use the EFS CSI driver

After creating the EFS file system and mount targets, we can test the EFS CSI driver by creating a static persistent volume and claiming it from a test container.

Deploy the EFS CSI driver:

$ kubectl apply -k "github.com/kubernetes-sigs/aws-efs-csi-driver/deploy/kubernetes/overlays/dev/?ref=master"

daemonset.apps/efs-csi-node created
csidriver.storage.k8s.io/efs.csi.aws.com created
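
The deployment creates a node DaemonSet. You can confirm its Pods are running on your workers before continuing; since the namespace depends on the overlay used, searching across all namespaces is the safest check:

# Confirm the efs-csi-node DaemonSet pods are running on each worker
kubectl get pods --all-namespaces -o wide | grep efs-csi-node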

List the available CSI drivers:

$ kubectl get csidrivers.storage.k8s.io
NAME              CREATED AT
efs.csi.aws.com   2020-08-16T19:10:35Z

First obtain the EFS file system ID:

$ aws efs describe-file-systems --query "FileSystems[*].FileSystemId"
[
    "fs-22ac06e8"
]

Create a storage class:

kubectl apply -f - <<EOF
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
EOF

List the available storage classes:

$ kubectl get sc
NAME            PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
efs-sc          efs.csi.aws.com         Delete          Immediate              false                  28s
gp2 (default)   kubernetes.io/aws-ebs   Delete          WaitForFirstConsumer   false                  4d21h
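
Optionally, if you want new claims to land on EFS without specifying storageClassName, you can move the default-class annotation from gp2 to efs-sc. This is a sketch and is not required for the rest of this guide:

# Unset gp2 as the default, then mark efs-sc as the default storage class
kubectl patch storageclass gp2 -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'
kubectl patch storageclass efs-sc -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'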

Create the following manifest file, replacing the file system ID with your own:

$ vim efs-pv.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-22ac06e8 # Your EFS file system ID

Apply the file to create the resource:

$ kubectl apply -f efs-pv.yml
persistentvolume/efs-pv created

$ kubectl get pv
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
efs-pv   1Gi        RWO            Retain           Available           efs-sc                  19s
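
If you want to use the EFS access points mentioned in Step 1 to isolate workloads on the same file system, newer releases of the EFS CSI driver accept an access point ID in the volume handle. This is a hedged variant of the PV above; the fsap-... ID is a placeholder you would first create with aws efs create-access-point:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv-ap
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    # Format: <FileSystemId>::<AccessPointId>; the fsap ID below is a placeholder
    volumeHandle: fs-22ac06e8::fsap-0123456789abcdef0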

Create a claim resource:

kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: efs-sc
  resources:
    requests:
      storage: 1Gi
EOF

List the claim to confirm that it was created and its status is Bound:

$ kubectl get pvc
NAME        STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
efs-claim   Bound    efs-pv   1Gi        RWO            efs-sc         7s

Create a test Pod that uses the volume claim:

kubectl apply -f - <<"EOF"
apiVersion: v1
kind: Pod
metadata:
  name: efs-app
spec:
  containers:
  - name: app   # container name not preserved in the original; "app" assumed
    image: centos
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo $(date -u) >> /data/out.txt; sleep 5; done"]
    volumeMounts:
    - name: persistent-storage
      mountPath: /data
  volumes:
  - name: persistent-storage
    persistentVolumeClaim:
      claimName: efs-claim
EOF

Verify that the Pod is running:

$ kubectl get pod
NAME      READY   STATUS    RESTARTS   AGE
efs-app   1/1     Running   0          34s

Check the mount point inside the container:

$ kubectl exec -ti efs-app -- bash
[root@efs-app /]# df -hT
Filesystem     Type     Size  Used Avail Use% Mounted on
overlay        overlay   80G  3.4G   77G   5% /
tmpfs          tmpfs     64M     0   64M   0% /dev
tmpfs          tmpfs    1.9G     0  1.9G   0% /sys/fs/cgroup
127.0.0.1:/    nfs4     8.0E     0  8.0E   0% /data
/dev/nvme0n1p1 xfs       80G  3.4G   77G   5% /etc/hosts
shm            tmpfs     64M     0   64M   0% /dev/shm
tmpfs          tmpfs    1.9G   12K  1.9G   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs          tmpfs    1.9G     0  1.9G   0% /proc/acpi
tmpfs          tmpfs    1.9G     0  1.9G   0% /sys/firmware

Write some test files to /data, the location where the EFS file system is mounted:

[root@efs-app /]# touch /data/testfile1
[root@efs-app /]# touch /data/testfile2
[root@efs-app /]#
[root@efs-app /]# ls /data/
out.txt  testfile1  testfile2
[root@efs-app /]# exit
exit

Clean up the test resources:

kubectl delete pod efs-app
kubectl delete pvc efs-claim
kubectl delete pv efs-pv
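
If the EFS file system itself is no longer needed, you can also remove the AWS resources created in Step 1. Mount targets must be deleted (and finish deleting) before the file system can be removed; a sketch:

# Delete every mount target attached to the file system
for mt in $(aws efs describe-mount-targets --file-system-id $EFS_ID \
    --query "MountTargets[*].MountTargetId" --output text); do
  aws efs delete-mount-target --mount-target-id $mt
done

# Once the mount targets are gone, delete the file system and security group
aws efs delete-file-system --file-system-id $EFS_ID
aws ec2 delete-security-group --group-id $SG_ID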

Related articles:

Ceph persistent storage for Kubernetes using Cephfs

Persistent storage for Kubernetes with Ceph RBD

Configure Kubernetes dynamic volume provisioning with Heketi and GlusterFS
