Back up etcd data on OpenShift 4.x to an AWS S3 bucket


Do you have a running OpenShift cluster powering your production microservices and worry about etcd data backups? In this guide, we show you how to easily back up etcd and push the backup data to AWS S3 object storage. etcd is the key-value store of the OpenShift Container Platform and persists the state of all resource objects.

As part of OpenShift cluster management, it is recommended practice to regularly back up the cluster's etcd data and store it in a safe location, ideally outside the OpenShift Container Platform environment. This can be an NFS share, a secondary server in your infrastructure, or a cloud environment.

It is also advisable to perform etcd backups during off-peak usage hours, as the operation is inherently blocking. Make sure to take an etcd backup after any OpenShift cluster upgrade. This is important because during a cluster restore, you must use an etcd backup taken from the same z-stream release; for example, an OpenShift Container Platform 4.6.3 cluster must use an etcd backup taken from 4.6.3.
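The z-stream rule above can be sketched as a small version check. This is an illustrative snippet, not part of the official tooling; `check_zstream` is a hypothetical helper and the version strings are example values.

```shell
# Hypothetical helper: compare the running cluster's z-stream version with
# the version a backup was taken from, and flag a mismatch before a restore.
check_zstream() {
  # $1 = cluster version, $2 = backup version (both x.y.z strings)
  if [ "$1" = "$2" ]; then
    echo "backup matches cluster z-stream ($1)"
  else
    echo "MISMATCH: cluster is $1 but backup is from $2" >&2
    return 1
  fi
}

# Example values; on a live cluster you would obtain the current version with:
#   oc get clusterversion version -o jsonpath='{.status.desired.version}'
check_zstream "4.6.3" "4.6.3"
# prints: backup matches cluster z-stream (4.6.3)
```

Recording the cluster version alongside each backup (for example in the file name) makes this check trivial at restore time.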

Step 1: Log in to a master node in the cluster

The etcd cluster backup only needs to be taken on a single master host; do not run the backup on every master host.

Log in to a master node via SSH or a debugging session:

# SSH Access
$ ssh core@<master_node_ip_or_dns_name>

# Debug session
$ oc debug node/<node_name>

For the debugging session, you need to change the root directory to the host:

sh-4.6# chroot /host

If a cluster-wide proxy is enabled, be sure to export the NO_PROXY, HTTP_PROXY, and HTTPS_PROXY environment variables.
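As a sketch, the exports look like the following; the endpoint values are placeholders, and the actual values configured for your cluster are shown by `oc get proxy cluster -o yaml`:

```shell
# Placeholder values -- substitute the proxy endpoints configured for your cluster
export HTTP_PROXY=http://<proxy_host>:<port>
export HTTPS_PROXY=http://<proxy_host>:<port>
export NO_PROXY=<comma_separated_hosts_to_exclude>
```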

Step 2: Perform etcd backup on OpenShift 4.x

Access to the cluster as a user with the cluster-admin role is required to perform this operation.

Before proceeding, check whether a cluster-wide proxy is enabled:

$ oc get proxy cluster -o yaml

If the proxy is enabled, the httpProxy, httpsProxy, and noProxy fields will have values set.

Run the backup script, passing the directory path where the backup should be saved:

$ mkdir /home/core/etcd_backups
$ sudo /usr/local/bin/ /home/core/etcd_backups

This is the output of my command execution:

etcdctl version: 3.3.18
API version: 3.3
found latest kube-apiserver-pod: /etc/kubernetes/static-pod-resources/kube-apiserver-pod-115
found latest kube-controller-manager-pod: /etc/kubernetes/static-pod-resources/kube-controller-manager-pod-24
found latest kube-scheduler-pod: /etc/kubernetes/static-pod-resources/kube-scheduler-pod-26
found latest etcd-pod: /etc/kubernetes/static-pod-resources/etcd-pod-11
Snapshot saved at /home/core/etcd_backups/snapshot_2021-03-16_134036.db
snapshot db and kube resources are successfully saved to /home/core/etcd_backups

List the files in the backup directory:

$ ls -1 /home/core/etcd_backups/

$ du -sh /home/core/etcd_backups/*
1.5G	/home/core/etcd_backups/snapshot_2021-03-16_134036.db
76K	/home/core/etcd_backups/static_kuberesources_2021-03-16_134036.tar.gz

There will be two files in the backup:

  • snapshot_<datetimestamp>.db: This file is an etcd snapshot.
  • static_kuberesources_<datetimestamp>.tar.gz: This file contains static Pod resources. If etcd encryption is enabled, it will also contain the encryption key for etcd snapshots.
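The two files share a timestamp, so a quick sanity check is to confirm the matching pair exists before shipping it off the host. `verify_backup_pair` below is a hypothetical helper, not part of the OpenShift tooling:

```shell
# Hypothetical helper: confirm a backup directory contains the matching
# snapshot_<ts>.db and static_kuberesources_<ts>.tar.gz pair.
verify_backup_pair() {
  local dir="$1" snap ts res
  snap=$(ls "$dir"/snapshot_*.db 2>/dev/null | head -n 1)
  [ -n "$snap" ] || { echo "no snapshot found in $dir"; return 1; }
  # Extract the timestamp portion, e.g. 2021-03-16_134036
  ts=${snap##*/snapshot_}
  ts=${ts%.db}
  res="$dir/static_kuberesources_${ts}.tar.gz"
  [ -f "$res" ] || { echo "missing $res"; return 1; }
  echo "backup pair OK: $ts"
}
```

For example, `verify_backup_pair /home/core/etcd_backups` would report the timestamp of a complete pair, or fail if either file is missing.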

Step 3: Push the backup to AWS S3 (from Bastion Server)

Log in to the bastion server and copy the backup files from the master node:

scp -r core@<master_node_ip_or_dns_name>:/home/core/etcd_backups ~/

Download the AWS CLI tool:

curl "" -o ""

Install the decompression tool:

sudo yum -y install unzip

Unzip the downloaded file:


Install AWS CLI:

$ sudo ./aws/install
You can now run: /usr/local/bin/aws --version

Confirm the installation by checking the version:

$ aws --version
aws-cli/2.1.30 Python/3.8.8 Linux/3.10.0-957.el7.x86_64 exe/x86_64.rhel.7 prompt/off

Create an S3 bucket for the OpenShift backups:

$ aws s3 mb s3://openshiftbackups
make_bucket: openshiftbackups
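Optionally, S3 itself can prune old backups instead of a cleanup script. The sketch below applies a hypothetical 30-day expiration rule to the etcd/ prefix used later in this guide; the rule ID and retention period are illustrative values, not from the original setup:

```shell
# Expire objects under the etcd/ prefix after 30 days (illustrative values)
aws s3api put-bucket-lifecycle-configuration \
  --bucket openshiftbackups \
  --lifecycle-configuration '{
    "Rules": [
      {
        "ID": "expire-old-etcd-backups",
        "Filter": {"Prefix": "etcd/"},
        "Status": "Enabled",
        "Expiration": {"Days": 30}
      }
    ]
  }'
```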

Create an IAM user:

$ aws iam create-user --user-name backupsonly

Create an AWS policy for the backup user; this user should only be able to upload objects and list the bucket contents:

cat >aws-s3-uploads-policy.json<<EOF
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:ListBucket"
            ],
            "Resource": "*"
        }
    ]
}
EOF
Create the policy from the JSON document:

aws iam create-policy --policy-name upload-only-policy --policy-document file://aws-s3-uploads-policy.json

Attach the policy to the IAM user:

aws iam attach-user-policy --policy-arn arn:aws:iam::<accountid>:policy/upload-only-policy --user-name backupsonly

You can now create an access key for the IAM user:

$ aws iam create-access-key --user-name backupsonly
    "AccessKey": {
        "UserName": "backupsonly",
        "AccessKeyId": "AKIATWFKCYAHF74SCFEP",
        "Status": "Active",
        "SecretAccessKey": "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",
        "CreateDate": "2021-03-16T12:14:39+00:00"

Note down the access key ID and secret access key, then use them in the configuration:

$ aws configure # On OCP Bastion server


  • AWS access key ID
  • AWS secret access key
  • Default region name
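As an alternative to the interactive prompt, the AWS CLI also reads credentials from environment variables, which is convenient for non-interactive backup scripts. The values below are placeholders:

```shell
# Placeholder credentials -- use the key pair created for the backupsonly user
export AWS_ACCESS_KEY_ID=<access_key_id>
export AWS_SECRET_ACCESS_KEY=<secret_access_key>
export AWS_DEFAULT_REGION=<region>
```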

Try uploading the backup files to the S3 bucket:

$ aws s3 cp etcd_backups/ s3://openshiftbackups/etcd --recursive
upload: etcd_backups/static_kuberesources_2021-03-16_134036.tar.gz to s3://openshiftbackups/etcd/static_kuberesources_2021-03-16_134036.tar.gz
upload: etcd_backups/snapshot_2021-03-16_134036.db to s3://openshiftbackups/etcd/snapshot_2021-03-16_134036.db


Verify the upload:

$ aws s3 ls s3://openshiftbackups/etcd/
2021-03-16 16:00:59 1549340704 snapshot_2021-03-16_134036.db
2021-03-16 16:00:59      77300 static_kuberesources_2021-03-16_134036.tar.gz

Step 4: Automatic backup to AWS S3 (from Bastion Server)

We can execute a script that will do the following:

  1. Log in to the master node from the bastion
  2. Start etcd backup
  3. Copy the backup data from the master node to the bastion server
  4. Delete the backup data on the master node
  5. Copy the backup data to the S3 bucket
  6. Delete local data after successfully uploading to S3

Create a script file on the bastion server:

$ vim

This is a script that can be further modified for more advanced use cases.


#!/bin/bash
# Variables -- values follow the earlier steps; adjust for your environment
USERNAME="core"
MASTER_NAME="<master_node_ip_or_dns_name>"
BACKUPS_DIR="${HOME}/etcd_backups"
S3_BUCKET="openshiftbackups/etcd"

# Create the local backups directory if it doesn't exist
[ -d "${BACKUPS_DIR}" ] && echo "Directory exists" || mkdir -p "${BACKUPS_DIR}"

# Log in and run the backup
ssh ${USERNAME}@${MASTER_NAME} 'mkdir /home/core/etcd_backups' 2>/dev/null
ssh ${USERNAME}@${MASTER_NAME} 'sudo /usr/local/bin/ /home/core/etcd_backups'
scp -r ${USERNAME}@${MASTER_NAME}:/home/core/etcd_backups/* "${BACKUPS_DIR}/"
RESULT=$?

# Clean the etcd backups directory on the master node
if [ $RESULT -eq 0 ]; then
    ssh ${USERNAME}@${MASTER_NAME} 'rm -rf /home/core/etcd_backups/*'
fi

# Copy the backup to AWS S3
aws s3 cp "${BACKUPS_DIR}/" s3://${S3_BUCKET} --recursive
# List bucket contents
aws s3 ls s3://${S3_BUCKET}/

# Clean local backups older than 1 day
find "${BACKUPS_DIR}/" -type f -mtime +1 -delete

Create a cron job that runs the script daily at 3 AM:

$ crontab -e
0 3 * * * /path/to/
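For easier troubleshooting, you can redirect the script's output to a log file in the crontab entry. The script name and log path below are illustrative, not from the original setup:

```shell
# Run nightly at 03:00 and append all output to a log (paths are illustrative)
0 3 * * * /home/user/etcd_backup.sh >> /var/log/etcd_backup.log 2>&1
```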

Conclusion

In this article, we looked at how to back up OpenShift etcd data and push it to an S3 bucket. In our next guide, we will discuss how to restore a cluster from a backup.

More guides about OpenShift clusters:

Install Red Hat Advanced Cluster Management on OpenShift 4.x

How to change pids_limit value in OpenShift 4.x

How to deploy Ubuntu Pod in Kubernetes | OpenShift
