Configure RAID 0 | RAID 1 | RAID 10 on CentOS 8

RAID stands for Redundant Array of Independent Disks. It was developed to allow multiple disks (HDD, SSD or NVMe) to be combined into a single array for redundancy. The array appears to the computer as a single logical storage unit or drive. Redundancy cannot be achieved with one huge disk drive alone, because recovering data from a single failed disk in the event of a disaster is nearly impossible. What makes RAID remarkable is that even though the array consists of multiple disks, the computer "sees" it as a single drive or logical storage unit.

Definition of terms

  • Hot spare: a disk that is not used in the RAID array but stands by in case of a disk failure. When a disk fails, the data from the failed disk is automatically rebuilt onto the spare.
  • Mirror: as you can guess, mirroring simply copies the same data onto another disk. This is what makes redundant copies of your data possible.
  • Striping: a function that writes data in chunks across all available disks, so that all disks share the data and fill evenly.
  • Parity: a technique for regenerating lost data from stored parity information.

Using technologies such as disk striping (RAID level 0), disk mirroring (RAID level 1) and disk striping with parity (RAID levels 4, 5 and 6), RAID achieves redundancy, lower latency and increased bandwidth, and maximizes the ability to recover from a hard drive crash.

The main reasons you should consider deploying RAID in your project include:

  • Achieve better speed
  • Use a single virtual disk to increase storage capacity
  • Minimize data loss caused by disk failure. Depending on the RAID type, you will achieve redundancy, which can save you in the event of a disk failure in the future.

There are three forms of RAID technology: firmware RAID, hardware RAID and software RAID. Hardware RAID manages its arrays independently of the host and presents the host with a single disk per RAID array. It uses a hardware RAID controller card that handles the RAID tasks transparently to the operating system. Software RAID, on the other hand, implements the various RAID levels in the kernel block device code and offers the cheapest solution, since it requires no expensive disk controller cards or hot-swap chassis. With the fast CPUs of the modern era, software RAID also generally outperforms hardware RAID.

Basic features of software RAID (source: access.redhat.com):

  • Portability of arrays between Linux machines without reconstruction
  • Use idle system resources for background array reconstruction
  • Hot-plug drive support
  • Automatic CPU detection to take advantage of certain CPU features, such as streaming SIMD support
  • Automatically correct bad sectors on the disk in the array
  • Regularly check the consistency of RAID data to ensure the health of the array
  • Actively monitor the array and send email alerts to a designated address when important events occur (see the sketch after this list)
  • Write-intent bitmaps, which greatly speed up resynchronization events by letting the kernel know exactly which portions of a disk need to be resynchronized instead of resynchronizing the entire array
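
As a quick illustration of the monitoring feature, here is a minimal sketch of enabling mail alerts with mdadm's monitor mode. The recipient address is a placeholder, and on CentOS the mdmonitor service normally runs the monitor for you:

# Set a recipient for alert mails in mdadm's configuration file
echo "MAILADDR admin@example.com" | sudo tee -a /etc/mdadm.conf
# Run the monitor in the background, polling all arrays every 30 minutes
sudo mdadm --monitor --scan --daemonise --delay=1800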

Set up RAID on CentOS 8

With that brief introduction, let us get to the heart of the matter and set up the various RAID levels on CentOS 8. Before proceeding, we need the mdadm tool, which is used to configure the various RAID levels.

sudo dnf -y update
sudo dnf -y install mdadm
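
You can confirm that the tool is available before continuing:

mdadm --version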

Configure RAID level 0 on CentOS 8

As mentioned earlier, RAID 0 provides striping without parity and requires at least two hard drives. It is well regarded for speed compared with the other levels because it stores no parity data and spreads reads and writes across all of the disks in parallel.

Let’s view the disk on the server:

lsblk

NAME                MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                   8:0    0  128G  0 disk
├─sda1                8:1    0    1G  0 part /boot
└─sda2                8:2    0  127G  0 part
  ├─cl_centos8-root 253:0    0   50G  0 lvm  /
  ├─cl_centos8-swap 253:1    0    2G  0 lvm  [SWAP]
  └─cl_centos8-home 253:2    0   75G  0 lvm  /home
sdb                   8:16   0    1G  0 disk
sdc                   8:32   0    1G  0 disk
sdd                   8:48   0    1G  0 disk

As shown above, the server has three additional raw disks (sdb, sdc and sdd). We will first erase the disks, then partition them, and then create RAID on top of them.

for i in sdb sdc sdd; do
  sudo wipefs -a /dev/$i
  sudo mdadm --zero-superblock /dev/$i
done

Create a partition on the disk and set the RAID flag.

for i in sdb sdc sdd; do
  sudo parted --script /dev/$i "mklabel gpt"
  sudo parted --script /dev/$i "mkpart primary 0% 100%"
  sudo parted --script /dev/$i "set 1 raid on"
done

You should see that new partitions (sdb1, sdc1, sdd1) have been created:

lsblk

NAME                MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                   8:0    0  128G  0 disk
├─sda1                8:1    0    1G  0 part /boot
└─sda2                8:2    0  127G  0 part
  ├─cl_centos8-root 253:0    0   50G  0 lvm  /
  ├─cl_centos8-swap 253:1    0    2G  0 lvm  [SWAP]
  └─cl_centos8-home 253:2    0   75G  0 lvm  /home
sdb                   8:16   0    1G  0 disk
└─sdb1                8:17   0 1022M  0 part
sdc                   8:32   0    1G  0 disk
└─sdc1                8:33   0 1022M  0 part
sdd                   8:48   0    1G  0 disk
└─sdd1                8:49   0 1022M  0 part

After the partitions are ready, continue to create the RAID 0 device. The level "stripe" is equivalent to RAID 0, since RAID 0 provides only data striping.

sudo mdadm --create /dev/md0 --level=stripe --raid-devices=3 /dev/sd[b-d]1

Use any of the following commands to find out the status of your RAID device:

cat /proc/mdstat

Personalities : [raid0]
md0 : active raid0 sdd1[2] sdc1[1] sdb1[0]
      3133440 blocks super 1.2 512k chunks

unused devices: <none>

Or:

sudo mdadm --detail /dev/md0

/dev/md0:
           Version : 1.2
     Creation Time : Wed Aug 26 21:20:57 2020
        Raid Level : raid0
        Array Size : 3133440 (2.99 GiB 3.21 GB)
      Raid Devices : 3
     Total Devices : 3
       Persistence : Superblock is persistent

       Update Time : Wed Aug 26 21:20:57 2020
             State : clean
    Active Devices : 3
   Working Devices : 3
    Failed Devices : 0
     Spare Devices : 0

            Layout : -unknown-
        Chunk Size : 512K

Consistency Policy : none

              Name : centos8.localdomain:0  (local to host centos8.localdomain)
              UUID : 2824d400:1967473c:dfa0938f:fbb383ae
            Events : 0

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       2       8       49        2      active sync   /dev/sdd1

If everything looks good, create a file system of your choice on the new RAID device.

sudo mkfs.ext4 /dev/md0

Next, we need to mount the new device so that it can start holding files and directories. Create a new mount point:

sudo mkdir /mnt/raid0

Configure mount in /etc/fstab:

echo "/dev/md0 /mnt/raid0 ext4 defaults 0 0" | sudo tee -a /etc/fstab

If you are not sure of the file system type, issue the following command and change ext4 to the TYPE it displays.

sudo blkid /dev/md0
/dev/md0: UUID="e6fe86e5-d241-4208-ab94-3ca79e59c8b6" TYPE="ext4"
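
To make sure the array is reassembled under the same name after a reboot, it is also good practice to record its definition in /etc/mdadm.conf; a minimal sketch:

sudo mdadm --detail --scan | sudo tee -a /etc/mdadm.conf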

Confirm that it mounts correctly:

$ sudo mount -a
$ df -hT

Filesystem                  Type      Size  Used Avail Use% Mounted on
devtmpfs                    devtmpfs  865M     0  865M   0% /dev
tmpfs                       tmpfs     882M     0  882M   0% /dev/shm
tmpfs                       tmpfs     882M   17M  865M   2% /run
tmpfs                       tmpfs     882M     0  882M   0% /sys/fs/cgroup
/dev/mapper/cl_centos8-root xfs        50G  2.1G   48G   5% /
/dev/sda1                   ext4      976M  139M  770M  16% /boot
/dev/mapper/cl_centos8-home xfs        75G  568M   75G   1% /home
tmpfs                       tmpfs     177M     0  177M   0% /run/user/1000
/dev/md0                    ext4      2.9G  9.0M  2.8G   1% /mnt/raid0    ##Our New Device.
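
If you want a rough feel for the striping speed, a simple sequential write test with dd is shown below. This is only a sketch: the test file name is arbitrary and the numbers will vary with your hardware.

# Write 1 GiB directly to the array, bypassing the page cache
sudo dd if=/dev/zero of=/mnt/raid0/testfile bs=1M count=1024 oflag=direct status=progress
# Clean up the test file afterwards
sudo rm /mnt/raid0/testfile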

Configure RAID level 1 on CentOS 8

RAID 1 provides disk mirroring without striping or parity. It simply writes all data to two disks, so if one disk fails or is removed, all of the data is still available on the other disk. Because everything is written twice, RAID 1 requires disks in pairs: to get the usable capacity of two disks, you must install four.

Before we start, let us wipe the disks again so that the RAID configuration begins with clean disks.

for i in sdb sdc sdd; do
  sudo wipefs -a /dev/$i
  sudo mdadm --zero-superblock /dev/$i
done

Create a partition on the disk and set the RAID flag.

for i in sdb sdc sdd; do
  sudo parted --script /dev/$i "mklabel gpt"
  sudo parted --script /dev/$i "mkpart primary 0% 100%"
  sudo parted --script /dev/$i "set 1 raid on"
done

Create a RAID 1 device:

sudo mdadm --create /dev/md1 --level=raid1 --raid-devices=2 /dev/sd[b-c]1 --spare-devices=1 /dev/sdd1

Check the status of the new array:

 sudo mdadm --detail /dev/md1
/dev/md1:
           Version : 1.2
     Creation Time : Wed Aug 26 21:32:52 2020
        Raid Level : raid1
        Array Size : 1045504 (1021.00 MiB 1070.60 MB)
     Used Dev Size : 1045504 (1021.00 MiB 1070.60 MB)
      Raid Devices : 2
     Total Devices : 3
       Persistence : Superblock is persistent

       Update Time : Wed Aug 26 21:33:02 2020
             State : clean
    Active Devices : 2
   Working Devices : 3
    Failed Devices : 0
     Spare Devices : 1

Consistency Policy : resync

              Name : centos8.localdomain:1  (local to host centos8.localdomain)
              UUID : 9ca1da1d:a0c0a26b:6dd27959:a84dec0e
            Events : 17

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1

       2       8       49        -      spare   /dev/sdd1
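
With the spare in place, you can watch it take over by deliberately marking one mirror member as failed. This is a sketch intended for a test array only; do not run it on an array holding real data.

sudo mdadm --manage /dev/md1 --fail /dev/sdb1     # mark sdb1 as faulty
cat /proc/mdstat                                  # sdd1 takes over and the mirror rebuilds
sudo mdadm --manage /dev/md1 --remove /dev/sdb1   # remove the failed member from the array
sudo mdadm --manage /dev/md1 --add /dev/sdb1      # re-add it; it becomes the new spare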

After the RAID device is ready, it cannot be used until it has a file system. To solve this, create the file system you need. An example is shown below, where xfs is set up.

sudo mkfs.xfs /dev/md1

After that, create the mount point where the device will be mounted:

sudo mkdir /mnt/raid1

Again, configure the mount in /etc/fstab:

echo "/dev/md1 /mnt/raid1 xfs defaults 0 0" | sudo tee -a /etc/fstab

Confirm that it mounts correctly:

$ sudo mount -a
$ df -hT

Filesystem                  Type      Size  Used Avail Use% Mounted on
devtmpfs                    devtmpfs  865M     0  865M   0% /dev
tmpfs                       tmpfs     882M     0  882M   0% /dev/shm
tmpfs                       tmpfs     882M   17M  865M   2% /run
tmpfs                       tmpfs     882M     0  882M   0% /sys/fs/cgroup
/dev/mapper/cl_centos8-root xfs        50G  2.1G   48G   5% /
/dev/sda1                   ext4      976M  139M  770M  16% /boot
/dev/mapper/cl_centos8-home xfs        75G  568M   75G   1% /home
tmpfs                       tmpfs     177M     0  177M   0% /run/user/1000
/dev/md1                    xfs      1016M   40M  977M   4% /mnt/raid1

Configure RAID level 10 on CentOS 8

RAID 10 combines disk mirroring (writing to two disks at a time) and disk striping to protect data. It requires at least four disks, striping data across mirrored pairs. With this configuration, data can be retrieved as long as one disk in each mirrored pair is working. Note that this example assumes a fourth raw disk, sde, has been added to the server.

As with the previous RAID levels, erase the raw disks first.

for i in sdb sdc sdd sde; do
  sudo wipefs -a /dev/$i
  sudo mdadm --zero-superblock /dev/$i
done

Create a partition on the disk and set the RAID flag.

for i in sdb sdc sdd sde; do
  sudo parted --script /dev/$i "mklabel gpt"
  sudo parted --script /dev/$i "mkpart primary 0% 100%"
  sudo parted --script /dev/$i "set 1 raid on"
done

Then continue to create a RAID 10 device and check its status:

sudo mdadm --create /dev/md10 --level=10 --raid-devices=4 /dev/sd[b-e]1
sudo mdadm --query --detail /dev/md10
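
RAID 10 resynchronizes its mirrors right after creation, so give it a moment and watch the progress until the array reports a clean state (a quick sketch):

watch -n2 cat /proc/mdstat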

After setting up the RAID device, create the file system required for your specific needs. An example is shown below where xfs is being set up.

sudo mkfs.xfs /dev/md10

After that, create the mount point where the device will be mounted:

sudo mkdir /mnt/raid10

Configure mount in /etc/fstab:

echo "/dev/md10 /mnt/raid10 xfs defaults 0 0" | sudo tee -a /etc/fstab

Confirm that it mounts correctly:

$ sudo mount -a
$ df -hT

Stop and remove the RAID array

If you want to remove a RAID device from the system, simply unmount its mount point, then stop and remove it with the following commands. Remember to replace /mnt/raid0 with your mount point and /dev/md0 with your RAID device.

sudo umount /mnt/raid0
sudo mdadm --stop /dev/md0
sudo mdadm --remove /dev/md0
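
To keep the array from being reassembled or mounted at the next boot, also remove its /etc/fstab entry (and any line you added to /etc/mdadm.conf), then wipe the RAID metadata from the member partitions. A sketch, assuming the RAID 0 members used above:

sudo sed -i '\|/dev/md0|d' /etc/fstab         # drop the mount entry
sudo mdadm --zero-superblock /dev/sd[b-d]1    # erase RAID metadata from the members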

Concluding remarks

RAID is excellent for its versatility and ease of setup. As you have seen, you only need to execute a few commands to configure RAID and end up with a healthy array. Depending on your business needs, you can pick the RAID level that gives you the redundancy and performance you require, and still keep proper backups in case of disaster.
