Set up Ceph 15 (Octopus) storage cluster on CentOS 8

This tutorial will show you how to install and configure a Ceph Storage Cluster on CentOS 8 Linux servers. Ceph is an open source, massively scalable storage solution that implements a distributed object store and provides interfaces for object-, block-, and file-level storage. The Ceph 15 (Octopus) storage cluster in this guide is deployed on CentOS 8 using Ansible as the automation method.

Ceph cluster components

The basic components of a Ceph storage cluster are listed below; example commands for checking each component follow the list.

  • Ceph Monitor: A Ceph Monitor (ceph-mon) maintains maps of the cluster state, including the monitor map, manager map, OSD map, and CRUSH map.
  • Ceph OSD: A Ceph OSD (object storage daemon, ceph-osd) stores data, handles data replication, recovery, and rebalancing, and provides monitoring information to Ceph Monitors and Managers by checking the heartbeats of other Ceph OSD daemons. At least 3 Ceph OSDs are normally required for redundancy and high availability.
  • Ceph MDS: A Ceph Metadata Server (MDS, ceph-mds) stores metadata on behalf of the Ceph File System (Ceph block devices and Ceph object storage do not use MDS). Ceph Metadata Servers allow POSIX file system users to execute basic commands (for example, ls, find, etc.) without placing a heavy load on the Ceph storage cluster.
  • Ceph Manager: The Ceph Manager daemon (ceph-mgr) is responsible for tracking runtime metrics and the current state of the Ceph cluster, including storage utilization, current performance metrics, and system load.
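
Once the cluster is deployed (Step 3 below), each of these components can be checked from a monitor node with the standard ceph CLI, for example:

ceph mon stat        # monitor quorum status
ceph osd tree        # OSD daemons and their placement in the CRUSH map
ceph fs status       # CephFS / MDS status
ceph mgr stat        # active manager daemon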

Our Ceph Storage Cluster installation on CentOS 8 is based on the following system design.

Server name     Ceph component           Server specifications
cephadmin       ceph-ansible (admin)     2GB RAM, 1 vCPU
cephmon01       Ceph Monitor             8GB RAM, 4 vCPUs
cephmon02       Ceph MON, MGR, MDS       8GB RAM, 4 vCPUs
cephmon03       Ceph MON, MGR, MDS       8GB RAM, 4 vCPUs
cephosd01       Ceph OSD                 16GB RAM, 8 vCPUs
cephosd02       Ceph OSD                 16GB RAM, 8 vCPUs
cephosd03       Ceph OSD                 16GB RAM, 8 vCPUs

The cephadmin node will be used to deploy the Ceph storage cluster on CentOS 8.

Step 1: Prepare all nodes-ceph-ansible, OSD, MON, MGR, MDS

We need to prepare all the nodes by following the steps below (a sketch of the hostname and time configuration follows this list).

  • Set the correct host name on each server
  • Set the correct time and configure the NTP service for time synchronization
  • Add the hostnames with IP addresses to the DNS server, or update /etc/hosts on all servers
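
A minimal sketch of these preparation steps on one node, assuming chrony is used for NTP (set the correct hostname on each server):

sudo hostnamectl set-hostname cephmon01    # use the correct name on each server
sudo dnf -y install chrony
sudo systemctl enable --now chronyd
chronyc sources                            # confirm time sources are reachable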

Example /etc/hosts content on each host (the IP addresses below match the ~/.ssh/config in Step 2; adjust them to your environment):

sudo tee -a /etc/hosts<<EOF
192.168.10.10 cephadmin
192.168.10.11 cephmon01
192.168.10.12 cephmon02
192.168.10.13 cephmon03
192.168.10.14 cephosd01
192.168.10.15 cephosd02
192.168.10.16 cephosd03
EOF

After completing the above tasks, install basic packages:

sudo dnf update
sudo dnf install vim bash-completion tmux

Restart each server after the upgrade.

sudo dnf -y update && sudo reboot

Step 2: Prepare the Ceph management node

Log in to the management node:

$ ssh root@cephadmin

Add EPEL repository:

sudo dnf -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm
sudo dnf config-manager --set-enabled PowerTools

Install Git:

sudo yum install git vim bash-completion

Clone the Ceph Ansible repository:

git clone https://github.com/ceph/ceph-ansible.git

Choose the ceph-ansible branch you want to use. The command syntax is:

git checkout $branch

I will switch to the stable-5.0 branch, which supports the Ceph Octopus release.

cd ceph-ansible
git checkout stable-5.0

Install Python pip.

sudo yum install python3-pip

Use pip and the provided requirements.txt to install Ansible and other required Python libraries:

sudo pip3 install -r requirements.txt

Make sure the /usr/local/bin path is added to PATH.

$ echo "PATH=$PATH:/usr/local/bin" >>~/.bashrc
$ source ~/.bashrc

Confirm the installed Ansible version.

$ ansible --version
ansible 2.9.7
  config file = /root/ceph-ansible/ansible.cfg
  configured module search path = ['/root/ceph-ansible/library']
  ansible python module location = /usr/local/lib/python3.6/site-packages/ansible
  executable location = /usr/local/bin/ansible
  python version = 3.6.8 (default, Nov 21 2019, 19:31:34) [GCC 8.3.1 20190507 (Red Hat 8.3.1-4)]

Copy the SSH public key to all nodes

Set up an SSH key pair on your Ceph management node and copy the public key to all storage nodes.

$ ssh-keygen

# Copy the public key to each node, for example:
for host in cephmon01 cephmon02 cephmon03 cephosd01 cephosd02 cephosd03; do
 ssh-copy-id root@$host
done

Create an SSH configuration file for all storage nodes on the admin node.

# This is my ssh config file
$ vi ~/.ssh/config 
Host cephadmin
    Hostname 192.168.10.10
    User root
Host cephmon01
    Hostname 192.168.10.11
    User root
Host cephmon02
    Hostname 192.168.10.12
    User root
Host cephmon03
    Hostname 192.168.10.13
    User root
Host cephosd01
    Hostname 192.168.10.14
    User root
Host cephosd02
    Hostname 192.168.10.15
    User root
Host cephosd03
    Hostname 192.168.10.16
    User root
  • Replace the Hostname values with the IP addresses of your nodes, and User with the remote user the installation will run as.

When not using root for SSH

For installation as a normal (non-root) user, enable passwordless sudo for that remote user on all the storage nodes.

echo -e 'Defaults:username !requiretty\nusername ALL = (root) NOPASSWD:ALL' | sudo tee /etc/sudoers.d/ceph
sudo chmod 440 /etc/sudoers.d/ceph

where username is replaced with the user configured in the ~/.ssh/config file.
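
You can verify the syntax of the sudoers drop-in file before continuing:

sudo visudo -cf /etc/sudoers.d/ceph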

Configure Ansible inventory and Playbook

Create a Ceph cluster group variable file on the management node

cd ceph-ansible
cp group_vars/all.yml.sample  group_vars/all.yml
vim group_vars/all.yml

Edit the file to configure your Ceph cluster:

ceph_release_num: 15
cluster: ceph

# Inventory host group variables
mon_group_name: mons
osd_group_name: osds
rgw_group_name: rgws
mds_group_name: mdss
nfs_group_name: nfss
rbdmirror_group_name: rbdmirrors
client_group_name: clients
iscsi_gw_group_name: iscsigws
mgr_group_name: mgrs
rgwloadbalancer_group_name: rgwloadbalancers
grafana_server_group_name: grafana-server

# Firewalld / NTP
configure_firewall: True
ntp_service_enabled: true
ntp_daemon_type: chronyd

# Ceph packages
ceph_origin: repository
ceph_repository: community
ceph_repository_type: cdn
ceph_stable_release: octopus

# Interface options
monitor_interface: eth0
radosgw_interface: eth0

# DASHBOARD
dashboard_enabled: True
dashboard_protocol: http
dashboard_admin_user: admin
dashboard_admin_password: Str0ngAdminP@ssw0rd     # placeholder, set your own strong password

grafana_admin_user: admin
grafana_admin_password: Str0ngGrafanaP@ssw0rd     # placeholder, set your own strong password

If you use separate public and cluster networks, define them accordingly.

public_network: "192.168.3.0/24"
cluster_network: "192.168.4.0/24"

Configure other parameters as needed.

Set up the OSD device.

I have three OSD nodes, and each node has one raw block device, /dev/sdb:

$ lsblk 
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda       8:0    0 76.3G  0 disk 
├─sda1    8:1    0 76.2G  0 part /
├─sda14   8:14   0    1M  0 part 
└─sda15   8:15   0   64M  0 part /boot/efi
sdb       8:16   0   50G  0 disk 
sr0      11:0    1 1024M  0 rom  

List the OSD raw block devices to be used.

$ cp group_vars/osds.yml.sample group_vars/osds.yml
$ vim group_vars/osds.yml
copy_admin_key: true
devices:
  - /dev/sdb

Create a new Ansible inventory of the Ceph nodes:

vim hosts

Set up the inventory file correctly. The following is my inventory; modify the inventory groups according to how you want services installed on your cluster nodes.

# Ceph admin user for SSH and Sudo
[all:vars]
ansible_ssh_user=root
ansible_become=true
ansible_become_method=sudo
ansible_become_user=root

# Ceph Monitor Nodes
[mons]
cephmon01
cephmon02
cephmon03

# MDS Nodes
[mdss]
cephmon01
cephmon02
cephmon03

# RGW
[rgws]
cephmon01
cephmon02
cephmon03

# Manager Daemon Nodes
[mgrs]
cephmon01
cephmon02
cephmon03

# set OSD (Object Storage Daemon) Node
[osds]
cephosd01
cephosd02
cephosd03

# Grafana server
[grafana-server]
cephosd01
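
Before deploying, you can optionally confirm that Ansible can reach every node in the inventory:

ansible all -i hosts -m ping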

Step 3: Deploy Ceph 15 (Octopus) cluster on CentOS 8

Create a playbook file by copying the sample playbook site.yml.sample in the root directory of the ceph-ansible project.

cp site.yml.sample site.yml 

Run the Playbook.

ansible-playbook -i hosts site.yml 
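
If you later need to re-run the deployment against only part of the cluster (for example, after adding new OSD nodes to the osds group), Ansible's standard --limit option restricts the run to a host group:

ansible-playbook -i hosts site.yml --limit osds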

If the installation is successful, the health check should report HEALTH_OK.

...
TASK [show ceph status for cluster ceph] ***************************************************************************************************************
Sunday 10 May 2020  20:12:33 +0200 (0:00:00.721)       0:09:00.180 ************ 
ok: [cephmon01 -> cephmon01] => 
  msg:
  - '  cluster:'
  - '    id:     b64fac77-df30-4def-8e3c-1935ef9f0ef3'
  - '    health: HEALTH_OK'
  - ' '
  - '  services:'
  - '    mon: 3 daemons, quorum ceph-mon-02,ceph-mon-03,ceph-mon-01 (age 6m)'
  - '    mgr: ceph-mon-03(active, since 38s), standbys: ceph-mon-02, ceph-mon-01'
  - '    mds: cephfs:1 {0=ceph-mon-02=up:active} 2 up:standby'
  - '    osd: 3 osds: 3 up (since 4m), 3 in (since 4m)'
  - '    rgw: 3 daemons active (ceph-mon-01.rgw0, ceph-mon-02.rgw0, ceph-mon-03.rgw0)'
  - ' '
  - '  task status:'
  - '    scrub status:'
  - '        mds.ceph-mon-02: idle'
  - ' '
  - '  data:'
  - '    pools:   7 pools, 132 pgs'
  - '    objects: 215 objects, 9.9 KiB'
  - '    usage:   3.0 GiB used, 147 GiB / 150 GiB avail'
  - '    pgs:     0.758% pgs not active'
  - '             131 active+clean'
  - '             1   peering'
  - ' '
  - '  io:'
  - '    client:   3.5 KiB/s rd, 402 B/s wr, 3 op/s rd, 0 op/s wr'
  - ' '
....

This is a screenshot of my installation output.

Step 4: Verify Ceph cluster installation on CentOS 8

Log in to one of the cluster nodes and perform some verification to confirm the successful installation of Ceph Storage Cluster on CentOS 8.

$ ssh root@cephmon01
# ceph -s
  cluster:
    id:     b64fac77-df30-4def-8e3c-1935ef9f0ef3
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum ceph-mon-02,ceph-mon-03,ceph-mon-01 (age 22m)
    mgr: ceph-mon-03(active, since 16m), standbys: ceph-mon-02, ceph-mon-01
    mds: cephfs:1 {0=ceph-mon-02=up:active} 2 up:standby
    osd: 3 osds: 3 up (since 20m), 3 in (since 20m)
    rgw: 3 daemons active (ceph-mon-01.rgw0, ceph-mon-02.rgw0, ceph-mon-03.rgw0)
 
  task status:
    scrub status:
        mds.ceph-mon-02: idle
 
  data:
    pools:   7 pools, 121 pgs
    objects: 215 objects, 11 KiB
    usage:   3.1 GiB used, 147 GiB / 150 GiB avail
    pgs:     121 active+clean

You can access the Ceph dashboard on the active MGR node.
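
To find the exact dashboard URL, you can query the manager services from any cluster node:

ceph mgr services    # prints the URLs exposed by manager modules such as dashboard and prometheus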


Log in with the credentials configured in the group_vars/all.yml file, i.e. the dashboard_admin_user and dashboard_admin_password values set earlier.

You can then create more users with different levels of access on the cluster.
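
As a sketch, additional dashboard users can also be created from the command line; since Octopus the password is read from a file rather than passed as an argument. The user name operator1, the password, and the read-only role below are placeholder examples:

echo 'An0therStr0ngP@ss' > /tmp/dashpass                             # placeholder password file
ceph dashboard ac-user-create operator1 -i /tmp/dashpass read-only
rm /tmp/dashpass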


The Grafana dashboard can be accessed on the node in the grafana-server group (cephosd01 in this setup). The service listens on port 3000 by default.


Log in to the Grafana console with the credentials configured earlier in group_vars/all.yml (grafana_admin_user and grafana_admin_password).

Day-2 operations

ceph-ansible provides a set of playbooks in its infrastructure-playbooks directory for performing basic day-2 operations.
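
For example (check which playbooks exist in your checked-out branch before running them; most of them prompt for confirmation):

ls infrastructure-playbooks/                                              # list the available playbooks
ansible-playbook -i hosts infrastructure-playbooks/rolling_update.yml     # rolling upgrade of the cluster
ansible-playbook -i hosts infrastructure-playbooks/purge-cluster.yml      # tear down the cluster and wipe its data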

References:

Here are some more useful guides about Ceph:

Create a pool in the Ceph storage cluster

How to configure AWS S3 CLI for Ceph Object Gateway storage

Ceph permanent storage for Kubernetes using Cephfs

Kubernetes persistent storage using Ceph RBD
