Install Ceph 15 (Octopus) storage cluster on Ubuntu 20.04

I have been planning to write an article about installing a Ceph Storage Cluster on an Ubuntu 20.04 Linux server, and this guide is the result. Ceph is a software-defined storage solution designed to build a distributed storage cluster on commodity hardware. The exact requirements for building a Ceph Storage Cluster on Ubuntu 20.04 will largely depend on your use case.

This setup is not intended for mission-critical, write-intensive applications. For such requirements, consult the official project documentation, especially regarding network and storage hardware. The following standard Ceph components will be configured in this installation guide:

  • Ceph MON – Monitor server
  • Ceph MDS – Metadata server
  • Ceph MGR – Ceph Manager daemon
  • Ceph OSD – Object storage daemon

Install Ceph Storage Cluster on Ubuntu 20.04

Before deploying a Ceph Storage Cluster on Ubuntu 20.04, you need to prepare the servers it will run on. Below is a summary of the servers I have prepared for this installation.

My lab has the following server hostnames, IP addresses, and specifications:

Server hostname   Server IP address   Ceph components       Server specifications
ceph-mon-01       172.16.20.10        Ceph MON, MGR, MDS    8 GB RAM, 4 vCPUs
ceph-mon-02       172.16.20.11        Ceph MON, MGR, MDS    8 GB RAM, 4 vCPUs
ceph-mon-03       172.16.20.12        Ceph MON, MGR, MDS    8 GB RAM, 4 vCPUs
ceph-osd-01       172.16.20.13        Ceph OSD              16 GB RAM, 8 vCPUs
ceph-osd-02       172.16.20.14        Ceph OSD              16 GB RAM, 8 vCPUs
ceph-osd-03       172.16.20.15        Ceph OSD              16 GB RAM, 8 vCPUs

Step 1: Prepare the first Monitor node

The tool used for this deployment is Cephadm. Cephadm deploys and manages a Ceph cluster by having the manager daemon connect to the hosts over SSH to add, remove, or update Ceph daemon containers.

Log in to your first Monitor node:

$ ssh root@ceph-mon-01
Warning: Permanently added 'ceph-mon-01,172.16.20.10' (ECDSA) to the list of known hosts.
Enter passphrase for key '/var/home/jkmutai/.ssh/id_rsa': 
Welcome to Ubuntu 20.04 LTS (GNU/Linux 5.4.0-33-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

Last login: Tue Jun  2 20:36:36 2020 from 172.16.20.10
root@ceph-mon-01:~# 

Update the /etc/hosts file with entries for all the IP addresses and hostnames:

# vim /etc/hosts

127.0.0.1 localhost

# Ceph nodes
172.16.20.10  ceph-mon-01
172.16.20.11  ceph-mon-02
172.16.20.12  ceph-mon-03
172.16.20.13  ceph-osd-01
172.16.20.14  ceph-osd-02
172.16.20.15  ceph-osd-03
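
To quickly confirm that every hostname resolves from this node, you can loop over the entries with getent (an optional sanity check):

for host in ceph-mon-01 ceph-mon-02 ceph-mon-03 ceph-osd-01 ceph-osd-02 ceph-osd-03; do
  getent hosts $host
done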

Update and upgrade the operating system:

sudo apt update && sudo apt -y upgrade
sudo systemctl reboot

Install Ansible and other basic utilities:

sudo apt update
sudo apt -y install software-properties-common git curl vim bash-completion ansible

Confirm that Ansible is installed.

$ ansible --version
ansible 2.9.6
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python3/dist-packages/ansible
  executable location = /usr/bin/ansible
  python version = 3.8.2 (default, Apr 27 2020, 15:53:34) [GCC 9.3.0]

Make sure to add the /usr/local/bin path to PATH.

echo "PATH=$PATH:/usr/local/bin" >>~/.bashrc
source ~/.bashrc

Check your current path:

$ echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/usr/local/bin

Generate SSH key:

$ ssh-keygen -t rsa -b 4096 -N '' -f ~/.ssh/id_rsa
Generating public/private rsa key pair.
Your identification has been saved in /root/.ssh/id_rsa
Your public key has been saved in /root/.ssh/id_rsa.pub
The key fingerprint is:
SHA256:3gGoZCVsA6jbnBuMIpnJilCiblaM9qc5Xk38V7lfJ6U root@ceph-mon-01
The key's randomart image is:
+---[RSA 4096]----+
| ..o. . |
|. +o . |
|. .o.. . |
|o .o .. . . |
|o%o.. oS . o .|
|@+*o o... .. .o |
|O oo . ..... .E o|
|o+.oo. . ..o|
|o .++ . |
+----[SHA256]-----+

Install Cephadm:

curl --silent --remote-name --location https://github.com/ceph/ceph/raw/octopus/src/cephadm/cephadm
chmod +x cephadm
sudo mv cephadm  /usr/local/bin/

Confirm that cephadm can be used locally:

$ cephadm --help
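
cephadm also ships a built-in prerequisite check you can run at any point. It verifies that Docker/Podman, systemd, LVM2, and chrony are in place; since Docker and chrony are only installed by the playbook in Step 2, expect it to pass only after that step:

sudo cephadm check-host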

Step 2: Update all Ceph nodes and push the ssh public key

After configuring the first MON node, create an Ansible playbook to update all nodes, push the SSH public key, and update the /etc/hosts file on all nodes.

cd ~/
vim prepare-ceph-nodes.yml

Set the correct timezone in the following content, then add it to the file.

---
- name: Prepare ceph nodes
  hosts: ceph_nodes
  become: yes
  become_method: sudo
  vars:
    ceph_admin_user: cephadmin
  tasks:
    - name: Set timezone
      timezone:
        name: Africa/Nairobi

    - name: Update system
      apt:
        name: "*"
        state: latest
        update_cache: yes

    - name: Install common packages
      apt:
        name: [vim,git,bash-completion,wget,curl,chrony]
        state: present
        update_cache: yes

    - name: Add ceph admin user
      user:
        name: "{{ ceph_admin_user }}"
        state: present
        shell: /bin/bash

    - name: Create sudo file
      file:
        path: "/etc/sudoers.d/{{ ceph_admin_user }}"
        state: touch
        mode: '0440'

    - name: Give ceph admin user passwordless sudo
      lineinfile:
        path: "/etc/sudoers.d/{{ ceph_admin_user }}"
        line: "{{ ceph_admin_user }} ALL=(ALL) NOPASSWD:ALL"
        state: present

    - name: Set authorized key taken from file to ceph admin
      authorized_key:
        user: "{{ ceph_admin_user }}"
        state: present
        key: "{{ lookup('file', '~/.ssh/id_rsa.pub') }}"

    - name: Set authorized key taken from file to root user
      authorized_key:
        user: root
        state: present
        key: "{{ lookup('file', '~/.ssh/id_rsa.pub') }}"
   
    - name: Install Docker
      shell: |
        curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
        echo "deb [arch=amd64] https://download.docker.com/linux/ubuntu focal stable" > /etc/apt/sources.list.d/docker-ce.list
        apt update
        apt install -qq -y docker-ce docker-ce-cli containerd.io

    - name: Reboot server after update and configs
      reboot:

Create an inventory file.

$ vim hosts
[ceph_nodes]
ceph-mon-01
ceph-mon-02
ceph-mon-03
ceph-osd-01
ceph-osd-02
ceph-osd-03

If your SSH private key has a passphrase, add the key to the ssh-agent so you are not prompted for it repeatedly:

$ eval `ssh-agent -s` && ssh-add ~/.ssh/id_rsa_jmutai 
Agent pid 3275
Enter passphrase for /root/.ssh/id_rsa_jmutai: 
Identity added: /root/.ssh/id_rsa_jkmutai (/root/.ssh/id_rsa_jmutai)

Configure the SSH client so connections to the Ceph nodes do not prompt for host key confirmation (adjust the host pattern to your environment):

tee -a ~/.ssh/config <<EOF
Host ceph-mon-* ceph-osd-*
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null
EOF
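
Before running the playbook, you can optionally verify that Ansible can reach every node in the inventory with the ping module (adjust the user and key options to match how you authenticate):

ansible -i hosts ceph_nodes -m ping --user root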

Run the playbook:

# As root user with  default ssh key:
$ ansible-playbook -i hosts prepare-ceph-nodes.yml --user root

# As root user with password:
$ ansible-playbook -i hosts prepare-ceph-nodes.yml --user root --ask-pass

# As sudo user with password - replace ubuntu with correct username
$ ansible-playbook -i hosts prepare-ceph-nodes.yml --user ubuntu --ask-pass --ask-become-pass

# As sudo user with ssh key and sudo password - replace ubuntu with correct username
$ ansible-playbook -i hosts prepare-ceph-nodes.yml --user ubuntu --ask-become-pass

# As sudo user with ssh key and passwordless sudo - replace ubuntu with correct username
$ ansible-playbook -i hosts prepare-ceph-nodes.yml --user ubuntu

# As sudo or root user with custom key
$ ansible-playbook -i hosts prepare-ceph-nodes.yml --private-key /path/to/private/key 

In my case, I will run:

$ ansible-playbook -i hosts prepare-ceph-nodes.yml --private-key ~/.ssh/id_rsa_jkmutai

Execution output:

 ansible-playbook -i hosts prepare-ceph-nodes.yml --private-key ~/.ssh/id_rsa_jmutai 

PLAY [Prepare ceph nodes] ******************************************************************************************************************************

TASK [Gathering Facts] *********************************************************************************************************************************
ok: [ceph-mon-03]
ok: [ceph-mon-02]
ok: [ceph-mon-01]
ok: [ceph-osd-01]
ok: [ceph-osd-02]
ok: [ceph-osd-03]

TASK [Update system] ***********************************************************************************************************************************
changed: [ceph-mon-01]
changed: [ceph-mon-02]
changed: [ceph-mon-03]
changed: [ceph-osd-02]
changed: [ceph-osd-01]
changed: [ceph-osd-03]

TASK [Install common packages] *************************************************************************************************************************
changed: [ceph-mon-02]
changed: [ceph-mon-01]
changed: [ceph-osd-02]
changed: [ceph-osd-01]
changed: [ceph-mon-03]
changed: [ceph-osd-03]

TASK [Add ceph admin user] *****************************************************************************************************************************
changed: [ceph-osd-02]
changed: [ceph-mon-02]
changed: [ceph-mon-01]
changed: [ceph-mon-03]
changed: [ceph-osd-01]
changed: [ceph-osd-03]

TASK [Create sudo file] ********************************************************************************************************************************
changed: [ceph-mon-02]
changed: [ceph-osd-02]
changed: [ceph-mon-01]
changed: [ceph-osd-01]
changed: [ceph-mon-03]
changed: [ceph-osd-03]

TASK [Give ceph admin user passwordless sudo] **********************************************************************************************************
changed: [ceph-mon-02]
changed: [ceph-mon-01]
changed: [ceph-osd-02]
changed: [ceph-osd-01]
changed: [ceph-mon-03]
changed: [ceph-osd-03]

TASK [Set authorized key taken from file to ceph admin] ************************************************************************************************
changed: [ceph-mon-01]
changed: [ceph-osd-01]
changed: [ceph-mon-03]
changed: [ceph-osd-02]
changed: [ceph-mon-02]
changed: [ceph-osd-03]

TASK [Set authorized key taken from file to root user] *************************************************************************************************
changed: [ceph-mon-01]
changed: [ceph-mon-02]
changed: [ceph-mon-03]
changed: [ceph-osd-01]
changed: [ceph-osd-02]
changed: [ceph-osd-03]

TASK [Install Docker] **********************************************************************************************************************************
changed: [ceph-mon-01]
changed: [ceph-mon-02]
changed: [ceph-osd-02]
changed: [ceph-osd-01]
changed: [ceph-mon-03]
changed: [ceph-osd-03]

TASK [Reboot server after update and configs] **********************************************************************************************************
changed: [ceph-osd-01]
changed: [ceph-mon-02]
changed: [ceph-osd-02]
changed: [ceph-mon-01]
changed: [ceph-mon-03]
changed: [ceph-osd-03]

PLAY RECAP *********************************************************************************************************************************************
ceph-mon-01                : ok=9    changed=8    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
ceph-mon-02                : ok=9    changed=8    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
ceph-mon-03                : ok=9    changed=8    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
ceph-osd-01                : ok=9    changed=8    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
ceph-osd-02                : ok=9    changed=8    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
ceph-osd-03                : ok=9    changed=8    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
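
As an optional ad-hoc check that the playbook did what we expect, you can confirm Docker is now present on all nodes (use the same authentication options you used for the playbook):

ansible -i hosts ceph_nodes -m command -a "docker --version" --user root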

Test SSH access to a node as the Ceph admin user created by the playbook:

$ ssh cephadmin@ceph-mon-02
Warning: Permanently added 'ceph-mon-02,172.16.20.11' (ECDSA) to the list of known hosts.
Welcome to Ubuntu 20.04 LTS (GNU/Linux 5.4.0-28-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage


The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.

cephadmin@ceph-mon-02:~$ sudo su -
root@ceph-mon-02:~# logout
cephadmin@ceph-mon-02:~$ exit
logout
Connection to ceph-mon-01 closed.

Configure /etc/hosts

If you have not configured valid DNS for the hostnames, update the /etc/hosts file on all cluster servers.

Here is the playbook to use; modify it if your hostnames or IP addresses differ:

$ vim update-hosts.yml
---
- name: Prepare ceph nodes
  hosts: ceph_nodes
  become: yes
  become_method: sudo
  tasks:
    - name: Clean /etc/hosts file
      copy:
        content: ""
        dest: /etc/hosts

    - name: Update /etc/hosts file
      blockinfile:
        path: /etc/hosts
        block: |
           127.0.0.1     localhost
           172.16.20.10  ceph-mon-01
           172.16.20.11  ceph-mon-02
           172.16.20.12  ceph-mon-03
           172.16.20.13  ceph-osd-01
           172.16.20.14  ceph-osd-02
           172.16.20.15  ceph-osd-03

Run the playbook:

$ ansible-playbook -i hosts update-hosts.yml --private-key ~/.ssh/id_rsa_jmutai 

PLAY [Prepare ceph nodes] ******************************************************************************************************************************

TASK [Gathering Facts] *********************************************************************************************************************************
ok: [ceph-mon-01]
ok: [ceph-osd-02]
ok: [ceph-mon-03]
ok: [ceph-mon-02]
ok: [ceph-osd-01]
ok: [ceph-osd-03]

TASK [Clean /etc/hosts file] ***************************************************************************************************************************
changed: [ceph-mon-02]
changed: [ceph-mon-01]
changed: [ceph-osd-01]
changed: [ceph-osd-02]
changed: [ceph-mon-03]
changed: [ceph-osd-03]

TASK [Update /etc/hosts file] **************************************************************************************************************************
changed: [ceph-mon-02]
changed: [ceph-mon-01]
changed: [ceph-osd-01]
changed: [ceph-osd-02]
changed: [ceph-mon-03]
changed: [ceph-osd-03]

PLAY RECAP *********************************************************************************************************************************************
ceph-mon-01                : ok=3    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
ceph-mon-02                : ok=3    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
ceph-mon-03                : ok=3    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
ceph-osd-01                : ok=3    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
ceph-osd-02                : ok=3    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
ceph-osd-03                : ok=3    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   

Confirm the change on one of the nodes:

$ ssh cephadmin@ceph-osd-01
$ cat /etc/hosts
# BEGIN ANSIBLE MANAGED BLOCK
127.0.0.1      localhost
172.16.20.10   ceph-mon-01
172.16.20.11   ceph-mon-02
172.16.20.12   ceph-mon-03
172.16.20.13   ceph-osd-01
172.16.20.14   ceph-osd-02
172.16.20.15   ceph-osd-03
# END ANSIBLE MANAGED BLOCK

Step 3: Deploy Ceph 15 (Octopus) storage cluster on Ubuntu 20.04

To bootstrap a new Ceph cluster on Ubuntu 20.04, you need the address of the first monitor node, either its IP or hostname. You also set the initial dashboard admin credentials here; replace the example password below with your own.

sudo mkdir -p /etc/ceph
sudo cephadm bootstrap \
  --mon-ip ceph-mon-01 \
  --initial-dashboard-user admin \
  --initial-dashboard-password Str0ngAdminP@ssw0rd

Execution output:

INFO:cephadm:Verifying podman|docker is present...
INFO:cephadm:Verifying lvm2 is present...
INFO:cephadm:Verifying time synchronization is in place...
INFO:cephadm:Unit chrony.service is enabled and running
INFO:cephadm:Repeating the final host check...
INFO:cephadm:podman|docker (/usr/bin/docker) is present
INFO:cephadm:systemctl is present
INFO:cephadm:lvcreate is present
INFO:cephadm:Unit chrony.service is enabled and running
INFO:cephadm:Host looks OK
INFO:root:Cluster fsid: 8dbf2eda-a513-11ea-a3c1-a534e03850ee
INFO:cephadm:Verifying IP 172.16.20.10 port 3300 ...
INFO:cephadm:Verifying IP 172.16.20.10 port 6789 ...
INFO:cephadm:Mon IP 172.16.20.10 is in CIDR network 172.31.1.1
INFO:cephadm:Pulling latest docker.io/ceph/ceph:v15 container...
INFO:cephadm:Extracting ceph user uid/gid from container image...
INFO:cephadm:Creating initial keys...
INFO:cephadm:Creating initial monmap...
INFO:cephadm:Creating mon...
INFO:cephadm:Waiting for mon to start...
INFO:cephadm:Waiting for mon...
INFO:cephadm:mon is available
INFO:cephadm:Assimilating anything we can from ceph.conf...
INFO:cephadm:Generating new minimal ceph.conf...
INFO:cephadm:Restarting the monitor...
INFO:cephadm:Setting mon public_network...
INFO:cephadm:Creating mgr...
INFO:cephadm:Wrote keyring to /etc/ceph/ceph.client.admin.keyring
INFO:cephadm:Wrote config to /etc/ceph/ceph.conf
INFO:cephadm:Waiting for mgr to start...
INFO:cephadm:Waiting for mgr...
INFO:cephadm:mgr not available, waiting (1/10)...
INFO:cephadm:mgr not available, waiting (2/10)...
INFO:cephadm:mgr not available, waiting (3/10)...
INFO:cephadm:mgr not available, waiting (4/10)...
INFO:cephadm:mgr is available
INFO:cephadm:Enabling cephadm module...
INFO:cephadm:Waiting for the mgr to restart...
INFO:cephadm:Waiting for Mgr epoch 5...
INFO:cephadm:Mgr epoch 5 is available
INFO:cephadm:Setting orchestrator backend to cephadm...
INFO:cephadm:Generating ssh key...
INFO:cephadm:Wrote public SSH key to to /etc/ceph/ceph.pub
INFO:cephadm:Adding key to root@localhost's authorized_keys...
INFO:cephadm:Adding host ceph-mon-01...
INFO:cephadm:Deploying mon service with default placement...
INFO:cephadm:Deploying mgr service with default placement...
INFO:cephadm:Deploying crash service with default placement...
INFO:cephadm:Enabling mgr prometheus module...
INFO:cephadm:Deploying prometheus service with default placement...
INFO:cephadm:Deploying grafana service with default placement...
INFO:cephadm:Deploying node-exporter service with default placement...
INFO:cephadm:Deploying alertmanager service with default placement...
INFO:cephadm:Enabling the dashboard module...
INFO:cephadm:Waiting for the mgr to restart...
INFO:cephadm:Waiting for Mgr epoch 13...
INFO:cephadm:Mgr epoch 13 is available
INFO:cephadm:Generating a dashboard self-signed certificate...
INFO:cephadm:Creating initial admin user...
INFO:cephadm:Fetching dashboard port number...
INFO:cephadm:Ceph Dashboard is now available at:

	     URL: https://ceph-mon-01:8443/
	    User: admin
	Password: Str0ngAdminP@ssw0rd

INFO:cephadm:You can access the Ceph CLI with:

	sudo /usr/local/bin/cephadm shell --fsid 8dbf2eda-a513-11ea-a3c1-a534e03850ee -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring

INFO:cephadm:Please consider enabling telemetry to help improve Ceph:

	ceph telemetry on

For more information see:

	https://docs.ceph.com/docs/master/mgr/telemetry/

INFO:cephadm:Bootstrap complete.

Install the Ceph CLI tools (ceph-common package):

cephadm add-repo --release octopus
cephadm install ceph-common
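
With ceph-common installed, the ceph CLI on this node can talk to the new cluster using the configuration and admin keyring that the bootstrap wrote to /etc/ceph. A quick check:

sudo ceph -v
sudo ceph -s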

Add the remaining Monitor nodes to the cluster:

--- Copy Ceph SSH key ---
ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph-mon-02
ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph-mon-03

--- Add nodes to the cluster ---
ceph orch host add ceph-mon-02
ceph orch host add ceph-mon-03

--- Label the nodes with mon ---
ceph orch host label add ceph-mon-01 mon
ceph orch host label add ceph-mon-02 mon
ceph orch host label add ceph-mon-03 mon

--- Apply mon placement so monitors run on all three nodes ---
ceph orch apply mon "ceph-mon-01,ceph-mon-02,ceph-mon-03"
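
It can take a minute or two for the new monitor daemons to be deployed. You can watch their status with:

ceph orch ls mon
ceph -s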

View the list of hosts and labels:

# ceph orch host ls

HOST         ADDR         LABELS  STATUS  
ceph-mon-01  ceph-mon-01  mon             
ceph-mon-02  ceph-mon-02  mon             
ceph-mon-03  ceph-mon-03  mon

Running containers on the first monitor node:

# docker ps
CONTAINER ID        IMAGE                      COMMAND                  CREATED             STATUS              PORTS               NAMES
7d666ae63232        prom/alertmanager          "/bin/alertmanager -…"   3 minutes ago       Up 3 minutes                            ceph-8dbf2eda-a513-11ea-a3c1-a534e03850ee-alertmanager.ceph-mon-01
4e7ccde697c7        prom/prometheus:latest     "/bin/prometheus --c…"   3 minutes ago       Up 3 minutes                            ceph-8dbf2eda-a513-11ea-a3c1-a534e03850ee-prometheus.ceph-mon-01
9fe169a3f2dc        ceph/ceph-grafana:latest   "/bin/sh -c 'grafana…"   8 minutes ago       Up 8 minutes                            ceph-8dbf2eda-a513-11ea-a3c1-a534e03850ee-grafana.ceph-mon-01
c8e99deb55a4        prom/node-exporter         "/bin/node_exporter …"   8 minutes ago       Up 8 minutes                            ceph-8dbf2eda-a513-11ea-a3c1-a534e03850ee-node-exporter.ceph-mon-01
277f0ef7dd9d        ceph/ceph:v15              "/usr/bin/ceph-crash…"   9 minutes ago       Up 9 minutes                            ceph-8dbf2eda-a513-11ea-a3c1-a534e03850ee-crash.ceph-mon-01
9de7a86857aa        ceph/ceph:v15              "/usr/bin/ceph-mgr -…"   10 minutes ago      Up 10 minutes                           ceph-8dbf2eda-a513-11ea-a3c1-a534e03850ee-mgr.ceph-mon-01.qhokxo
d116bc14109c        ceph/ceph:v15              "/usr/bin/ceph-mon -…"   10 minutes ago      Up 10 minutes                           ceph-8dbf2eda-a513-11ea-a3c1-a534e03850ee-mon.ceph-mon-01
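
Instead of checking containers host by host with docker ps, you can also list all Ceph daemons across the cluster from the orchestrator:

ceph orch ps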

Step 4: Deploy Ceph OSD

Install the cluster's public SSH key in the root user's authorized_keys file on each new OSD node:

ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph-osd-01
ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph-osd-02
ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph-osd-03

Tell Ceph that the new nodes are part of the cluster:

--- Add hosts to cluster ---
ceph orch host add ceph-osd-01
ceph orch host add ceph-osd-02
ceph orch host add ceph-osd-03

--- Give new nodes labels ---

ceph orch host label add  ceph-osd-01 osd
ceph orch host label add  ceph-osd-02 osd
ceph orch host label add  ceph-osd-03 osd

View all storage devices on the cluster nodes:

# ceph orch device ls
HOST         PATH      TYPE   SIZE  DEVICE                           AVAIL  REJECT REASONS  
ceph-mon-01  /dev/sda  hdd   76.2G  QEMU_HARDDISK_drive-scsi0-0-0-0  False  locked          
ceph-mon-02  /dev/sda  hdd   76.2G  QEMU_HARDDISK_drive-scsi0-0-0-0  False  locked          
ceph-mon-03  /dev/sda  hdd   76.2G  QEMU_HARDDISK_drive-scsi0-0-0-0  False  locked          
ceph-osd-01  /dev/sdb  hdd   50.0G  HC_Volume_5680482                True                   
ceph-osd-01  /dev/sda  hdd   76.2G  QEMU_HARDDISK_drive-scsi0-0-0-0  False  locked          
ceph-osd-02  /dev/sdb  hdd   50.0G  HC_Volume_5680484                True                   
ceph-osd-02  /dev/sda  hdd   76.2G  QEMU_HARDDISK_drive-scsi0-0-0-0  False  locked          
ceph-osd-03  /dev/sdb  hdd   50.0G  HC_Volume_5680483                True                   
ceph-osd-03  /dev/sda  hdd   76.2G  QEMU_HARDDISK_drive-scsi0-0-0-0  False  locked    

A storage device is considered available if all of the following conditions are met:

  • The device must not have partitions.
  • The device must not have any LVM state.
  • The device must not be mounted.
  • The device must not contain a file system.
  • The device must not contain a Ceph BlueStore OSD.
  • The device must be larger than 5 GB.
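
If a device is rejected because it still holds old partitions, LVM metadata, or a previous Ceph deployment, you can wipe it first. This is destructive to all data on the device; the host and device below are just an example from the table above:

ceph orch device zap ceph-osd-01 /dev/sdb --force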

Tell Ceph to create OSDs on the available devices:

# ceph orch daemon add osd ceph-osd-01:/dev/sdb
Created osd(s) 0 on host 'ceph-osd-01'

# ceph orch daemon add osd ceph-osd-02:/dev/sdb
Created osd(s) 1 on host 'ceph-osd-02'

# ceph orch daemon add osd ceph-osd-03:/dev/sdb
Created osd(s) 1 on host 'ceph-osd-03'
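
Alternatively, instead of adding each device by hand, you can tell cephadm to automatically create OSDs on every device that is available and unused (including devices added later):

ceph orch apply osd --all-available-devices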

Check the ceph status:

# ceph -s
  cluster:
    id:     8dbf2eda-a513-11ea-a3c1-a534e03850ee
    health: HEALTH_OK
 
  services:
    mon: 1 daemons, quorum ceph-mon-01 (age 23m)
    mgr: ceph-mon-01.qhokxo(active, since 22m), standbys: ceph-mon-03.rhhvzc
    osd: 3 osds: 3 up (since 36s), 3 in (since 36s)
 
  data:
    pools:   1 pools, 1 pgs
    objects: 1 objects, 0 B
    usage:   3.0 GiB used, 147 GiB / 150 GiB avail
    pgs:     1 active+clean

Step 5: Access the Ceph dashboard

The Ceph dashboard is now available at the address of the server running the active MGR daemon, which you can identify from the cluster status:

# ceph -s

For this setup, it is:

URL: https://ceph-mon-01:8443/
User: admin
Password: Str0ngAdminP@ssw0rd
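
If you are unsure which MGR is active or which port the dashboard is listening on, the manager can report its service endpoints; the dashboard entry in the output is the URL to use:

sudo ceph mgr services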

Log in with these credentials to access the Ceph management dashboard.


That is how you can use Cephadm and containers to run and manage a Ceph Storage Cluster on Ubuntu 20.04. Our next articles will cover adding more OSDs, removing them, configuring RGW, and more, so stay tuned for updates.
