Create a pool in a Ceph storage cluster

Ceph Storage is a free, open source, software-defined distributed storage solution designed to scale massively for modern data analytics, artificial intelligence (AI), machine learning (ML), and emerging mission-critical workloads. In this article, we will discuss how to create a Ceph pool with a custom number of placement groups (PGs).

In Ceph terms, placement groups (PGs) are shards or fragments of a logical object pool that place objects as a group into OSDs. Placement groups reduce the amount of per-object metadata when Ceph stores data in the OSDs.

A larger number of placement groups (for example, 100 per OSD) leads to better balancing. The Ceph client calculates which placement group an object belongs to by hashing the object ID and applying an operation based on the number of PGs in the pool and the pool ID. See Mapping PGs to OSDs for more information.
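For reference, you can see how a specific object maps to a PG and its OSDs on a running cluster with ceph osd map. The pool and object names below are placeholders for illustration:

# Show which PG and which OSDs a given object would map to.
# "test-pool" and "test-object" are example names; substitute your own.
$ sudo ceph osd map test-pool test-object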

Calculate the total number of placement groups.

             (OSDs * 100)
Total PGs =  ------------
              pool size

For example, suppose your cluster has 9 OSDs and the default pool size (replica count) is 3. Your total PG count will be:

             9 * 100
Total PGs =  ------------ = 300
              3
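If you want to script this calculation, here is a minimal shell sketch using the assumed values from the example above (9 OSDs, pool size of 3):

# Minimal sketch of the PG calculation above (values assumed from the example).
OSDS=9        # number of OSDs in the cluster
POOL_SIZE=3   # replica count (pool size)
echo $(( OSDS * 100 / POOL_SIZE ))   # prints 300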

Create a pool

The syntax for creating a pool is:

ceph osd pool create {pool-name} {pg-num}

where:

  • {pool-name} – The name of the pool. It must be unique.
  • {pg-num} – The total number of placement groups for the pool.

I will create a new pool named k8s-uat with 100 placement groups:

$ sudo ceph osd pool create k8s-uat 100
pool 'k8s-uat' created
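To confirm the placement group count and replication size of the new pool, you can query its settings with ceph osd pool get:

$ sudo ceph osd pool get k8s-uat pg_num   # should report the pg_num set at creation
$ sudo ceph osd pool get k8s-uat size     # shows the replica count of the pool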

List the available pools to confirm the pool was created:

$ sudo ceph osd lspools
1 .rgw.root
2 default.rgw.control
3 default.rgw.meta
4 default.rgw.log
5 k8s-uat

Associating a pool to an application

A pool needs to be associated with an application before it can be used. Pools that will be used with CephFS, and pools created automatically by RGW, are associated automatically.

--- Ceph Filesystem ---
$ sudo ceph osd pool application enable {pool-name} cephfs

--- Ceph Block Device ---
$ sudo ceph osd pool application enable {pool-name} rbd

--- Ceph Object Gateway ---
$ sudo ceph osd pool application enable {pool-name} rgw

Example:

$ sudo ceph osd pool application enable k8s-uat-rbd rbd
enabled application 'rbd' on pool 'k8s-uat-rbd'
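To check which applications are currently enabled on a pool, you can query it (shown here against the k8s-uat-rbd pool from the example above):

$ sudo ceph osd pool application get k8s-uat-rbd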

Pools intended for use with RBD should be initialized with the rbd tool:

sudo rbd pool init k8s-uat-rbd
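As a quick sanity check, you can create and list a test image in the initialized pool. The image name test-image below is just an example:

# Create a 1 GiB test image in the pool, then list and inspect it.
$ sudo rbd create --size 1024 k8s-uat-rbd/test-image
$ sudo rbd ls k8s-uat-rbd
$ sudo rbd info k8s-uat-rbd/test-image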

To disable an application on a pool, use:

ceph osd pool application disable {pool-name} {application} {--yes-i-really-mean-it}

To get I/O information for a specific pool or for all pools, execute:

$ sudo ceph osd pool stats [{pool-name}]
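For per-pool capacity and usage, ceph df and rados df are also useful:

$ sudo ceph df detail    # cluster-wide and per-pool capacity/usage
$ sudo rados df          # per-pool object and I/O statistics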

Create a pool from the Ceph Dashboard

Log in to your Ceph management dashboard and create a new pool: Pools > Create.


Delete pool

To delete the pool:

sudo ceph osd pool delete {pool-name} [{pool-name} --yes-i-really-really-mean-it]
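Note that on recent Ceph releases, pool deletion is disabled by default as a safety measure, so you typically need to allow it in the monitor configuration before the delete command will succeed. A sketch using the k8s-uat pool from this article:

# Temporarily allow pool deletion, delete the pool, then re-enable the guard.
$ sudo ceph config set mon mon_allow_pool_delete true
$ sudo ceph osd pool delete k8s-uat k8s-uat --yes-i-really-really-mean-it
$ sudo ceph config set mon mon_allow_pool_delete false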

More articles on Ceph will be published in the coming weeks. Stay tuned.
