Creating, maintaining, and removing Ceph storage pools.
This example shows the creation and mounting of an rbd device on ncn-m001.
NOTE: The commands to create and delete pools or rbd devices must be run from a master node or one of the first three storage nodes (ncn-s001, ncn-s002, or ncn-s003).
The example below creates a storage pool named csm-release. The pool name can be changed to better reflect any use cases outside of support for upgrades.
The 3 3 arguments (the pg_num and pgp_num placement group values for the pool) can be left unchanged. For more information on their meaning and possible alternative values, see the Ceph product documentation.
(ncn-ms#) Create the storage pool.
ceph osd pool create csm-release 3 3
Output:
pool 'csm-release' created
(ncn-ms#) Enable rbd on the new pool.
ceph osd pool application enable csm-release rbd
Example output:
enabled application 'rbd' on pool 'csm-release'
(ncn-ms#) Set a quota on the new pool.
ceph osd pool set-quota csm-release max_bytes 500G
Example output:
set-quota max_bytes = 536870912000 for pool csm-release
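The 536870912000 figure in the output is simply 500G interpreted in binary (GiB) units. A quick sanity check with plain shell arithmetic, no Ceph required:

```shell
# 500G as passed to set-quota means 500 GiB, i.e. 500 * 1024^3 bytes.
echo $((500 * 1024 * 1024 * 1024))
# prints 536870912000, matching the set-quota output above
```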
(ncn-ms#) View the quotas on the new pool.
ceph osd pool get-quota csm-release
Example output:
quotas for pool 'csm-release':
max objects: N/A
max bytes : 500 GiB (current num bytes: 0 bytes)
Create the rbd device

IMPORTANT: The commands to create and delete rbd devices require proper access and must be run from a master node or one of the first three storage nodes (ncn-s001, ncn-s002, or ncn-s003).

(ncn-ms#) Create the rbd device.
rbd create -p csm-release release_version --size 100G
This command gives no output when successful.
(ncn-ms#) Map the device.
rbd map -p csm-release release_version
Example output:
/dev/rbd0
(ncn-ms#) Show mapped rbd devices.
rbd showmapped
Example output:
id pool namespace image snap device
0 csm-release release_version - /dev/rbd0
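When scripting against this output, the device column can be extracted with awk. A minimal sketch, using a hard-coded copy of the sample output above so that it runs without a cluster; in practice, pipe rbd showmapped in directly:

```shell
# Sample `rbd showmapped` output from above, hard-coded for illustration.
showmapped='id  pool         namespace  image            snap  device
0   csm-release             release_version  -     /dev/rbd0'

# Print the device for the csm-release/release_version image.
# The namespace column is empty here, so awk sees: id pool image snap device.
echo "$showmapped" | awk '$2 == "csm-release" && $3 == "release_version" {print $NF}'
# prints /dev/rbd0
```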
IMPORTANT NOTE: Do not perform the following steps on rbd devices that were mapped via the Ceph provisioner. The rbd device name from the previous step (/dev/rbd0 in this example) is used in the following steps.

Format and mount the rbd device

(ncn#) Format the device with a file system.
mkfs.ext4 /dev/rbd0
Example output:
mke2fs 1.43.8 (1-Jan-2018)
Discarding device blocks: done
Creating filesystem with 26214400 4k blocks and 6553600 inodes
Filesystem UUID: d5fe6df4-a0ab-49bc-8d49-9cc62700915d
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872
Allocating group tables: done
Writing inode tables: done
Creating journal (131072 blocks): done
Writing superblocks and filesystem accounting information: done
(ncn#) Create a directory for the mount point.
mkdir -pv /etc/cray/csm/csm-release
The output from this command will vary depending on whether or not the directory already exists. Example output:
mkdir: created directory '/etc/cray/csm'
mkdir: created directory '/etc/cray/csm/csm-release'
(ncn#) Mount the rbd device.
mount /dev/rbd0 /etc/cray/csm/csm-release/
This command gives no output when successful.
(ncn#) Validate the mount.
mountpoint /etc/cray/csm/csm-release/
Example output:
/etc/cray/csm/csm-release/ is a mountpoint
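The map, format, and mount steps above can be sketched as a single script. This is a sketch, not the documented procedure: the DRY_RUN flag and run helper are illustration-only constructs that print each command instead of executing it, and the script assumes the image maps to /dev/rbd0 as in the example output.

```shell
#!/bin/sh
# Dry-run sketch of the map/format/mount steps above.
# Set DRY_RUN=0 to execute for real (requires Ceph access on a suitable NCN).
DRY_RUN=1
run() { if [ "${DRY_RUN:-1}" = "1" ]; then echo "+ $*"; else "$@"; fi; }

POOL=csm-release
IMAGE=release_version
MOUNT_POINT=/etc/cray/csm/csm-release

run rbd map -p "$POOL" "$IMAGE"
# Formatting destroys existing data; only do this on first use of the image.
run mkfs.ext4 /dev/rbd0
run mkdir -pv "$MOUNT_POINT"
run mount /dev/rbd0 "$MOUNT_POINT"
run mountpoint "$MOUNT_POINT"
```

With DRY_RUN=1 the script only prints the command sequence, which makes it safe to review before running on a node with Ceph access.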
Move the rbd device to another node

(ncn#) Unmap the device on the node where it is currently mapped.
Unmount the rbd device.
umount /etc/cray/csm/csm-release
Unmap the rbd device.
rbd unmap -p csm-release release_version
Show the rbd mappings to verify that it has been removed.
rbd showmapped
NOTE: There should be no output from the above unless other rbd devices are mapped on the node.
(ncn#) Map and mount the device on the destination node (that is, the node to which the rbd device is being moved).
Map the rbd device.
rbd map -p csm-release release_version
Example output:
/dev/rbd0
Show the rbd mappings.
rbd showmapped
Example output:
id pool namespace image snap device
0 csm-release release_version - /dev/rbd0
Create the mount point directory, if it does not already exist.
mkdir -pv /etc/cray/csm/csm-release
The output from this command will vary depending on whether or not the directory already exists.
Mount the rbd device over the mount point.
mount /dev/rbd0 /etc/cray/csm/csm-release
This command gives no output when successful.
Validate the mount.
mountpoint /etc/cray/csm/csm-release/
Example output:
/etc/cray/csm/csm-release/ is a mountpoint
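The move can be sketched with the same dry-run pattern (the run helper is an illustration-only construct; the pool, image, and mount point names are the ones used throughout this page). The first two commands belong on the source node, the rest on the destination node:

```shell
#!/bin/sh
# Dry-run sketch of moving the rbd device between nodes.
# Set DRY_RUN=0 to execute for real.
DRY_RUN=1
run() { if [ "${DRY_RUN:-1}" = "1" ]; then echo "+ $*"; else "$@"; fi; }

POOL=csm-release
IMAGE=release_version
MOUNT_POINT=/etc/cray/csm/csm-release

# On the node where the device is currently mapped:
run umount "$MOUNT_POINT"
run rbd unmap -p "$POOL" "$IMAGE"

# On the destination node (assumes the image maps to /dev/rbd0 there too):
run rbd map -p "$POOL" "$IMAGE"
run mkdir -pv "$MOUNT_POINT"
run mount /dev/rbd0 "$MOUNT_POINT"
```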
Remove the rbd device

(ncn#) Unmount the rbd device.
umount /etc/cray/csm/csm-release
(ncn#) Unmap the rbd device.
rbd unmap -p csm-release release_version
(ncn#) Show the rbd mappings to verify that it has been removed.
rbd showmapped
NOTE: There should be no output from the above unless other rbd devices are mapped on the node.
(ncn#) Remove the rbd device.
rbd remove csm-release/release_version
Output:
Removing image: 100% complete...done.
Remove the storage pool

CRITICAL NOTE: This will permanently delete data.
(ncn-ms#) Check to see if the cluster is allowing pool deletion.
ceph config get mon mon_allow_pool_delete
Example output:
true
If the above command shows false, then enable it using the following command:
ceph config set mon mon_allow_pool_delete true
(ncn-ms#) Remove the pool.
ceph osd pool rm csm-release csm-release --yes-i-really-really-mean-it
Example output:
pool 'csm-release' removed
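The check-then-remove sequence above can be combined into one guard. This is a sketch only: the ceph function below is an illustration-only stub standing in for the real CLI so the logic runs without a cluster, and the removal command is echoed rather than executed.

```shell
#!/bin/sh
# Illustration-only stub for the ceph CLI; on a real NCN, delete this stub
# so the actual `ceph` command is used.
ceph() { echo "true"; }

# Only attempt pool removal when the cluster allows it; otherwise report
# the command needed to enable it first.
if [ "$(ceph config get mon mon_allow_pool_delete)" = "true" ]; then
    echo "+ ceph osd pool rm csm-release csm-release --yes-i-really-really-mean-it"
else
    echo "pool deletion is disabled; enable it with: ceph config set mon mon_allow_pool_delete true"
fi
```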