Goal: replace the active CephFS pools without data loss and with minimal downtime. This might be necessary, for example, if one has to reduce the number of placement groups used for CephFS.
Is that goal achievable? If yes, how can it be achieved?
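For reference, the pools and PG counts involved can be checked like this; the pool names cephfs_data and cephfs_metadata match the ones used in the rest of this post:

ceph fs ls                                # which pools back the filesystem
ceph osd pool get cephfs_data pg_num      # current PG count of the data pool
ceph osd pool get cephfs_metadata pg_num  # current PG count of the metadata pool
ceph df                                   # per-pool usage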
The clue to fast data movement is here, in Sébastien's article. Still, it does not make it possible to preserve the data:
- Removing or renaming the old pool is blocked by CephFS (illustrated below).
- The ceph fs new command produces a clean, empty filesystem.
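To illustrate the first point, this is roughly what a direct delete attempt against the in-use pool returns (the exact wording differs between Ceph releases):

ceph osd pool delete cephfs_data cephfs_data --yes-i-really-really-mean-it
# refused with something like: Error EBUSY: pool 'cephfs_data' is in use by CephFS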
To illustrate the fast but wrong way:
systemctl stop 'ceph-mds*.service' (tried on a Debian cluster)
ceph fs rm cephfs --yes-i-really-mean-it
ceph osd pool create cephfs_data_ 64 64 <-- Instead of 128
ceph osd pool create cephfs_metadata_ 64 64 <-- Instead of 128
rados cppool cephfs_data cephfs_data_
rados cppool cephfs_metadata cephfs_metadata_
ceph osd pool delete cephfs_data cephfs_data --yes-i-really-really-mean-it
ceph osd pool delete cephfs_metadata cephfs_metadata --yes-i-really-really-mean-it
ceph osd pool rename cephfs_data_ cephfs_data
ceph osd pool rename cephfs_metadata_ cephfs_metadata
ceph fs new cephfs cephfs_metadata cephfs_data <-- Data loss is here
systemctl start 'ceph-mds*.service'
mount /mnt/cephfs/
mkdir /mnt/cephfs/NFS
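A quick check after the sequence above (using the same /mnt/cephfs mount point) shows the problem: the objects copied with rados cppool are still sitting in the renamed pools, but the freshly created filesystem does not see them:

ls -la /mnt/cephfs   # empty apart from the just-created NFS directory
rados df             # the copied objects still occupy cephfs_data / cephfs_metadata
ceph fs ls           # cephfs now points at the renamed (copied) pools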
There should be something better than backup/restore, or than creating another CephFS and doing a full copy.
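For completeness, the backup/restore fallback mentioned above would look roughly like this (the /backup/cephfs staging path is hypothetical, and it needs enough space and time for a full copy, which is exactly what I would like to avoid):

rsync -aHX /mnt/cephfs/ /backup/cephfs/   # full copy out while the old pools are still live
# ...recreate the pools with the desired pg_num and run ceph fs new as above...
rsync -aHX /backup/cephfs/ /mnt/cephfs/   # full copy back into the fresh, empty filesystem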