CephFS, pool replacement without data loss

Goal: replace the active CephFS pools without data loss and with minimal downtime. This might be necessary, for example, if one has to reduce the number of placement groups used by CephFS.
Is that goal achievable? If so, how?

The key to fast data movement is in Sébastien's article. Still, as far as preserving the data goes, the outlook is negative:

  • Removing or renaming the old pools is blocked while CephFS is still using them.
  • The ceph fs new command produces a clean (empty) file system.
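For instance, attempting to delete a pool that is still attached to a file system is refused by the monitor (the exact wording of the error may vary between Ceph releases):

```shell
# Deleting a data pool that is still attached to a CephFS file
# system is rejected; the monitor reports the pool as in use.
ceph osd pool delete cephfs_data cephfs_data --yes-i-really-really-mean-it
# Typical refusal: Error EBUSY: pool 'cephfs_data' is in use by CephFS
```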

To illustrate the fast, but wrong, way:

systemctl stop ceph-mds\*.service                                 (Tried on a Debian cluster)
ceph fs rm cephfs --yes-i-really-mean-it
ceph osd pool create cephfs_data_ 64 64                           <-- Instead of 128
ceph osd pool create cephfs_metadata_ 64 64                       <-- Instead of 128
rados cppool cephfs_data cephfs_data_
rados cppool cephfs_metadata cephfs_metadata_
ceph osd pool delete cephfs_data cephfs_data --yes-i-really-really-mean-it
ceph osd pool delete cephfs_metadata cephfs_metadata --yes-i-really-really-mean-it
ceph osd pool rename cephfs_data_ cephfs_data
ceph osd pool rename cephfs_metadata_ cephfs_metadata
ceph fs new cephfs cephfs_metadata cephfs_data                    <-- Data loss is here
systemctl start ceph-mds\*.service
mount /mnt/cephfs/
mkdir /mnt/cephfs/NFS
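Before deleting the original pools, it is worth at least confirming that the copies are complete. A minimal sketch (assuming the pool names from the listing above) that compares object counts between each original pool and its copy:

```shell
# Compare object counts between each original pool and its "_" copy
# before destroying anything; the counts should match once
# "rados cppool" has finished.
for pool in cephfs_data cephfs_metadata; do
    orig=$(rados -p "$pool" ls | wc -l)
    copy=$(rados -p "${pool}_" ls | wc -l)
    echo "$pool: $orig objects, ${pool}_: $copy objects"
done
```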

There should be something better than a full backup/restore, or creating a second CephFS and copying everything into it.
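One direction worth exploring (a sketch, not verified on this cluster): instead of replacing the data pool underneath the file system, attach a new data pool to the existing CephFS and migrate files into it via file layouts. The metadata pool cannot be swapped this way, but the data pool, which usually carries the bulk of the placement groups, can. The pool name cephfs_data_new below is illustrative:

```shell
# Create the replacement data pool with the desired PG count and
# attach it to the existing file system as an additional data pool.
ceph osd pool create cephfs_data_new 64 64
ceph fs add_data_pool cephfs cephfs_data_new

# New files written under a directory carrying this layout attribute
# are placed in the new pool; existing files must be rewritten
# (e.g. copied) for their objects to actually move.
setfattr -n ceph.dir.layout.pool -v cephfs_data_new /mnt/cephfs
```

Once every file has been rewritten and the old pool holds no more file data, it could be detached with ceph fs rm_data_pool and then deleted, all without recreating the file system.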

