Ceph & Crowbar

Hi Folks,

We have a SUSE Cloud 4 deployment at a customer site. In the initial deployment we configured Ceph with the "first available disk" setting in Crowbar. A few weeks later we added new disks to the cluster manually as OSDs, and now we would like to change that setting in Crowbar, but I don't know whether we can do so without losing data on the manually added OSDs. Could anyone give us some feedback on this?

Regards,
Luis

It shouldn't trash any existing OSDs: the way the barclamp works is to deploy OSDs on unclaimed disks. So if you initially deployed in "first available" mode, the first disk of each storage node is taken for an OSD. If you later switch to "all available", it will take any remaining unclaimed disks on all storage nodes and turn them into OSDs too. The first disk, being already in use (and thus already claimed), should remain untouched.
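
If you want to be extra careful, something like the rough Python sketch below can record the current OSD ids and their status. It is just illustrative, not part of Crowbar or Ceph, and it assumes the ceph CLI is configured on the node you run it from. Save its output before switching the setting and reapplying the proposal, then run it again afterwards and compare the two to confirm the original and manually created OSDs are all still up and in.

#!/usr/bin/env python
# Rough sketch (not an official Crowbar or Ceph tool): list the current OSD
# ids with their status and reweight so you can compare before and after
# reapplying the barclamp proposal. Assumes the `ceph` CLI works on this node.
import json
import subprocess

def current_osds():
    # `ceph osd tree --format json` returns a "nodes" list; entries with
    # type == "osd" carry the OSD id, up/down status and reweight.
    out = subprocess.check_output(["ceph", "osd", "tree", "--format", "json"])
    tree = json.loads(out.decode("utf-8"))
    return {n["id"]: (n.get("status"), n.get("reweight"))
            for n in tree["nodes"] if n["type"] == "osd"}

if __name__ == "__main__":
    osds = current_osds()
    print("found %d OSDs" % len(osds))
    for osd_id in sorted(osds):
        status, reweight = osds[osd_id]
        print("  osd.%s  status=%s  reweight=%s" % (osd_id, status, reweight))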