Stage 1: Change Number of OSD Disks

I am setting up Ceph on my cluster with 5 OSD nodes that have 9 storage drives each. When I run Stage 1, Salt populates the profile-default folder with only 5 OSD drives in each of the .yml files. When I manually fill in the missing OSD drives in the .yml files, the cluster deploys fine. Is there a setting or config somewhere that defines how many drives get “discovered” in Stage 1? By default it seems to only pick up 5 OSD disks.
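For reference, this is how I count the proposed OSD entries on the admin node; the profile path is from my setup and may differ with other DeepSea versions:

grep -c '/dev/' /srv/pillar/ceph/proposals/profile-default/stack/default/ceph/minions/*.yml

Each file only ever shows 5 device entries.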

Thanks for your help!

Hi,

I remember seeing a default ratio of 5 for the OSD-to-journal relation, meaning one SSD would be configured to hold the journals for 5 OSDs. But that should not apply here.
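If that ratio were involved, it can be overridden when regenerating the proposals. From memory the runner call looks roughly like this, but please check the argument names against your DeepSea version before running it:

salt-run proposal.populate name=default ratio=5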

Do you use different disk types? Are those physical servers and disks?

I have a small lab environment and tried to reproduce it with 9 disks per node (3 nodes in total). All 9 disks were proposed as standalone OSDs.
Do you have any traces of previous installations? Were all disks wiped before running the stages? You could run the stage with debug logs:

salt-run state.orch ceph.stage.<STAGE> --log-level=debug
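
If there are leftovers from a previous installation, discovery can skip those disks. Wiping them before re-running Stage 1 usually helps; a rough (and destructive) sketch, assuming the spare disks on a node are /dev/sdb through /dev/sdj:

for dev in /dev/sd{b..j}; do
    sgdisk --zap-all "$dev"   # remove GPT/MBR partition tables
    wipefs --all "$dev"       # remove filesystem/LVM/ceph signatures
done

Please verify the device names first so the OS disk is not touched.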

Regards,
Eugen