Is there any technical limit on the number of HDDs per OSD node? If so, what is it?
Consider a case where the customer’s application goals force the balance between cluster size and node failure impact toward the ‘size’ side. Today it is not hard to build a server with 4 - 6 SAS controllers, each supporting up to 256 target devices, and SAS enclosures with up to 90 HDDs each can be daisy-chained. Thus 1000+ physical HDDs per Ceph node is not a dream. RAM is not a problem either: a dual-socket Xeon v4 server can be fitted with up to 3 TB of RAM, enough for up to 1500 OSD processes at 2 GB each.
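A quick back-of-the-envelope check of those numbers (taking the upper end of each figure quoted above; this is only a sketch of the arithmetic, not a sizing recommendation):

    # Rough upper bounds implied by the hardware described above.
    sas_controllers = 6             # 4-6 SAS HBAs per server (upper end)
    targets_per_controller = 256    # target devices per HBA
    ram_gb = 3 * 1024               # 3 TB of RAM in a dual-socket Xeon v4 box
    ram_per_osd_gb = 2              # RAM budget per OSD process, as above

    max_sas_targets = sas_controllers * targets_per_controller   # 1536 devices
    max_osds_by_ram = ram_gb // ram_per_osd_gb                    # 1536 OSDs

    print(max_sas_targets, max_osds_by_ram)   # 1536 1536

So neither the SAS fabric nor RAM is the bottleneck; both allow roughly 1500 drives per node.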
On the other hand, look at the kernel’s device-number allocation (Documentation/devices.txt). Block major numbers 8, 65 - 71 and 128 - 135 are allocated to SCSI disks, i.e. 16 majors in total. The minor number has the range 0…255, and each disk gets 16 consecutive minors (the first represents the entire disk, the remaining 15 are for partitions). So the number of possible drives is 16 * (256 / 16) = 256.
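The same arithmetic as a small sketch, assuming the classic sd major/minor layout (16 majors, 16 minors per disk):

    # Classic SCSI-disk (sd) device-number layout from the kernel's devices.txt.
    sd_majors = [8] + list(range(65, 72)) + list(range(128, 136))   # 16 majors
    minors_per_major = 256      # minor numbers 0..255
    minors_per_disk = 16        # 1 for the whole disk + 15 for partitions

    disks_per_major = minors_per_major // minors_per_disk    # 16
    max_sd_disks = len(sd_majors) * disks_per_major           # 16 * 16 = 256

    print(len(sd_majors), disks_per_major, max_sd_disks)      # 16 16 256

That 256-disk figure is far below the 1000+ drives the hardware can physically attach.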
If SES (Ceph) uses the GUID approach to build the cluster, it is not clear how to manage it in the ceph.conf file and on the command line, for example to configure journal drives and data drives separately.
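For illustration, this is the kind of per-drive setup I mean; the device names and partition GUID are just placeholders, and I am assuming the Jewel-era ceph-disk tooling that SES 3/4 ships:

    # prepare an OSD with its journal on a separate device
    ceph-disk prepare /dev/sdb /dev/sdc

    # or point an individual OSD at its journal in ceph.conf
    [osd.0]
        osd journal = /dev/disk/by-partuuid/<journal-partition-guid>
        osd journal size = 10240    ; in MB

With hundreds of drives per node, maintaining such per-OSD entries by hand does not scale, which is the core of the question.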
Will SES 3/4 be able to recognize, access and manage such a huge number of drives?