Multiple two-node clusters within 4 nodes


The need is to manage multiple two-node clusters from a single node with crmsh.

Tried the following cluster configuration steps:

  1. Create a 4 node cluster - node1, node2, node3, node4

  2. Changed corosync.conf on node1 and node2 so that the nodelist contains only node1 and node2, with expected_votes: 2 and two_node: 1

  3. Changed corosync.conf on node3 and node4 so that the nodelist contains only node3 and node4, with expected_votes: 2 and two_node: 1

  4. Created two SBD stonith resources, stonith-sbd1 and stonith-sbd2, from separate partitions of the same device

  5. stonith-sbd1 for node1 & node2

  6. stonith-sbd2 for node3 & node4

  7. Also added constraints so that resources, including the stonith resources, run only on the two nodes of their respective two-node cluster

  8. Tested the clusters for network partitions and failovers, and everything worked fine.
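For reference, the change described in steps 2 and 3 amounts to something like the following corosync.conf fragment (a sketch, assuming corosync 2.x votequorum syntax; node names are placeholders). This is what node1/node2 would carry; node3/node4 would get the mirror image with their own names:

```
nodelist {
    node {
        ring0_addr: node1
        nodeid: 1
    }
    node {
        ring0_addr: node2
        nodeid: 2
    }
}

quorum {
    provider: corosync_votequorum
    # Each pair believes it is a complete two-node cluster:
    expected_votes: 2
    two_node: 1
}
```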

Is this a valid supported configuration?


I’ve dealt with a lot of clusters and have never seen anyone use such a configuration. Most users either run multiple separate two-node clusters, or a single normally configured cluster with resource constraints. Other than avoiding having to run commands against two separate clusters, I don’t see much benefit here, just concerns about what behavior it may cause.
I would not recommend this configuration, as the software is not designed to be used this way. Support would be best effort, depending on where an issue occurred, and may well end in a recommendation to split it into two clusters.

Can you elaborate on why you need to control all four nodes from a single node? There might be a simpler way to reach your goal.

The requirement is similar to the Hawk UI functions, but in CLI form. We want to use crm in automation scripts to manage multiple clusters.
I hit on this configuration accidentally while trying out a few things :-)

Use Salt, Ansible, or plain ssh to run whatever commands you need.
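A minimal sketch of the plain-ssh approach: keep the two clusters separate and drive both from one admin host by running crm against one seed node per cluster. The node names, the `root` user, and the helper function are assumptions for illustration.

```shell
#!/bin/sh
# One reachable seed node per independent cluster (hypothetical names).
CLUSTER_SEEDS="node1 node3"

run_on_all_clusters() {
    # Run the given crm subcommand on one seed node of each cluster.
    cmd="$1"
    for seed in $CLUSTER_SEEDS; do
        # echo shows the command for illustration; drop it to really execute
        echo "ssh root@$seed crm $cmd"
    done
}

run_on_all_clusters "status"
```

The same loop works for `crm resource`, `crm configure show`, and so on, which covers most of what a multi-cluster automation script needs without merging the memberships.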

The whole setup is wrong and will definitely break in the future.
Keep in mind that in SLES 15 you can use ACLs to control privileges per user.
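If the underlying goal is restricted per-user control rather than a merged cluster, an ACL might look roughly like this (a sketch, assuming the crmsh `role`/`acl_target` configure commands; the resource name app1 and user alice are hypothetical). Entered via `crm configure`:

```
# Let alice start/stop one resource but only read everything else
role app_operator \
    write meta:app1:target-role \
    read xpath:/cib
acl_target alice app_operator
```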