We have a requirement to install a piece of software on two nodes using a shared (block-based) file system. However, since we are running SLES 12.x VMs on the Azure platform, we don’t have the option of using a centralized shared storage system such as an iSCSI SAN or FC.
There have been discussions about building OCFS2 on top of DRBD (dual-primary), with the expectation that if one node goes down, the other node will still be available to access the software via the web. The users are fine with switching IPs in the browser whenever they want to access the software. But can we run a clustered file system without Pacemaker?
Actually you can, but why not do it in a more reliable way?
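Just for reference, OCFS2 on top of dual-primary DRBD needs allow-two-primaries plus sane split-brain recovery policies in the resource’s net section. A minimal sketch, assuming a resource called r0 backed by /dev/sdb on two nodes node1/node2 (all of those names and addresses are placeholders):

    resource r0 {
        device    /dev/drbd0;
        disk      /dev/sdb;
        meta-disk internal;
        net {
            protocol C;
            # both nodes may hold the Primary role at the same time
            allow-two-primaries yes;
            # automatic split-brain recovery policies - review before enabling
            after-sb-0pri discard-zero-changes;
            after-sb-1pri discard-secondary;
            after-sb-2pri disconnect;
        }
        on node1 { address 10.0.0.1:7789; }
        on node2 { address 10.0.0.2:7789; }
    }

Keep in mind that OCFS2 still needs a membership/DLM stack of some kind even without Pacemaker, i.e. you would be relying on OCFS2’s own o2cb stack instead.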
You can have a cluster of 3 nodes that sync data over DRBD, with one of the nodes presenting an iSCSI target, and the VIP managed by Pacemaker using the Azure Load Balancer (I think the resource agent is named azure-lb).
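For the VIP part, a rough sketch of the Pacemaker resources (the IP, probe port and resource names are made up - the probe port just has to match the health probe configured on the Azure LB):

    # floating IP that the Azure LB frontend points at
    crm configure primitive rsc_ip ocf:heartbeat:IPaddr2 \
        params ip=10.0.0.100 cidr_netmask=24 \
        op monitor interval=10s
    # answers the Azure LB health probe on whichever node holds the IP
    crm configure primitive rsc_alb ocf:heartbeat:azure-lb \
        params port=61000 \
        op monitor interval=10s timeout=20s
    # keep both together so the LB always follows the active node
    crm configure group grp_vip rsc_ip rsc_alb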
The 2 nodes that will host the software will need some tuning to use the iSCSI devices under multipath, but with OCFS2 they will be able to access the data simultaneously.
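On the two software nodes that boils down to something like this (the target IP, multipath device name and mount point are placeholders, and the cluster stack/DLM has to be up before mounting):

    # discover and log in to the iSCSI target presented by the storage node(s)
    iscsiadm -m discovery -t sendtargets -p 10.0.0.100
    iscsiadm -m node --login
    # create the OCFS2 file system once, from one node only (2 slots for 2 nodes)
    mkfs.ocfs2 -N 2 -L shared_data /dev/mapper/mpatha
    # then mount it on both nodes at the same time
    mount -t ocfs2 /dev/mapper/mpatha /srv/software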
Of course, for cost reduction you can merge the 2 clusters into a single one, so in total you will need only 3 nodes (Pacemaker and DRBD are prone to dual-fencing and split-brain issues in two-node clusters).
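One nice side effect of the 3-node layout: you have real quorum, so you can keep the safe cluster-wide defaults instead of the usual two-node workarounds (crm shell, assuming fencing is actually configured):

    crm configure property stonith-enabled=true
    crm configure property no-quorum-policy=stop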
Any reason not to use SLES 15 SP2? Keep in mind that HA in 15.x has some nice features, like corosync over SCTP and QNetd/QDevice for two-node clusters (so you get 2 nodes plus 1 daemon running on another VM, and that daemon can act as quorum for multiple clusters), and of course the biggest benefit → you won’t have to migrate your cluster in the near future (eventually support for 12 SP5 will end).
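For reference, the QDevice part of corosync.conf looks roughly like this (the QNetd host name is a placeholder; on SLES 15 the bootstrap scripts can also set it up for you, something along the lines of crm cluster init qdevice):

    quorum {
        provider: corosync_votequorum
        device {
            model: net
            votes: 1
            net {
                host: qnetd-vm.example.com
                algorithm: ffsplit
                tls: on
            }
        }
    }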
Thanks for your comments. It turns out the actual requirement is a shared “network file system” with DRBD active on only one node. This was a complete miscommunication from the other team, who were calling the same thing a “cluster file system”.