I have a 2-node cluster on SLES 11 SP2. One resource, created as a logical volume in
a volume group, is running on the first node.
When the second node fails and then rejoins the cluster, I can see in the log
file that the volume on the first node is unmounted and, one second later, is
mounted again. The logical volume is formatted as ext3.
To narrow down the problem I created a second logical volume, this time formatted as
OCFS2. I observed that in the same situation this volume does not go through the
unmount/mount cycle.
Is this normal, or did I do something wrong? I must admit that my
client prefers ext3, as the faster solution.
First of all, ext3 is a single-node (non-cluster-aware) file system, while OCFS2 is a cluster-aware file system. If you need to use the FS simultaneously on multiple nodes, you cannot use ext3. (I have no indication that you actually need a cluster FS; this comment is just for completeness.)
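That difference usually shows up in the cluster configuration itself. As a sketch only (resource names, device paths, and mount points below are made up, not taken from your cluster): in crm shell syntax, an ext3 file system is a plain primitive that runs on one node at a time, while OCFS2 is typically run as a clone so it can be mounted on all nodes at once:

```
# ext3: non-cluster-aware, active on exactly one node at a time
primitive fs_ext3 ocf:heartbeat:Filesystem \
    params device="/dev/vg1/lv_ext3" directory="/mnt/ext3" fstype="ext3"

# OCFS2: cluster-aware, mounted on every node via a clone
# (requires the DLM/O2CB stack to be running cluster-wide)
primitive fs_ocfs2 ocf:heartbeat:Filesystem \
    params device="/dev/vg1/lv_ocfs2" directory="/mnt/ocfs2" fstype="ocfs2"
clone cl_fs_ocfs2 fs_ocfs2
```

A clone has no reason to migrate when membership changes, which may be why your OCFS2 volume sat still while the ext3 one bounced.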
That leaves the question of why the FS is unmounting. If I understood correctly, the umount/mount happens when the second node rejoins the cluster. How is the cluster set up concerning quorum? How is that resource set up, and are there any dependencies to/from other resources that might lead to a "resource down" situation, e.g. a migration of another resource from the failing node to the remaining node that is delayed until the cluster regains quorum when the second node rejoins?
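For a 2-node cluster, the two settings I would check first are the quorum policy and the resource stickiness. A configuration sketch only, with illustrative values (your actual values may differ and should be chosen to fit your setup):

```
# 2-node clusters cannot form a majority with one node down,
# so quorum loss is usually set to be ignored
property no-quorum-policy="ignore"

# a positive stickiness discourages resources from moving
# (or bouncing) when a node rejoins the cluster
rsc_defaults resource-stickiness="100"
```

If stickiness is 0 and there are location/colocation constraints on the LVM or file system resources, the policy engine may briefly stop and restart the ext3 mount while it rebalances, which would match the umount/mount you see in the log.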