Hi zabidin2,
What is resources for? I’m not expert. Just junior person
Ah, ok - then a different approach is required. You’re in for a steep learning curve.
This also explains why most of my questions are still unanswered, so I’ll be more precise in asking them.
First of all, please check in YaST (Software - Add-on products) if the “SUSE Linux Enterprise High Availability Extension 11 SP1” is installed, so that we know if you have an installation based on official packages (OCFS2, clustering) or if those parts are from some other source.
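If the command line is easier for you, roughly the same information can be gathered there as well - a quick sketch, assuming a standard SLES setup with zypper and rpm available:

[CODE]zypper products -i              # lists installed products/add-ons
rpm -q pacemaker ocfs2-tools    # spot-checks two of the relevant packages[/CODE]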
Then some statements concerning clustering:
In an earlier post you wrote
I’m using heartbeat to control my resource
but your crm_mon output shows that your “cluster” consists of only one node and has no resources configured:
[CODE]Last updated: Thu Jul 11 10:48:26 2013
Stack: openais
Current DC: svr-web1 - partition WITHOUT quorum
Version: 1.1.2-2e096a41a5f9e184a1c1537c82c6da1093698eb5
1 Nodes configured, 2 expected votes
0 Resources configured.
Online: [ svr-web1 ][/CODE]
So currently there are no resources configured - and the main purpose of the clustering software (moving “resources” between nodes) is voided, as there is only a single node.
What is resources for?
“Resources” are the entities controlled by the cluster management. These can be IP addresses (that need to be moved from serverA to serverB in case serverA fails), file system mounts (which, in the case of OCFS2, can be active on all cluster nodes in parallel), or processes and subsystems (like a MySQL DBMS that may only be active on one node at a time, or httpd running in parallel on more than one node for load sharing).
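To make that a bit more concrete, here is a minimal sketch of how an IP address resource would be defined with “crm” (the resource name and IP address are just placeholders):

[CODE]crm configure primitive virtualIP ocf:heartbeat:IPaddr2 \
    params ip=192.168.1.100 cidr_netmask=24 \
    op monitor interval=30s[/CODE]

The cluster management then takes care of starting that IP on one node and moving it to the other node if the first one fails.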
Back to your situation: You in fact currently seem to be running two clusters with separate cluster management stacks:
- OCFS2
- Pacemaker/openais
The usual recommendation is to use only a single cluster stack, which is why OCFS2 can use either its own cluster stack or plug into Pacemaker:
[CODE]# cat /sys/fs/ocfs2/cluster_stack
pcmk[/CODE]
(that is taken from one of our installations)
Your OCFS2 components (the o2cb service and the file system(s)) are then configured as resources of your Pacemaker cluster. This is covered quite well in the SLES documentation (see e.g. https://www.suse.com/documentation/sle_ha/singlehtml/book_sleha/book_sleha.html#cha.ha.ocfs2); I recommend reading the HAE guide as a starter.
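Just so you know roughly what to expect from that chapter: the setup usually boils down to a handful of “crm” commands along the following lines. This is a sketch only - the device path and mount point are placeholders, and the authoritative steps for your service pack are in the guide:

[CODE]crm configure primitive dlm ocf:pacemaker:controld op monitor interval=60s
crm configure primitive o2cb ocf:ocfs2:o2cb op monitor interval=60s
crm configure primitive sharedfs ocf:heartbeat:Filesystem \
    params device=/dev/sdX1 directory=/mnt/shared fstype=ocfs2 \
    op monitor interval=20s
crm configure group base-group dlm o2cb sharedfs
crm configure clone base-clone base-group meta interleave=true[/CODE]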
When i search, the document show only how to configure on SLES 11 SP3, i didn’t found for SP1. When i install cluster using yast, it’s totaly different from documentation.
Might that be because most of the cluster is usually configured outside of YaST? The tools described in the HAE guide, especially “crm”, are used from the command line, which to many administrators is the preferred way to interact with Linux systems anyhow. If you don’t have HAE installed (see the initial question), the “YaST parts” will be different - but the basic configuration tasks (setting up OCFS2 & Pacemaker) are extremely similar.
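For a first look around, a few simple “crm” invocations go a long way (I have only used these on our own installations, but they should be available on SP1 as well):

[CODE]crm status            # roughly the same view as crm_mon
crm configure show    # dump the current cluster configuration
crm configure edit    # edit the configuration in your $EDITOR[/CODE]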
Please take some time to read the guide, try the steps mentioned there to set up Pacemaker to control your OCFS2 file system(s), and feel free to ask here whenever you do not understand something written there or if you feel that SP1 is totally different from what’s described in the manual. I had a quick browse through the text and nothing SP3-specific caught my eye, but SP1 is a bit older, so I may have overlooked something.
Important: Get yourself a test environment - do not experiment with clustering on production servers. Clusters tend to behave differently from the way that was expected - and that includes taking down the whole server.
Something else caught my eye when reading through your messages:
I have 2 server (server A and server B). Both connect to san storage using ocfs2
I install drdb, drbd-heartbeat and drdb-pacemaker.
Since you classify yourself as “junior”, I have to question those statements, as they don’t seem to go together well. From how I see it, you have either
- SAN storage, accessible to all cluster nodes via some storage protocol (Fibre Channel, iSCSI, shared SCSI)
- or local storage on (two) servers that is synchronized and presented to the upper layers as a single storage device, via DRBD
So if you already have SAN storage available (“accessing the same block device from multiple servers”), there’s no need for DRBD. For the sake of clarity in future discussions, could you please clarify whether DRBD is a required part of your setup or if it was only installed because typical cluster documentation mentions it? (The typical “small cluster” has two servers and no SAN - then DRBD can be part of the picture.)
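If you are unsure yourself, a quick check on one of the servers should tell you whether DRBD is actually in use (assuming a standard installation, where the kernel module and the /proc interface are available):

[CODE]lsmod | grep drbd    # is the DRBD kernel module loaded?
cat /proc/drbd       # status of any configured DRBD resources[/CODE]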
Regards,
Jens