SLES11 SP3 HAE: Resources don't migrate when the DC reboots

Hi,

I have a 2-node cluster (pip01 and pip02) for an SAP server (SAP + virtual IP + database).
pip01 is the DC and the active/master node, and it runs all of the resources.
Because this is a 2-node cluster, I set no-quorum-policy to "ignore".
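
For reference, the relevant cluster properties were set with the crm shell, roughly like this (a sketch, not an exact dump of my CIB; SBD is the only fencing device):

pip02:~ # crm configure property no-quorum-policy=ignore
pip02:~ # crm configure property stonith-enabled=true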

An example of my problem: when multipath on pip01 has trouble, the resources do not migrate to pip02.
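
While the paths were failing I watched the cluster status from pip02 with something like the command below, and the resources never moved off pip01:

pip02:~ # crm_mon -r -1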

Could someone please give me some help with this?

pip02:~ # multipath -ll
pip02:~ # tail -f /var/log/messages
Aug 26 01:02:36 pip02 multipathd: heartdisk: remaining active paths: 8
Aug 26 01:02:36 pip02 kernel: [ 1309.204821] device-mapper: multipath: Failing path 65:160.
Aug 26 01:02:36 pip02 kernel: [ 1309.216399] alua: target port group 01 state A preferred supports tolusnA
Aug 26 01:02:36 pip02 kernel: [ 1309.216524] alua: target port group 01 state A preferred supports tolusnA
Aug 26 01:02:36 pip02 kernel: [ 1309.216640] alua: target port group 01 state A preferred supports tolusnA
Aug 26 01:02:36 pip02 kernel: [ 1309.216744] alua: target port group 01 state A preferred supports tolusnA
Aug 26 01:02:36 pip02 kernel: [ 1309.216845] alua: target port group 01 state A preferred supports tolusnA
Aug 26 01:02:36 pip02 kernel: [ 1309.216972] alua: target port group 01 state A preferred supports tolusnA
Aug 26 01:02:36 pip02 kernel: [ 1309.217071] alua: target port group 01 state A preferred supports tolusnA
Aug 26 01:02:36 pip02 kernel: [ 1309.217167] alua: target port group 01 state A preferred supports tolusnA
Aug 26 01:03:26 pip02 corosync[19844]: [TOTEM ] A processor failed, forming new configuration.
Aug 26 01:03:32 pip02 corosync[19844]: [CLM ] CLM CONFIGURATION CHANGE
Aug 26 01:03:32 pip02 corosync[19844]: [CLM ] New Configuration:
Aug 26 01:03:32 pip02 corosync[19844]: [CLM ] r(0) ip(192.168.50.53)
Aug 26 01:03:32 pip02 corosync[19844]: [CLM ] Members Left:
Aug 26 01:03:32 pip02 corosync[19844]: [CLM ] r(0) ip(192.168.50.52)
Aug 26 01:03:32 pip02 corosync[19844]: [CLM ] Members Joined:
Aug 26 01:03:32 pip02 corosync[19844]: [pcmk ] notice: pcmk_peer_update: Transitional membership event on ring 896: memb=1, new=0, lost=1
Aug 26 01:03:32 pip02 corosync[19844]: [pcmk ] info: pcmk_peer_update: memb: pip02 1084764725
Aug 26 01:03:32 pip02 corosync[19844]: [pcmk ] info: pcmk_peer_update: lost: pip01 1084764724
Aug 26 01:03:32 pip02 corosync[19844]: [CLM ] CLM CONFIGURATION CHANGE
Aug 26 01:03:32 pip02 corosync[19844]: [CLM ] New Configuration:
Aug 26 01:03:32 pip02 corosync[19844]: [CLM ] r(0) ip(192.168.50.53)
Aug 26 01:03:32 pip02 corosync[19844]: [CLM ] Members Left:
Aug 26 01:03:32 pip02 corosync[19844]: [CLM ] Members Joined:
Aug 26 01:03:32 pip02 corosync[19844]: [pcmk ] notice: pcmk_peer_update: Stable membership event on ring 896: memb=1, new=0, lost=0
Aug 26 01:03:32 pip02 corosync[19844]: [pcmk ] info: pcmk_peer_update: MEMB: pip02 1084764725
Aug 26 01:03:32 pip02 corosync[19844]: [pcmk ] info: ais_mark_unseen_peer_dead: Node pip01 was not seen in the previous transition
Aug 26 01:03:32 pip02 crmd[19855]: notice: ais_dispatch_message: Membership 896: quorum lost
Aug 26 01:03:32 pip02 cluster-dlm[20216]: notice: ais_dispatch_message: Membership 896: quorum lost
Aug 26 01:03:32 pip02 corosync[19844]: [pcmk ] info: update_member: Node 1084764724/pip01 is now: lost
Aug 26 01:03:32 pip02 crmd[19855]: notice: crm_update_peer_state: crm_update_ais_node: Node pip01[1084764724] - state is now lost (was member)
Aug 26 01:03:32 pip02 corosync[19844]: [pcmk ] info: send_member_notification: Sending membership update 896 to 3 children
Aug 26 01:03:32 pip02 crmd[19855]: warning: check_dead_member: Our DC node (pip01) left the cluster
Aug 26 01:03:32 pip02 cluster-dlm[20216]: notice: crm_update_peer_state: crm_update_ais_node: Node pip01[1084764724] - state is now lost (was member)
Aug 26 01:03:32 pip02 cib[19850]: notice: ais_dispatch_message: Membership 896: quorum lost
Aug 26 01:03:32 pip02 corosync[19844]: [TOTEM ] A processor joined or left the membership and a new membership was formed.
Aug 26 01:03:32 pip02 cib[19850]: notice: crm_update_peer_state: crm_update_ais_node: Node pip01[1084764724] - state is now lost (was member)
Aug 26 01:03:32 pip02 cluster-dlm[20216]: update_cluster: Processing membership 896
Aug 26 01:03:32 pip02 cluster-dlm[20216]: del_configfs_node: del_configfs_node rmdir “/sys/kernel/config/dlm/cluster/comms/1084764724”
Aug 26 01:03:32 pip02 crmd[19855]: notice: do_state_transition: State transition S_NOT_DC → S_ELECTION [ input=I_ELECTION cause=C_FSA_INTERNAL origin=check_dead_member ]
Aug 26 01:03:32 pip02 cluster-dlm[20216]: dlm_process_node: Removed inactive node 1084764724: born-on=892, last-seen=892, this-event=896, last-event=892
Aug 26 01:03:32 pip02 cluster-dlm[20216]: dlm_process_node: Skipped active node 1084764725: born-on=892, last-seen=896, this-event=896, last-event=892
Aug 26 01:03:32 pip02 crmd[19855]: notice: do_state_transition: State transition S_ELECTION → S_INTEGRATION [ input=I_ELECTION_DC cause=C_FSA_INTERNAL origin=do_election_check ]
Aug 26 01:03:32 pip02 cluster-dlm[20216]: log_config: dlm:controld conf 1 0 1 memb 1084764725 join left 1084764724
Aug 26 01:03:32 pip02 corosync[19844]: [CPG ] chosen downlist: sender r(0) ip(192.168.50.53) ; members(old:2 left:1)
Aug 26 01:03:32 pip02 cluster-dlm[20216]: log_config: dlm:ls:clvmd conf 1 0 1 memb 1084764725 join left 1084764724
Aug 26 01:03:32 pip02 cluster-dlm[20216]: add_change: clvmd add_change cg 3 remove nodeid 1084764724 reason 3
Aug 26 01:03:32 pip02 cluster-dlm[20216]: add_change: clvmd add_change cg 3 counts member 1 joined 0 remove 1 failed 1
Aug 26 01:03:32 pip02 cluster-dlm[20216]: stop_kernel: clvmd stop_kernel cg 3
Aug 26 01:03:32 pip02 lvm[20250]: confchg callback. 0 joined, 1 left, 1 members
Aug 26 01:03:32 pip02 cluster-dlm[20216]: do_sysfs: write “0” to “/sys/kernel/dlm/clvmd/control”
Aug 26 01:03:32 pip02 corosync[19844]: [MAIN ] Completed service synchronization, ready to provide service.
Aug 26 01:03:32 pip02 cluster-dlm[20216]: fence_node_time: Node 1084764724/pip01 has not been shot yet
Aug 26 01:03:32 pip02 kernel: [ 1365.469273] dlm: closing connection to node 1084764724
Aug 26 01:03:32 pip02 cluster-dlm[20216]: check_fencing_done: clvmd check_fencing 1084764724 wait add 1503680183 fail 1503680612 last 0
Aug 26 01:03:32 pip02 crmd[19855]: notice: crm_update_quorum: Updating quorum status to false (call=45)
Aug 26 01:03:32 pip02 attrd[19853]: notice: attrd_local_callback: Sending full refresh (origin=crmd)
Aug 26 01:03:32 pip02 attrd[19853]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Aug 26 01:03:33 pip02 pengine[19854]: notice: unpack_config: On loss of CCM Quorum: Ignore
Aug 26 01:03:33 pip02 pengine[19854]: warning: pe_fence_node: Node pip01 will be fenced because the node is no longer part of the cluster
Aug 26 01:03:33 pip02 pengine[19854]: warning: determine_online_status: Node pip01 is unclean
Aug 26 01:03:33 pip02 pengine[19854]: warning: custom_action: Action stonith_sbd:0_stop_0 on pip01 is unrunnable (offline)
Aug 26 01:03:33 pip02 pengine[19854]: warning: custom_action: Action stonith_sbd:0_stop_0 on pip01 is unrunnable (offline)
Aug 26 01:03:33 pip02 pengine[19854]: warning: custom_action: Action dlm:0_stop_0 on pip01 is unrunnable (offline)
Aug 26 01:03:33 pip02 pengine[19854]: warning: custom_action: Action dlm:0_stop_0 on pip01 is unrunnable (offline)
Aug 26 01:03:33 pip02 pengine[19854]: warning: custom_action: Action clvmd:0_stop_0 on pip01 is unrunnable (offline)
Aug 26 01:03:33 pip02 pengine[19854]: warning: custom_action: Action clvmd:0_stop_0 on pip01 is unrunnable (offline)
Aug 26 01:03:33 pip02 pengine[19854]: warning: custom_action: Action active_sap_vg_new:0_stop_0 on pip01 is unrunnable (offline)
Aug 26 01:03:33 pip02 pengine[19854]: warning: custom_action: Action active_sap_vg_new:0_stop_0 on pip01 is unrunnable (offline)
Aug 26 01:03:33 pip02 pengine[19854]: warning: custom_action: Action fs_ASCS10_stop_0 on pip01 is unrunnable (offline)
Aug 26 01:03:33 pip02 pengine[19854]: warning: custom_action: Action fs_PIP_stop_0 on pip01 is unrunnable (offline)
Aug 26 01:03:33 pip02 pengine[19854]: warning: custom_action: Action fs_SCS11_stop_0 on pip01 is unrunnable (offline)
Aug 26 01:03:33 pip02 pengine[19854]: warning: custom_action: Action fs_WDP_stop_0 on pip01 is unrunnable (offline)
Aug 26 01:03:33 pip02 pengine[19854]: warning: custom_action: Action fs_oraarch_stop_0 on pip01 is unrunnable (offline)
Aug 26 01:03:33 pip02 pengine[19854]: warning: custom_action: Action sap_ip_stop_0 on pip01 is unrunnable (offline)
Aug 26 01:03:33 pip02 pengine[19854]: warning: custom_action: Action sapservice_stop_0 on pip01 is unrunnable (offline)
Aug 26 01:03:33 pip02 pengine[19854]: warning: stage6: Scheduling Node pip01 for STONITH
Aug 26 01:03:33 pip02 pengine[19854]: notice: LogActions: Stop stonith_sbd:0 (pip01)
Aug 26 01:03:33 pip02 pengine[19854]: notice: LogActions: Stop dlm:0 (pip01)
Aug 26 01:03:33 pip02 pengine[19854]: notice: LogActions: Stop clvmd:0 (pip01)
Aug 26 01:03:33 pip02 pengine[19854]: notice: LogActions: Stop active_sap_vg_new:0 (pip01)
Aug 26 01:03:33 pip02 pengine[19854]: notice: LogActions: Move fs_ASCS10 (Started pip01 → pip02)
Aug 26 01:03:33 pip02 pengine[19854]: notice: LogActions: Move fs_PIP (Started pip01 → pip02)
Aug 26 01:03:33 pip02 pengine[19854]: notice: LogActions: Move fs_SCS11 (Started pip01 → pip02)
Aug 26 01:03:33 pip02 pengine[19854]: notice: LogActions: Move fs_WDP (Started pip01 → pip02)
Aug 26 01:03:33 pip02 pengine[19854]: notice: LogActions: Move fs_oraarch (Started pip01 → pip02)
Aug 26 01:03:33 pip02 pengine[19854]: notice: LogActions: Move sap_ip (Started pip01 → pip02)
Aug 26 01:03:33 pip02 pengine[19854]: notice: LogActions: Move sapservice (Started pip01 → pip02)
Aug 26 01:03:33 pip02 pengine[19854]: warning: process_pe_message: Calculated Transition 0: /var/lib/pacemaker/pengine/pe-warn-72.bz2
Aug 26 01:03:33 pip02 crmd[19855]: notice: do_te_invoke: Processing graph 0 (ref=pe_calc-dc-1503680613-11) derived from /var/lib/pacemaker/pengine/pe-warn-72.bz2
Aug 26 01:03:33 pip02 crmd[19855]: notice: te_fence_node: Executing reboot fencing operation (54) on pip01 (timeout=30000)
Aug 26 01:03:33 pip02 stonith-ng[19851]: notice: handle_request: Client crmd.19855.8dfd623a wants to fence (reboot) ‘pip01’ with device ‘(any)’
Aug 26 01:03:33 pip02 stonith-ng[19851]: notice: initiate_remote_stonith_op: Initiating remote operation reboot for pip01: cefe7d78-e5c0-432d-845a-b3e0cffc1f94 (0)
Aug 26 01:03:33 pip02 stonith-ng[19851]: notice: get_capable_devices: stonith-timeout duration 30 is low for the current configuration. Consider raising it to 40 seconds
Aug 26 01:03:33 pip02 stonith-ng[19851]: notice: get_capable_devices: stonith-timeout duration 30 is low for the current configuration. Consider raising it to 40 seconds
Aug 26 01:03:33 pip02 sbd: [23581]: info: Delivery process handling /dev/mapper/heartdisk
Aug 26 01:03:33 pip02 sbd: [23581]: info: Device UUID: 32a8b212-c5e0-44ff-be7b-56ca9f433910
Aug 26 01:03:33 pip02 sbd: [23581]: info: Writing reset to node slot pip01
Aug 26 01:03:33 pip02 sbd: [23581]: info: Messaging delay: 40
Aug 26 01:03:58 pip02 sshd[23631]: Accepted keyboard-interactive/pam for root from 10.8.10.101 port 54715 ssh2
Aug 26 01:04:03 pip02 stonith-ng[19851]: notice: stonith_action_async_done: Child process 23576 performing action ‘reboot’ timed out with signal 15
Aug 26 01:04:03 pip02 stonith-ng[19851]: error: log_operation: Operation ‘reboot’ [23576] (call 2 from crmd.19855) for host ‘pip01’ with device ‘stonith_sbd:0’ returned: -62 (Timer expired). Trying: stonith_sbd
Aug 26 01:04:03 pip02 stonith-ng[19851]: warning: log_operation: stonith_sbd:0:23576 [ Performing: stonith -t external/sbd -T reset pip01 ]
Aug 26 01:04:03 pip02 stonith-ng[19851]: warning: log_operation: stonith_sbd:0:23576 [ failed: pip01 0.05859375 ]
Aug 26 01:04:03 pip02 sbd: [23686]: info: Delivery process handling /dev/mapper/heartdisk
Aug 26 01:04:03 pip02 sbd: [23686]: info: Device UUID: 32a8b212-c5e0-44ff-be7b-56ca9f433910
Aug 26 01:04:03 pip02 sbd: [23686]: info: Writing reset to node slot pip01
Aug 26 01:04:03 pip02 sbd: [23686]: info: Messaging delay: 40
Aug 26 01:04:33 pip02 stonith-ng[19851]: notice: stonith_action_async_done: Child process 23681 performing action ‘reboot’ timed out with signal 15
Aug 26 01:04:33 pip02 stonith-ng[19851]: error: log_operation: Operation ‘reboot’ [23681] (call 2 from crmd.19855) for host ‘pip01’ with device ‘stonith_sbd’ returned: -62 (Timer expired)
Aug 26 01:04:33 pip02 stonith-ng[19851]: warning: log_operation: stonith_sbd:23681 [ Performing: stonith -t external/sbd -T reset pip01 ]
Aug 26 01:04:33 pip02 stonith-ng[19851]: warning: log_operation: stonith_sbd:23681 [ failed: pip01 0.05859375 ]
Aug 26 01:04:33 pip02 stonith-ng[19851]: notice: remote_op_timeout: Action reboot (cefe7d78-e5c0-432d-845a-b3e0cffc1f94) for pip01 (crmd.19855) timed out
Aug 26 01:04:33 pip02 stonith-ng[19851]: error: remote_op_done: Operation reboot of pip01 by pip02 for crmd.19855@pip02.cefe7d78: Timer expired
Aug 26 01:04:33 pip02 crmd[19855]: notice: tengine_stonith_callback: Stonith operation 2/54:0:0:28d054f9-b607-4223-9dce-1b765685da77: Timer expired (-62)
Aug 26 01:04:33 pip02 crmd[19855]: notice: tengine_stonith_callback: Stonith operation 2 for pip01 failed (Timer expired): aborting transition.
Aug 26 01:04:33 pip02 crmd[19855]: notice: tengine_stonith_notify: Peer pip01 was not terminated (st_notify_fence) by pip02 for pip02: Timer expired (ref=cefe7d78-e5c0-432d-845a-b3e0cffc1f94) by client crmd.19855
Aug 26 01:04:33 pip02 crmd[19855]: notice: run_graph: Transition 0 (Complete=1, Pending=0, Fired=0, Skipped=28, Incomplete=2, Source=/var/lib/pacemaker/pengine/pe-warn-72.bz2): Stopped
Aug 26 01:04:33 pip02 pengine[19854]: notice: unpack_config: On loss of CCM Quorum: Ignore
Aug 26 01:04:33 pip02 pengine[19854]: warning: pe_fence_node: Node pip01 will be fenced because the node is no longer part of the cluster
Aug 26 01:04:33 pip02 pengine[19854]: warning: determine_online_status: Node pip01 is unclean
Aug 26 01:04:33 pip02 pengine[19854]: warning: custom_action: Action stonith_sbd:0_stop_0 on pip01 is unrunnable (offline)
Aug 26 01:04:33 pip02 pengine[19854]: warning: custom_action: Action stonith_sbd:0_stop_0 on pip01 is unrunnable (offline)
Aug 26 01:04:33 pip02 pengine[19854]: warning: custom_action: Action dlm:0_stop_0 on pip01 is unrunnable (offline)
Aug 26 01:04:33 pip02 pengine[19854]: warning: custom_action: Action dlm:0_stop_0 on pip01 is unrunnable (offline)
Aug 26 01:04:33 pip02 pengine[19854]: warning: custom_action: Action clvmd:0_stop_0 on pip01 is unrunnable (offline)
Aug 26 01:04:33 pip02 pengine[19854]: warning: custom_action: Action clvmd:0_stop_0 on pip01 is unrunnable (offline)
Aug 26 01:04:33 pip02 pengine[19854]: warning: custom_action: Action active_sap_vg_new:0_stop_0 on pip01 is unrunnable (offline)
Aug 26 01:04:33 pip02 pengine[19854]: warning: custom_action: Action active_sap_vg_new:0_stop_0 on pip01 is unrunnable (offline)
Aug 26 01:04:33 pip02 pengine[19854]: warning: custom_action: Action fs_ASCS10_stop_0 on pip01 is unrunnable (offline)
Aug 26 01:04:33 pip02 pengine[19854]: warning: custom_action: Action fs_PIP_stop_0 on pip01 is unrunnable (offline)
Aug 26 01:04:33 pip02 pengine[19854]: warning: custom_action: Action fs_SCS11_stop_0 on pip01 is unrunnable (offline)
Aug 26 01:04:33 pip02 pengine[19854]: warning: custom_action: Action fs_WDP_stop_0 on pip01 is unrunnable (offline)
Aug 26 01:04:33 pip02 pengine[19854]: warning: custom_action: Action fs_oraarch_stop_0 on pip01 is unrunnable (offline)
Aug 26 01:04:33 pip02 pengine[19854]: warning: custom_action: Action sap_ip_stop_0 on pip01 is unrunnable (offline)
Aug 26 01:04:33 pip02 pengine[19854]: warning: custom_action: Action sapservice_stop_0 on pip01 is unrunnable (offline)
Aug 26 01:04:33 pip02 pengine[19854]: warning: stage6: Scheduling Node pip01 for STONITH
Aug 26 01:04:33 pip02 pengine[19854]: notice: LogActions: Stop stonith_sbd:0 (pip01)
Aug 26 01:04:33 pip02 pengine[19854]: notice: LogActions: Stop dlm:0 (pip01)
Aug 26 01:04:33 pip02 pengine[19854]: notice: LogActions: Stop clvmd:0 (pip01)
Aug 26 01:04:33 pip02 pengine[19854]: notice: LogActions: Stop active_sap_vg_new:0 (pip01)
Aug 26 01:04:33 pip02 pengine[19854]: notice: LogActions: Move fs_ASCS10 (Started pip01 → pip02)
Aug 26 01:04:33 pip02 pengine[19854]: notice: LogActions: Move fs_PIP (Started pip01 → pip02)
Aug 26 01:04:33 pip02 pengine[19854]: notice: LogActions: Move fs_SCS11 (Started pip01 → pip02)
Aug 26 01:04:33 pip02 pengine[19854]: notice: LogActions: Move fs_WDP (Started pip01 → pip02)
Aug 26 01:04:33 pip02 pengine[19854]: notice: LogActions: Move fs_oraarch (Started pip01 → pip02)
Aug 26 01:04:33 pip02 pengine[19854]: notice: LogActions: Move sap_ip (Started pip01 → pip02)
Aug 26 01:04:33 pip02 pengine[19854]: notice: LogActions: Move sapservice (Started pip01 → pip02)
Aug 26 01:04:33 pip02 pengine[19854]: warning: process_pe_message: Calculated Transition 1: /var/lib/pacemaker/pengine/pe-warn-72.bz2
Aug 26 01:04:33 pip02 crmd[19855]: notice: do_te_invoke: Processing graph 1 (ref=pe_calc-dc-1503680673-12) derived from /var/lib/pacemaker/pengine/pe-warn-72.bz2
Aug 26 01:04:33 pip02 crmd[19855]: notice: te_fence_node: Executing reboot fencing operation (54) on pip01 (timeout=30000)
Aug 26 01:04:33 pip02 stonith-ng[19851]: notice: handle_request: Client crmd.19855.8dfd623a wants to fence (reboot) ‘pip01’ with device ‘(any)’
Aug 26 01:04:33 pip02 stonith-ng[19851]: notice: initiate_remote_stonith_op: Initiating remote operation reboot for pip01: 3f2e0e9e-6155-457e-ac24-b38abc4b1664 (0)
Aug 26 01:04:33 pip02 stonith-ng[19851]: notice: get_capable_devices: stonith-timeout duration 30 is low for the current configuration. Consider raising it to 40 seconds
Aug 26 01:04:33 pip02 stonith-ng[19851]: notice: get_capable_devices: stonith-timeout duration 30 is low for the current configuration. Consider raising it to 40 seconds
Aug 26 01:04:33 pip02 sbd: [23786]: info: Delivery process handling /dev/mapper/heartdisk
Aug 26 01:04:33 pip02 sbd: [23786]: info: Device UUID: 32a8b212-c5e0-44ff-be7b-56ca9f433910
Aug 26 01:04:33 pip02 sbd: [23786]: info: Writing reset to node slot pip01
Aug 26 01:04:33 pip02 sbd: [23786]: info: Messaging delay: 40
Aug 26 01:04:45 pip02 stonith-ng[19851]: notice: remote_op_timeout: Action reboot (cefe7d78-e5c0-432d-845a-b3e0cffc1f94) for pip01 (crmd.19855) timed out
Aug 26 01:04:45 pip02 stonith-ng[19851]: error: remote_op_done: Already sent notifications for ‘reboot of pip01 by pip02’ (for=crmd.19855@pip02.cefe7d78, state=4): Timer expired
Aug 26 01:05:03 pip02 stonith-ng[19851]: notice: stonith_action_async_done: Child process 23781 performing action ‘reboot’ timed out with signal 15
Aug 26 01:05:03 pip02 stonith-ng[19851]: error: log_operation: Operation ‘reboot’ [23781] (call 3 from crmd.19855) for host ‘pip01’ with device ‘stonith_sbd:0’ returned: -62 (Timer expired). Trying: stonith_sbd
Aug 26 01:05:03 pip02 stonith-ng[19851]: warning: log_operation: stonith_sbd:0:23781 [ Performing: stonith -t external/sbd -T reset pip01 ]
Aug 26 01:05:03 pip02 stonith-ng[19851]: warning: log_operation: stonith_sbd:0:23781 [ failed: pip01 0.05859375 ]
Aug 26 01:05:03 pip02 sbd: [23849]: info: Delivery process handling /dev/mapper/heartdisk
Aug 26 01:05:03 pip02 sbd: [23849]: info: Device UUID: 32a8b212-c5e0-44ff-be7b-56ca9f433910
Aug 26 01:05:03 pip02 sbd: [23849]: info: Writing reset to node slot pip01
Aug 26 01:05:03 pip02 sbd: [23849]: info: Messaging delay: 40
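
If I read the messages above correctly, the sbd fencing of pip01 keeps timing out: stonith-ng gives the 'reboot' operation 30 seconds, but sbd reports a messaging delay of 40 seconds, so pip01 is never confirmed as fenced and the planned moves to pip02 never run. This is a sketch of what I am checking next (the device path is taken from the log above; the 60s value is only my guess, giving some headroom over msgwait):

pip02:~ # sbd -d /dev/mapper/heartdisk dump
pip02:~ # crm configure property stonith-timeout=60s

Is raising stonith-timeout the right fix here, or am I missing something else?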
