Automatic SAP HANA takeover does not start

I've configured a two-node cluster, two VMs running SLES for SAP Applications.
I've configured an SBD device over iSCSI (target on a third VM, initiators on both cluster nodes), using a block device exported from that third VM.
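For reference, the SBD device was initialised along these lines (a sketch, not the exact commands I ran; the device shows up as /dev/sdc in the logs below, although a stable /dev/disk/by-id path would be the safer choice):

# after logging in to the iSCSI target exported by the third VM
sbd -d /dev/sdc create    # write the SBD header and node slots
sbd -d /dev/sdc dump      # verify header and watchdog/msgwait timeouts
sbd -d /dev/sdc list      # show the per-node slots

# /etc/sysconfig/sbd on both nodes
SBD_DEVICE="/dev/sdc"
SBD_OPTS="-W"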
Both VMs run a SAP HANA instance, with HANA System Replication active between them.
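System Replication was enabled in the usual way, roughly as follows (a sketch; the exact hdbnsutil option names depend on the HANA revision):

# on hana-1 (primary site), as hddadm
hdbnsutil -sr_enable --name=primary

# on hana-2, as hddadm, with the local instance stopped
hdbnsutil -sr_register --remoteHost=hana-1 --remoteInstance=00 --mode=sync --name=secondary

# verify on either node
hdbnsutil -sr_state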
To test automatic takeover, I shut down the VM running the HANA primary (hana-1).
The cluster itself seems to react correctly (hana-1 is fenced and the virtual IP moves to hana-2), but the automatic takeover of HANA does not start.
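After the test I checked the state with commands along these lines (hana-2 keeps reporting itself as secondary/DEMOTED, as the log further down also shows):

crm_mon -r -1 -A      # one-shot cluster status including node attributes
SAPHanaSR-showAttr    # attribute overview from the SAPHanaSR package, if installed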
Could someone give me a hand?

Here is the output of 'crm configure show':

node hana-1 \
    attributes hana_hdd_vhost="hana-1" hana_hdd_site="primary" hana_hdd_srmode="sync" hana_hdd_remoteHost="hana-2" lpa_hdd_lpt="1442396169" standby="off"
node hana-2 \
    attributes hana_hdd_vhost="hana-2" lpa_hdd_lpt="30" hana_hdd_remoteHost="hana-1" hana_hdd_site="secondary" hana_hdd_srmode="sync" standby="off" maintenance="off"
primitive rsc_SAPHanaTopology_HDD_HDB00 ocf:suse:SAPHanaTopology \
    params SID="HDD" InstanceNumber="00" \
    op monitor interval="10" timeout="600" \
    op start interval="0" timeout="600" \
    op stop interval="0" timeout="300"
primitive rsc_SAPHana_HDD_HDB00 ocf:suse:SAPHana \
    params SID="HDD" InstanceNumber="00" PREFER_SITE_TAKEOVER="true" AUTOMATED_REGISTER="false" DUPLICATE_PRIMARY_TIMEOUT="7200" \
    op start timeout="3600" interval="0" \
    op stop timeout="3600" interval="0" \
    op promote timeout="3600" interval="0" \
    op monitor role="Master" timeout="700" interval="60" \
    op monitor role="Slave" timeout="700" interval="61"
primitive rsc_ip_HDD_HDB00 ocf:heartbeat:IPaddr2 \
    params ip="192.168.129.10" \
    op start interval="0" timeout="20" \
    op stop interval="0" timeout="20" \
    op monitor interval="10" timeout="20" \
    meta target-role="Started"
primitive stonith-sbd stonith:external/sbd \
    meta target-role="Started"
ms msl_SAPHana_HDD_HDB00 rsc_SAPHana_HDD_HDB00 \
    meta clone-max="2" clone-node-max="1" interleave="true" target-role="Started"
clone cln_SAPHanaTopology_HDD_HDB00 rsc_SAPHanaTopology_HDD_HDB00 \
    meta is-managed="true" clone-node-max="1" interleave="true" target-role="Started" clone-max="2"
location cli-prefer-msl_SAPHana_HDD_HDB00 msl_SAPHana_HDD_HDB00 inf: hana-2
colocation col_saphana_ip_HDD_HDB00 2000: rsc_ip_HDD_HDB00:Started msl_SAPHana_HDD_HDB00:Master
order ord_SAPHana_HDD_HDB00 2000: cln_SAPHanaTopology_HDD_HDB00 msl_SAPHana_HDD_HDB00
property $id="cib-bootstrap-options" \
    stonith-enabled="true" \
    no-quorum-policy="ignore" \
    placement-strategy="balanced" \
    dc-version="1.1.11-3ca8c3b" \
    cluster-infrastructure="classic openais (with plugin)" \
    expected-quorum-votes="2" \
    last-lrm-refresh="1442241921"
rsc_defaults $id="rsc-options" \
    resource-stickiness="1" \
    migration-threshold="3"
op_defaults $id="op-options" \
    timeout="600" \
    record-pending="true"

Here is the log from /var/log/messages on hana-2 (the surviving node):

Sep 16 15:15:36 hana-2 corosync[5998]: [TOTEM ] A processor failed, forming new configuration.
Sep 16 15:15:42 hana-2 corosync[5998]: [CLM ] CLM CONFIGURATION CHANGE
Sep 16 15:15:42 hana-2 corosync[5998]: [CLM ] New Configuration:
Sep 16 15:15:42 hana-2 corosync[5998]: [CLM ] r(0) ip(192.168.129.131)
Sep 16 15:15:42 hana-2 corosync[5998]: [CLM ] Members Left:
Sep 16 15:15:42 hana-2 corosync[5998]: [CLM ] r(0) ip(192.168.129.130)
Sep 16 15:15:42 hana-2 corosync[5998]: [CLM ] Members Joined:
Sep 16 15:15:42 hana-2 corosync[5998]: [pcmk ] notice: pcmk_peer_update: Transitional membership event on ring 192: memb=1, new=0, lost=1
Sep 16 15:15:42 hana-2 corosync[5998]: [pcmk ] info: pcmk_peer_update: memb: hana-2 1084785027
Sep 16 15:15:42 hana-2 corosync[5998]: [pcmk ] info: pcmk_peer_update: lost: hana-1 1084785026
Sep 16 15:15:42 hana-2 corosync[5998]: [CLM ] CLM CONFIGURATION CHANGE
Sep 16 15:15:42 hana-2 corosync[5998]: [CLM ] New Configuration:
Sep 16 15:15:42 hana-2 corosync[5998]: [CLM ] r(0) ip(192.168.129.131)
Sep 16 15:15:42 hana-2 corosync[5998]: [CLM ] Members Left:
Sep 16 15:15:42 hana-2 corosync[5998]: [CLM ] Members Joined:
Sep 16 15:15:42 hana-2 corosync[5998]: [pcmk ] notice: pcmk_peer_update: Stable membership event on ring 192: memb=1, new=0, lost=0
Sep 16 15:15:42 hana-2 corosync[5998]: [pcmk ] info: pcmk_peer_update: MEMB: hana-2 1084785027
Sep 16 15:15:42 hana-2 corosync[5998]: [pcmk ] info: ais_mark_unseen_peer_dead: Node hana-1 was not seen in the previous transition
Sep 16 15:15:42 hana-2 corosync[5998]: [pcmk ] info: update_member: Node 1084785026/hana-1 is now: lost
Sep 16 15:15:42 hana-2 corosync[5998]: [pcmk ] info: send_member_notification: Sending membership update 192 to 3 children
Sep 16 15:15:42 hana-2 corosync[5998]: [TOTEM ] A processor joined or left the membership and a new membership was formed.
Sep 16 15:15:42 hana-2 corosync[5998]: [CPG ] chosen downlist: sender r(0) ip(192.168.129.131) ; members(old:2 left:1)
Sep 16 15:15:42 hana-2 corosync[5998]: [MAIN ] Completed service synchronization, ready to provide service.
Sep 16 15:15:42 hana-2 crmd[6008]: notice: plugin_handle_membership: Membership 192: quorum lost
Sep 16 15:15:42 hana-2 cib[6003]: notice: plugin_handle_membership: Membership 192: quorum lost
Sep 16 15:15:42 hana-2 stonith-ng[6004]: notice: plugin_handle_membership: Membership 192: quorum lost
Sep 16 15:15:42 hana-2 crmd[6008]: warning: match_down_event: No match for shutdown action on hana-1
Sep 16 15:15:42 hana-2 cib[6003]: notice: crm_update_peer_state: plugin_handle_membership: Node hana-1[1084785026] - state is now lost (was member)
Sep 16 15:15:42 hana-2 stonith-ng[6004]: notice: crm_update_peer_state: plugin_handle_membership: Node hana-1[1084785026] - state is now lost (was member)
Sep 16 15:15:42 hana-2 crmd[6008]: notice: peer_update_callback: Stonith/shutdown of hana-1 not matched
Sep 16 15:15:42 hana-2 crmd[6008]: notice: crm_update_peer_state: plugin_handle_membership: Node hana-1[1084785026] - state is now lost (was member)
Sep 16 15:15:42 hana-2 crmd[6008]: warning: match_down_event: No match for shutdown action on hana-1
Sep 16 15:15:42 hana-2 crmd[6008]: notice: peer_update_callback: Stonith/shutdown of hana-1 not matched
Sep 16 15:15:42 hana-2 crmd[6008]: notice: crm_update_quorum: Updating quorum status to false (call=3257)
Sep 16 15:15:42 hana-2 sbd: [5981]: WARN: CIB: We do NOT have quorum!
Sep 16 15:15:42 hana-2 sbd: [5979]: WARN: Pacemaker health check: UNHEALTHY
Sep 16 15:15:42 hana-2 crmd[6008]: notice: do_state_transition: State transition S_IDLE -> S_INTEGRATION [ input=I_NODE_JOIN cause=C_FSA_INTERNAL origin=check_join_state ]
Sep 16 15:15:42 hana-2 crmd[6008]: notice: crm_update_quorum: Updating quorum status to false (call=3262)
Sep 16 15:15:42 hana-2 attrd[6006]: notice: attrd_local_callback: Sending full refresh (origin=crmd)
Sep 16 15:15:42 hana-2 attrd[6006]: notice: attrd_trigger_update: Sending flush op to all hosts for: hana_hdd_roles (4:S:master1:master:worker:master)
Sep 16 15:15:42 hana-2 pengine[6007]: notice: unpack_config: On loss of CCM Quorum: Ignore
Sep 16 15:15:42 hana-2 pengine[6007]: warning: pe_fence_node: Node hana-1 will be fenced because the node is no longer part of the cluster
Sep 16 15:15:42 hana-2 pengine[6007]: warning: determine_online_status: Node hana-1 is unclean
Sep 16 15:15:42 hana-2 pengine[6007]: warning: custom_action: Action rsc_SAPHana_HDD_HDB00:0_demote_0 on hana-1 is unrunnable (offline)
Sep 16 15:15:42 hana-2 pengine[6007]: warning: custom_action: Action rsc_SAPHana_HDD_HDB00:0_stop_0 on hana-1 is unrunnable (offline)
Sep 16 15:15:42 hana-2 pengine[6007]: warning: custom_action: Action rsc_SAPHana_HDD_HDB00:0_demote_0 on hana-1 is unrunnable (offline)
Sep 16 15:15:42 hana-2 pengine[6007]: warning: custom_action: Action rsc_SAPHana_HDD_HDB00:0_stop_0 on hana-1 is unrunnable (offline)
Sep 16 15:15:42 hana-2 pengine[6007]: warning: custom_action: Action rsc_SAPHana_HDD_HDB00:0_demote_0 on hana-1 is unrunnable (offline)
Sep 16 15:15:42 hana-2 pengine[6007]: warning: custom_action: Action rsc_SAPHana_HDD_HDB00:0_stop_0 on hana-1 is unrunnable (offline)
Sep 16 15:15:42 hana-2 pengine[6007]: warning: custom_action: Action rsc_SAPHana_HDD_HDB00:0_demote_0 on hana-1 is unrunnable (offline)
Sep 16 15:15:42 hana-2 pengine[6007]: warning: custom_action: Action rsc_SAPHana_HDD_HDB00:0_stop_0 on hana-1 is unrunnable (offline)
Sep 16 15:15:42 hana-2 pengine[6007]: warning: custom_action: Action rsc_SAPHana_HDD_HDB00:0_demote_0 on hana-1 is unrunnable (offline)
Sep 16 15:15:42 hana-2 pengine[6007]: warning: custom_action: Action rsc_SAPHana_HDD_HDB00:0_stop_0 on hana-1 is unrunnable (offline)
Sep 16 15:15:42 hana-2 pengine[6007]: warning: custom_action: Action rsc_SAPHanaTopology_HDD_HDB00:0_stop_0 on hana-1 is unrunnable (offline)
Sep 16 15:15:42 hana-2 pengine[6007]: warning: custom_action: Action rsc_SAPHanaTopology_HDD_HDB00:0_stop_0 on hana-1 is unrunnable (offline)
Sep 16 15:15:42 hana-2 pengine[6007]: warning: custom_action: Action rsc_ip_HDD_HDB00_stop_0 on hana-1 is unrunnable (offline)
Sep 16 15:15:42 hana-2 pengine[6007]: warning: stage6: Scheduling Node hana-1 for STONITH
Sep 16 15:15:42 hana-2 pengine[6007]: notice: LogActions: Demote rsc_SAPHana_HDD_HDB00:0 (Master -> Stopped hana-1)
Sep 16 15:15:42 hana-2 pengine[6007]: notice: LogActions: Stop rsc_SAPHanaTopology_HDD_HDB00:0 (hana-1)
Sep 16 15:15:42 hana-2 pengine[6007]: notice: LogActions: Move rsc_ip_HDD_HDB00 (Started hana-1 -> hana-2)
Sep 16 15:15:42 hana-2 crmd[6008]: notice: do_te_invoke: Processing graph 2502 (ref=pe_calc-dc-1442409342-2814) derived from /var/lib/pacemaker/pengine/pe-warn-18.bz2
Sep 16 15:15:42 hana-2 crmd[6008]: notice: te_fence_node: Executing reboot fencing operation (47) on hana-1 (timeout=60000)
Sep 16 15:15:42 hana-2 crmd[6008]: notice: te_rsc_command: Initiating action 58: notify rsc_SAPHana_HDD_HDB00_pre_notify_demote_0 on hana-2 (local)
Sep 16 15:15:42 hana-2 attrd[6006]: notice: attrd_trigger_update: Sending flush op to all hosts for: hana_hdd_clone_state (DEMOTED)
Sep 16 15:15:42 hana-2 pengine[6007]: warning: process_pe_message: Calculated Transition 2502: /var/lib/pacemaker/pengine/pe-warn-18.bz2
Sep 16 15:15:42 hana-2 stonith-ng[6004]: notice: handle_request: Client crmd.6008.24533c34 wants to fence (reboot) 'hana-1' with device '(any)'
Sep 16 15:15:42 hana-2 stonith-ng[6004]: notice: initiate_remote_stonith_op: Initiating remote operation reboot for hana-1: 39a991df-b4e2-4c9c-a37e-00416c0ad4cb (0)
Sep 16 15:15:42 hana-2 attrd[6006]: notice: attrd_trigger_update: Sending flush op to all hosts for: shutdown (0)
Sep 16 15:15:42 hana-2 attrd[6006]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Sep 16 15:15:42 hana-2 attrd[6006]: notice: attrd_trigger_update: Sending flush op to all hosts for: last-failure-rsc_SAPHana_HDD_HDB00 (1442397820)
Sep 16 15:15:42 hana-2 crmd[6008]: notice: process_lrm_event: LRM operation rsc_SAPHana_HDD_HDB00_notify_0 (call=131, rc=0, cib-update=0, confirmed=true) ok
Sep 16 15:15:42 hana-2 stonith-ng[6004]: notice: can_fence_host_with_device: stonith-sbd can fence hana-1: dynamic-list
Sep 16 15:15:42 hana-2 sbd: [20863]: info: Delivery process handling /dev/sdc
Sep 16 15:15:42 hana-2 sbd: [20863]: info: Device UUID: 968741ca-3f30-41b6-ba0b-fb3830e8fb44
Sep 16 15:15:42 hana-2 sbd: [20863]: info: Writing reset to node slot hana-1
Sep 16 15:15:42 hana-2 sbd: [20863]: info: Messaging delay: 10
Sep 16 15:15:43 hana-2 crmd[6008]: notice: handle_request: Current ping state: S_TRANSITION_ENGINE
Sep 16 15:15:43 hana-2 su: (to hddadm) root on none
Sep 16 15:15:46 hana-2 SAPHanaTopology(rsc_SAPHanaTopology_HDD_HDB00)[20871]: INFO: DEC: site=secondary, mode=sync, MAPPING=hana-1, hanaRemoteHost=
Sep 16 15:15:46 hana-2 SAPHanaTopology(rsc_SAPHanaTopology_HDD_HDB00)[20871]: INFO: RA ==== begin action monitor_clone ( 0.149.3) ====
Sep 16 15:15:46 hana-2 su: (to hddadm) root on none
Sep 16 15:15:49 hana-2 SAPHanaTopology(rsc_SAPHanaTopology_HDD_HDB00)[20871]: INFO: RA ==== end action monitor_clone with rc=0 ( 0.149.3) (6s)====
Sep 16 15:15:52 hana-2 sbd: [20863]: info: reset successfully delivered to hana-1
Sep 16 15:15:52 hana-2 sbd: [20862]: info: Message successfully delivered.
Sep 16 15:15:53 hana-2 SAPHana(rsc_SAPHana_HDD_HDB00)[21131]: INFO: RA ==== begin action monitor_clone (0.149.4) ====
Sep 16 15:15:53 hana-2 su: (to hddadm) root on none
Sep 16 15:15:53 hana-2 stonith-ng[6004]: notice: log_operation: Operation 'reboot' [20851] (call 5 from crmd.6008) for host 'hana-1' with device 'stonith-sbd' returned: 0 (OK)
Sep 16 15:15:53 hana-2 stonith-ng[6004]: notice: remote_op_done: Operation reboot of hana-1 by hana-2 for crmd.6008@hana-2.39a991df: OK
Sep 16 15:15:53 hana-2 crmd[6008]: notice: tengine_stonith_callback: Stonith operation 5/47:2502:0:1524646f-96f8-46bf-80ac-1cf9c95aadc1: OK (0)
Sep 16 15:15:53 hana-2 crmd[6008]: notice: tengine_stonith_notify: Peer hana-1 was terminated (reboot) by hana-2 for hana-2: OK (ref=39a991df-b4e2-4c9c-a37e-00416c0ad4cb) by client crmd.6008
Sep 16 15:15:53 hana-2 crmd[6008]: notice: te_rsc_command: Initiating action 44: start rsc_ip_HDD_HDB00_start_0 on hana-2 (local)
Sep 16 15:15:53 hana-2 crmd[6008]: notice: te_rsc_command: Initiating action 59: notify rsc_SAPHana_HDD_HDB00_post_notify_demote_0 on hana-2 (local)
Sep 16 15:15:53 hana-2 crmd[6008]: notice: process_lrm_event: LRM operation rsc_SAPHana_HDD_HDB00_notify_0 (call=133, rc=0, cib-update=0, confirmed=true) ok
Sep 16 15:15:53 hana-2 crmd[6008]: notice: te_rsc_command: Initiating action 57: notify rsc_SAPHana_HDD_HDB00_pre_notify_stop_0 on hana-2 (local)
Sep 16 15:15:53 hana-2 crmd[6008]: notice: process_lrm_event: LRM operation rsc_SAPHana_HDD_HDB00_notify_0 (call=134, rc=0, cib-update=0, confirmed=true) ok
Sep 16 15:15:53 hana-2 crmd[6008]: notice: te_rsc_command: Initiating action 50: notify rsc_SAPHana_HDD_HDB00_post_notify_stop_0 on hana-2 (local)
Sep 16 15:15:53 hana-2 crmd[6008]: notice: process_lrm_event: LRM operation rsc_SAPHana_HDD_HDB00_notify_0 (call=135, rc=0, cib-update=0, confirmed=true) ok
Sep 16 15:15:53 hana-2 IPaddr2(rsc_ip_HDD_HDB00)[21300]: INFO: Adding inet address 192.168.129.10/24 with broadcast address 192.168.129.255 to device eth0
Sep 16 15:15:53 hana-2 IPaddr2(rsc_ip_HDD_HDB00)[21300]: INFO: Bringing device eth0 up
Sep 16 15:15:53 hana-2 IPaddr2(rsc_ip_HDD_HDB00)[21300]: INFO: /usr/lib64/heartbeat/send_arp -i 200 -r 5 -p /var/run/resource-agents/send_arp-192.168.129.10 eth0 192.168.129.10 auto not_used not_used
Sep 16 15:15:54 hana-2 crmd[6008]: notice: process_lrm_event: LRM operation rsc_ip_HDD_HDB00_start_0 (call=132, rc=0, cib-update=3270, confirmed=true) ok
Sep 16 15:15:54 hana-2 crmd[6008]: notice: run_graph: Transition 2502 (Complete=24, Pending=0, Fired=0, Skipped=4, Incomplete=1, Source=/var/lib/pacemaker/pengine/pe-warn-18.bz2): Stopped
Sep 16 15:15:54 hana-2 pengine[6007]: notice: unpack_config: On loss of CCM Quorum: Ignore
Sep 16 15:15:54 hana-2 pengine[6007]: notice: process_pe_message: Calculated Transition 2503: /var/lib/pacemaker/pengine/pe-input-1966.bz2
Sep 16 15:15:54 hana-2 crmd[6008]: notice: do_te_invoke: Processing graph 2503 (ref=pe_calc-dc-1442409354-2824) derived from /var/lib/pacemaker/pengine/pe-input-1966.bz2
Sep 16 15:15:54 hana-2 crmd[6008]: notice: te_rsc_command: Initiating action 34: monitor rsc_ip_HDD_HDB00_monitor_10000 on hana-2 (local)
Sep 16 15:15:54 hana-2 crmd[6008]: notice: process_lrm_event: LRM operation rsc_ip_HDD_HDB00_monitor_10000 (call=136, rc=0, cib-update=3272, confirmed=false) ok
Sep 16 15:15:54 hana-2 crmd[6008]: notice: run_graph: Transition 2503 (Complete=1, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-1966.bz2): Complete
Sep 16 15:15:54 hana-2 crmd[6008]: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
Sep 16 15:15:55 hana-2 crmd[6008]: notice: handle_request: Current ping state: S_IDLE
Sep 16 15:15:56 hana-2 crmd[6008]: notice: handle_request: Current ping state: S_IDLE
Sep 16 15:15:56 hana-2 su: (to hddadm) root on none
Sep 16 15:15:59 hana-2 su: (to hddadm) root on none
Sep 16 15:16:00 hana-2 SAPHana(rsc_SAPHana_HDD_HDB00)[21131]: INFO: RA ==== end action monitor_clone with rc=0 (0.149.4) (7s)====
Sep 16 15:16:00 hana-2 lrmd[6005]: notice: operation_finished: rsc_SAPHana_HDD_HDB00_monitor_61000:21131:stderr [ Error performing operation: No such device or address ]
Sep 16 15:16:00 hana-2 lrmd[6005]: notice: operation_finished: rsc_SAPHana_HDD_HDB00_monitor_61000:21131:stderr [ Could not map name=lpa_hdd_lpt to a UUID ]
Sep 16 15:16:00 hana-2 lrmd[6005]: notice: operation_finished: rsc_SAPHana_HDD_HDB00_monitor_61000:21131:stderr [ Error performing operation: No such device or address ]
Sep 16 15:16:02 hana-2 SAPHanaTopology(rsc_SAPHanaTopology_HDD_HDB00)[21523]: INFO: DEC: site=secondary, mode=sync, MAPPING=hana-1, hanaRemoteHost=
Sep 16 15:16:02 hana-2 SAPHanaTopology(rsc_SAPHanaTopology_HDD_HDB00)[21523]: INFO: RA ==== begin action monitor_clone ( 0.149.3) ====
Sep 16 15:16:02 hana-2 su: (to hddadm) root on none
Sep 16 15:16:06 hana-2 SAPHanaTopology(rsc_SAPHanaTopology_HDD_HDB00)[21523]: INFO: RA ==== end action monitor_clone with rc=0 ( 0.149.3) (7s)====
Sep 16 15:16:16 hana-2 su: (to hddadm) root on none
Sep 16 15:16:18 hana-2 SAPHanaTopology(rsc_SAPHanaTopology_HDD_HDB00)[21926]: INFO: DEC: site=secondary, mode=sync, MAPPING=hana-1, hanaRemoteHost=
Sep 16 15:16:18 hana-2 SAPHanaTopology(rsc_SAPHanaTopology_HDD_HDB00)[21926]: INFO: RA ==== begin action monitor_clone ( 0.149.3) ====
Sep 16 15:16:18 hana-2 su: (to hddadm) root on none
Sep 16 15:16:22 hana-2 SAPHanaTopology(rsc_SAPHanaTopology_HDD_HDB00)[21926]: INFO: RA ==== end action monitor_clone with rc=0 ( 0.149.3) (6s)====
Sep 16 15:16:32 hana-2 su: (to hddadm) root on none
Sep 16 15:16:34 hana-2 SAPHanaTopology(rsc_SAPHanaTopology_HDD_HDB00)[22245]: INFO: DEC: site=secondary, mode=sync, MAPPING=hana-1, hanaRemoteHost=
Sep 16 15:16:34 hana-2 SAPHanaTopology(rsc_SAPHanaTopology_HDD_HDB00)[22245]: INFO: RA ==== begin action monitor_clone ( 0.149.3) ====
Sep 16 15:16:34 hana-2 su: (to hddadm) root on none
Sep 16 15:16:37 hana-2 SAPHanaTopology(rsc_SAPHanaTopology_HDD_HDB00)[22245]: INFO: RA ==== end action monitor_clone with rc=0 ( 0.149.3) (5s)====
Sep 16 15:16:48 hana-2 su: (to hddadm) root on none
Sep 16 15:16:50 hana-2 SAPHanaTopology(rsc_SAPHanaTopology_HDD_HDB00)[22600]: INFO: DEC: site=secondary, mode=sync, MAPPING=hana-1, hanaRemoteHost=
Sep 16 15:16:50 hana-2 SAPHanaTopology(rsc_SAPHanaTopology_HDD_HDB00)[22600]: INFO: RA ==== begin action monitor_clone ( 0.149.3) ====
Sep 16 15:16:50 hana-2 su: (to hddadm) root on none
Sep 16 15:16:53 hana-2 SAPHanaTopology(rsc_SAPHanaTopology_HDD_HDB00)[22600]: INFO: RA ==== end action monitor_clone with rc=0 ( 0.149.3) (6s)====
Sep 16 15:17:01 hana-2 SAPHana(rsc_SAPHana_HDD_HDB00)[22910]: INFO: RA ==== begin action monitor_clone (0.149.4) ====
Sep 16 15:17:02 hana-2 su: (to hddadm) root on none
Sep 16 15:17:04 hana-2 su: (to hddadm) root on none
Sep 16 15:17:04 hana-2 su: (to hddadm) root on none
Sep 16 15:17:08 hana-2 SAPHanaTopology(rsc_SAPHanaTopology_HDD_HDB00)[23078]: INFO: DEC: site=secondary, mode=sync, MAPPING=hana-1, hanaRemoteHost=
Sep 16 15:17:08 hana-2 SAPHanaTopology(rsc_SAPHanaTopology_HDD_HDB00)[23078]: INFO: RA ==== begin action monitor_clone ( 0.149.3) ====
Sep 16 15:17:08 hana-2 su: (to hddadm) root on none
Sep 16 15:17:11 hana-2 SAPHana(rsc_SAPHana_HDD_HDB00)[22910]: INFO: RA ==== end action monitor_clone with rc=0 (0.149.4) (10s)====
Sep 16 15:17:11 hana-2 lrmd[6005]: notice: operation_finished: rsc_SAPHana_HDD_HDB00_monitor_61000:22910:stderr [ Error performing operation: No such device or address ]
Sep 16 15:17:11 hana-2 lrmd[6005]: notice: operation_finished: rsc_SAPHana_HDD_HDB00_monitor_61000:22910:stderr [ Could not map name=lpa_hdd_lpt to a UUID ]
Sep 16 15:17:11 hana-2 lrmd[6005]: notice: operation_finished: rsc_SAPHana_HDD_HDB00_monitor_61000:22910:stderr [ Error performing operation: No such device or address ]
Sep 16 15:17:13 hana-2 SAPHanaTopology(rsc_SAPHanaTopology_HDD_HDB00)[23078]: INFO: RA ==== end action monitor_clone with rc=0 ( 0.149.3) (10s)====
