Second node rebooted when first node comes back online

Hi,

Weird but urgent problem:

We have a two-node cluster. When we simulate a failure by restarting the first node, the second node gets rebooted by the cluster service the moment the cluster service comes back online on the first node.

Does anyone have any idea where to look for the cause of this?

Sander

Hi Sander,

[QUOTE=japeape;30841]Hi,

Weird but urgent problem:

We have a two-node cluster. When we simulate a failure by restarting the first node, the second node gets rebooted by the cluster service the moment the cluster service comes back online on the first node.

Does anyone have any idea where to look for the cause of this?[/QUOTE]

Yes: in syslog. Check the pacemaker messages to see who initiated the reboot.
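For example (just a rough sketch, assuming the default /var/log/messages location; <your-sbd-device> is a placeholder for the shared sbd device configured in /etc/sysconfig/sbd):

# which daemon asked for the fence, and which pengine transition scheduled it
grep -E "stonith|fence|Scheduling Node" /var/log/messages

# the message slots on the shared sbd device can confirm who wrote the reset
sbd -d <your-sbd-device> list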

I’ve seen this before but don’t recall the actual cause; it may have been that both nodes decided to fence each other, and the second fencing action got delayed until quorum was restored…
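If it does turn out that both nodes tried to fence each other, a random fencing delay on the stonith resource is a common mitigation in two-node clusters. A sketch using crmsh, assuming your fencing primitive is called stonith-sbd and that your pacemaker version already supports pcmk_delay_max (the 30 seconds is just an example value):

# inspect the current fencing resource first
crm configure show stonith-sbd

# then add a random delay of up to 30s before a fence request is executed,
# so both nodes do not shoot at exactly the same moment
crm configure edit stonith-sbd
#   params pcmk_delay_max=30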

Regards,
Jens

This is an extract from /var/log/messages:

Dec 14 01:27:02 server1 SAPHana(rsc_SAPHana_BRP_HDB00)[8574]: INFO: RA ==== end action monitor_clone with rc=7 (0.149.4) (7s)====
Dec 14 01:27:02 server1 lrmd[8052]: notice: operation_finished: rsc_SAPHana_BRP_HDB00_monitor_0:8574 [ Error performing operation: No such device or address ]
Dec 14 01:27:02 server1 lrmd[8052]: notice: operation_finished: rsc_SAPHana_BRP_HDB00_monitor_0:8574 [ Error performing operation: No such device or address ]
Dec 14 01:27:02 server1 lrmd[8052]: notice: operation_finished: rsc_SAPHana_BRP_HDB00_monitor_0:8574 [ Could not map name=lpa_brp_lpt to a UUID ]
Dec 14 01:27:02 server1 lrmd[8052]: notice: operation_finished: rsc_SAPHana_BRP_HDB00_monitor_0:8574 [ Error performing operation: No such device or address ]
Dec 14 01:27:02 server1 lrmd[8052]: notice: operation_finished: rsc_SAPHana_BRP_HDB00_monitor_0:8574 [ Error performing operation: No such device or address ]
Dec 14 01:27:02 server1 crmd[8055]: notice: process_lrm_event: LRM operation rsc_SAPHana_BRP_HDB00_monitor_0 (call=11, rc=7, cib-update=30, confirmed=true) not running
Dec 14 01:27:02 server1 crmd[8055]: notice: process_lrm_event: server1-rsc_SAPHana_BRP_HDB00_monitor_0:11 [ 30 ]
Dec 14 01:27:02 server1 crmd[8055]: notice: te_rsc_command: Initiating action 3: probe_complete probe_complete on server1 (local) - no waiting
Dec 14 01:27:02 server1 attrd[8053]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Dec 14 01:27:02 server1 attrd[8053]: notice: attrd_perform_update: Sent update 10: probe_complete=true
Dec 14 01:27:05 server1 sbd: [8713]: info: reset successfully delivered to server2
Dec 14 01:27:05 server1 sbd: [8706]: info: Message successfully delivered.
Dec 14 01:27:06 server1 stonith-ng[8051]: notice: log_operation: Operation ‘reboot’ [8684] (call 2 from crmd.8055) for host ‘server2’ with device ‘stonith-sbd’ returned: 0 (OK)
Dec 14 01:27:06 server1 stonith-ng[8051]: notice: remote_op_done: Operation reboot of server2 by server1 for crmd.8055@server1.01fea0a0: OK
Dec 14 01:27:06 server1 crmd[8055]: notice: tengine_stonith_callback: Stonith operation 2/44:0:0:de761f36-d72f-4ac1-aaa1-7c68150141b5: OK (0)
Dec 14 01:27:06 server1 crmd[8055]: notice: crm_update_peer_state: send_stonith_update: Node server2[0] - state is now lost (was (null))
Dec 14 01:27:06 server1 crmd[8055]: notice: tengine_stonith_notify: Peer server2 was terminated (st_notify_fence) by server1 for server1: OK (ref=01fea0a0-ab49-4c5f-a60c-66c9beaf0ce5) by client crmd.8055

The second node seems to be actively killed, but I cannot determine why from the logs.

Some more info:

Dec 15 14:48:35 server1 crmd[53914]: notice: crm_update_peer_state: crm_update_ais_node: Node server1[739574258] - state is now member (was (null))
Dec 15 14:48:35 server1 crmd[53914]: notice: do_started: The local CRM is operational
Dec 15 14:48:36 server1 mgmtd: [53915]: info: Started.
Dec 15 14:48:41 server1 corosync[53903]: [CLM ] CLM CONFIGURATION CHANGE
Dec 15 14:48:41 server1 corosync[53903]: [CLM ] New Configuration:
Dec 15 14:48:41 server1 corosync[53903]: [CLM ] r(0) ip(172.21.1.242)
Dec 15 14:48:41 server1 corosync[53903]: [CLM ] Members Left:
Dec 15 14:48:41 server1 corosync[53903]: [CLM ] Members Joined:
Dec 15 14:48:41 server1 corosync[53903]: [pcmk ] notice: pcmk_peer_update: Transitional membership event on ring 1448: memb=1, new=0, lost=0
Dec 15 14:48:41 server1 corosync[53903]: [pcmk ] info: pcmk_peer_update: memb: server1 739574258
Dec 15 14:48:41 server1 corosync[53903]: [CLM ] CLM CONFIGURATION CHANGE
Dec 15 14:48:41 server1 corosync[53903]: [CLM ] New Configuration:
Dec 15 14:48:41 server1 corosync[53903]: [CLM ] r(0) ip(172.21.1.242)
Dec 15 14:48:41 server1 corosync[53903]: [CLM ] Members Left:
Dec 15 14:48:41 server1 corosync[53903]: [CLM ] Members Joined:
Dec 15 14:48:41 server1 corosync[53903]: [pcmk ] notice: pcmk_peer_update: Stable membership event on ring 1448: memb=1, new=0, lost=0
Dec 15 14:48:41 server1 corosync[53903]: [pcmk ] info: pcmk_peer_update: MEMB: server1 739574258
Dec 15 14:48:41 server1 corosync[53903]: [TOTEM ] A processor joined or left the membership and a new membership was formed.
Dec 15 14:48:41 server1 corosync[53903]: [CPG ] chosen downlist: sender r(0) ip(172.21.1.242) ; members(old:1 left:0)
Dec 15 14:48:41 server1 corosync[53903]: [MAIN ] Completed service synchronization, ready to provide service.
Dec 15 14:48:47 server1 corosync[53903]: [CLM ] CLM CONFIGURATION CHANGE
Dec 15 14:48:47 server1 corosync[53903]: [CLM ] New Configuration:
Dec 15 14:48:47 server1 corosync[53903]: [CLM ] r(0) ip(172.21.1.242)
Dec 15 14:48:47 server1 corosync[53903]: [CLM ] Members Left:
Dec 15 14:48:47 server1 corosync[53903]: [CLM ] Members Joined:
Dec 15 14:48:47 server1 corosync[53903]: [pcmk ] notice: pcmk_peer_update: Transitional membership event on ring 1452: memb=1, new=0, lost=0
Dec 15 14:48:47 server1 corosync[53903]: [pcmk ] info: pcmk_peer_update: memb: server1 739574258
Dec 15 14:48:47 server1 corosync[53903]: [CLM ] CLM CONFIGURATION CHANGE
Dec 15 14:48:47 server1 corosync[53903]: [CLM ] New Configuration:
Dec 15 14:48:47 server1 corosync[53903]: [CLM ] r(0) ip(172.21.1.242)
Dec 15 14:48:47 server1 corosync[53903]: [CLM ] Members Left:
Dec 15 14:48:47 server1 corosync[53903]: [CLM ] Members Joined:
Dec 15 14:48:47 server1 corosync[53903]: [pcmk ] notice: pcmk_peer_update: Stable membership event on ring 1452: memb=1, new=0, lost=0
Dec 15 14:48:47 server1 corosync[53903]: [pcmk ] info: pcmk_peer_update: MEMB: server1 739574258
Dec 15 14:48:47 server1 corosync[53903]: [TOTEM ] A processor joined or left the membership and a new membership was formed.
Dec 15 14:48:47 server1 corosync[53903]: [CPG ] chosen downlist: sender r(0) ip(172.21.1.242) ; members(old:1 left:0)
Dec 15 14:48:47 server1 corosync[53903]: [MAIN ] Completed service synchronization, ready to provide service.
Dec 15 14:48:53 server1 corosync[53903]: [CLM ] CLM CONFIGURATION CHANGE
Dec 15 14:48:53 server1 corosync[53903]: [CLM ] New Configuration:
Dec 15 14:48:53 server1 corosync[53903]: [CLM ] r(0) ip(172.21.1.242)
Dec 15 14:48:53 server1 corosync[53903]: [CLM ] Members Left:
Dec 15 14:48:53 server1 corosync[53903]: [CLM ] Members Joined:
Dec 15 14:48:53 server1 corosync[53903]: [pcmk ] notice: pcmk_peer_update: Transitional membership event on ring 1456: memb=1, new=0, lost=0
Dec 15 14:48:53 server1 corosync[53903]: [pcmk ] info: pcmk_peer_update: memb: server1 739574258
Dec 15 14:48:53 server1 corosync[53903]: [CLM ] CLM CONFIGURATION CHANGE
Dec 15 14:48:53 server1 corosync[53903]: [CLM ] New Configuration:
Dec 15 14:48:53 server1 corosync[53903]: [CLM ] r(0) ip(172.21.1.242)
Dec 15 14:48:53 server1 corosync[53903]: [CLM ] Members Left:
Dec 15 14:48:53 server1 corosync[53903]: [CLM ] Members Joined:
Dec 15 14:48:53 server1 corosync[53903]: [pcmk ] notice: pcmk_peer_update: Stable membership event on ring 1456: memb=1, new=0, lost=0
Dec 15 14:48:53 server1 corosync[53903]: [pcmk ] info: pcmk_peer_update: MEMB: server1 739574258
Dec 15 14:48:53 server1 corosync[53903]: [pcmk ] info: update_member: 0x7f38a0005610 Node 739574258 ((null)) born on: 1444
Dec 15 14:48:53 server1 corosync[53903]: [TOTEM ] A processor joined or left the membership and a new membership was formed.
Dec 15 14:48:53 server1 corosync[53903]: [CPG ] chosen downlist: sender r(0) ip(172.21.1.242) ; members(old:1 left:0)
Dec 15 14:48:53 server1 corosync[53903]: [MAIN ] Completed service synchronization, ready to provide service.
Dec 15 14:48:56 server1 crmd[53914]: warning: do_log: FSA: Input I_DC_TIMEOUT from crm_timer_popped() received in state S_PENDING
Dec 15 14:49:00 server1 corosync[53903]: [CLM ] CLM CONFIGURATION CHANGE
Dec 15 14:49:00 server1 corosync[53903]: [CLM ] New Configuration:
Dec 15 14:49:00 server1 corosync[53903]: [CLM ] r(0) ip(172.21.1.242)
Dec 15 14:49:00 server1 corosync[53903]: [CLM ] Members Left:
Dec 15 14:49:00 server1 corosync[53903]: [CLM ] Members Joined:
Dec 15 14:49:00 server1 corosync[53903]: [pcmk ] notice: pcmk_peer_update: Transitional membership event on ring 1460: memb=1, new=0, lost=0
Dec 15 14:49:00 server1 corosync[53903]: [pcmk ] info: pcmk_peer_update: memb: server1 739574258
Dec 15 14:49:00 server1 corosync[53903]: [CLM ] CLM CONFIGURATION CHANGE
Dec 15 14:49:00 server1 corosync[53903]: [CLM ] New Configuration:
Dec 15 14:49:00 server1 corosync[53903]: [CLM ] r(0) ip(172.21.1.242)
Dec 15 14:49:00 server1 corosync[53903]: [CLM ] Members Left:
Dec 15 14:49:00 server1 corosync[53903]: [CLM ] Members Joined:
Dec 15 14:49:00 server1 corosync[53903]: [pcmk ] notice: pcmk_peer_update: Stable membership event on ring 1460: memb=1, new=0, lost=0
Dec 15 14:49:00 server1 corosync[53903]: [pcmk ] info: pcmk_peer_update: MEMB: server1 739574258
Dec 15 14:49:00 server1 corosync[53903]: [TOTEM ] A processor joined or left the membership and a new membership was formed.
Dec 15 14:49:00 server1 corosync[53903]: [CPG ] chosen downlist: sender r(0) ip(172.21.1.242) ; members(old:1 left:0)
Dec 15 14:49:00 server1 corosync[53903]: [MAIN ] Completed service synchronization, ready to provide service.
Dec 15 14:49:00 server1 crmd[53914]: notice: do_state_transition: State transition S_ELECTION → S_INTEGRATION [ input=I_ELECTION_DC cause=C_FSA_INTERNAL origin=do_election_check ]
Dec 15 14:49:00 server1 crmd[53914]: notice: crm_update_quorum: Updating quorum status to false (call=25)
Dec 15 14:49:00 server1 attrd[53912]: notice: attrd_local_callback: Sending full refresh (origin=crmd)
Dec 15 14:49:00 server1 cib[53909]: warning: qb_ipcs_event_sendv: new_event_notification (53909-53932-12): Broken pipe (32)
Dec 15 14:49:00 server1 cib[53909]: warning: cib_notify_send_one: Notification of client monitor/e258d6f7-1147-419a-ad3e-af33666eb644 failed
Dec 15 14:49:00 server1 pengine[53913]: notice: unpack_config: On loss of CCM Quorum: Ignore
Dec 15 14:49:00 server1 pengine[53913]: warning: stage6: Scheduling Node server2 for STONITH
Dec 15 14:49:00 server1 pengine[53913]: notice: LogActions: Start stonith-sbd (server1)
Dec 15 14:49:00 server1 pengine[53913]: notice: LogActions: Start rsc_SAPHana_BRP_HDB00:0 (server1)
Dec 15 14:49:00 server1 pengine[53913]: notice: LogActions: Start rsc_SAPHanaTopology_BRP_HDB00:0 (server1)
Dec 15 14:49:00 server1 pengine[53913]: notice: LogActions: Start rsc_ip_BRP_HDB00 (server1)
Dec 15 14:49:00 server1 pengine[53913]: warning: process_pe_message: Calculated Transition 0: /var/lib/pacemaker/pengine/pe-warn-35.bz2
Dec 15 14:49:00 server1 crmd[53914]: notice: do_te_invoke: Processing graph 0 (ref=pe_calc-dc-1450183740-7) derived from /var/lib/pacemaker/pengine/pe-warn-35.bz2
Dec 15 14:49:00 server1 crmd[53914]: notice: te_rsc_command: Initiating action 4: monitor stonith-sbd_monitor_0 on server1 (local)
Dec 15 14:49:00 server1 crmd[53914]: notice: te_rsc_command: Initiating action 5: monitor rsc_SAPHana_BRP_HDB00:0_monitor_0 on server1 (local)
Dec 15 14:49:00 server1 crmd[53914]: notice: te_rsc_command: Initiating action 6: monitor rsc_SAPHanaTopology_BRP_HDB00:0_monitor_0 on server1 (local)
Dec 15 14:49:00 server1 crmd[53914]: notice: te_rsc_command: Initiating action 7: monitor rsc_ip_BRP_HDB00_monitor_0 on server1 (local)
Dec 15 14:49:00 server1 crmd[53914]: notice: te_fence_node: Executing reboot fencing operation (28) on server2 (timeout=60000)
Dec 15 14:49:00 server1 stonith-ng[53910]: notice: handle_request: Client crmd.53914.ae60c805 wants to fence (reboot) ‘server2’ with device ‘(any)’
Dec 15 14:49:00 server1 stonith-ng[53910]: notice: initiate_remote_stonith_op: Initiating remote operation reboot for server2: 6a41cda9-f4c8-4607-9453-7f0dfddefc91 (0)
Dec 15 14:49:00 server1 su: (to brpadm) root on none
Dec 15 14:49:00 server1 crmd[53914]: notice: process_lrm_event: LRM operation rsc_ip_BRP_HDB00_monitor_0 (call=20, rc=7, cib-update=32, confirmed=true) not running
Dec 15 14:49:00 server1 SAPHana(rsc_SAPHana_BRP_HDB00)[53933]: INFO: RA ==== begin action monitor_clone (0.149.4) ====
Dec 15 14:49:00 server1 su: (to brpadm) root on none
Dec 15 14:49:01 server1 sbd: [54092]: info: Delivery process handling /dev/disk/by-id/dm-name-360060e802211a800504111a800000c00
Dec 15 14:49:01 server1 sbd: [54092]: info: Device UUID: 0b7dd9c4-0005-4ccf-8603-971703b91d6c
Dec 15 14:49:01 server1 sbd: [54092]: info: Writing reset to node slot server2
Dec 15 14:49:01 server1 sbd: [54092]: info: Messaging delay: 10
Dec 15 14:49:01 server1 crmd[53914]: notice: handle_request: Current ping state: S_TRANSITION_ENGINE
Dec 15 14:49:02 server1 SAPHanaTopology(rsc_SAPHanaTopology_BRP_HDB00)[53934]: INFO: DEC: site=BRPPRI, mode=sync, MAPPING=server2, hanaRemoteHost=
Dec 15 14:49:02 server1 SAPHanaTopology(rsc_SAPHanaTopology_BRP_HDB00)[53934]: INFO: RA ==== begin action monitor_clone ( 0.149.3) ====
Dec 15 14:49:02 server1 attrd[53912]: notice: attrd_trigger_update: Sending flush op to all hosts for: hana_brp_clone_state (DEMOTED)
Dec 15 14:49:02 server1 attrd[53912]: notice: attrd_perform_update: Sent update 4: hana_brp_clone_state=DEMOTED
Dec 15 14:49:02 server1 su: (to brpadm) root on none
Dec 15 14:49:02 server1 su: (to brpadm) root on none

Hi Sander,

the excerpt only covers the short time window in which the sbd reset message was delivered. Check much earlier entries as well, around the time the other node was rebooted; I’d expect the reboot decision to have been made then.
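You could also replay the policy-engine file referenced in your log to see exactly why server2 was scheduled for STONITH; crm_simulate should be able to read the bz2 input directly, roughly like this:

crm_simulate -S -x /var/lib/pacemaker/pengine/pe-warn-35.bz2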

Regards,
Jens

Hi Sander,

is the above excerpt from after a reboot of server1 (or a restart of its cluster services)? I see that it starts with server1 becoming a member. Considering your initial message, I’ll assume that server1 was fenced and that you then observed the above messages during its start-up phase, when server2 got fenced.

The ring status updates look fishy: it looks as if only one node is on the ring (server1?, IP 172.21.1.242); likely that’s why it reboots server2, to reach a clean state.

Quorum is not involved; I see that you have set the corresponding option to ignore loss of quorum.
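To double-check what the ring and the relevant cluster options look like, something along these lines (run on both nodes) might help:

# membership / ring status as corosync sees it
corosync-cfgtool -s

# the cluster properties that matter here
crm configure show | grep -E "no-quorum-policy|stonith-enabled|startup-fencing"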

Regards,
Jens

I think this is the problem. Can a corrupt /var/lib/pacemaker/cib/cib.xml cause this behaviour?

Dec 15 14:48:34 hnbrpdb1 mgmtd: [53915]: info: Pacemaker-mgmt Git Version:
Dec 15 14:48:34 hnbrpdb1 mgmtd: [53915]: WARN: Core dumps could be lost if multiple dumps occur.
Dec 15 14:48:34 hnbrpdb1 mgmtd: [53915]: WARN: Consider setting non-default value in /proc/sys/kernel/core_pattern (or equivalent) for maximum supportability
Dec 15 14:48:34 hnbrpdb1 mgmtd: [53915]: WARN: Consider setting /proc/sys/kernel/core_uses_pid (or equivalent) to 1 for maximum supportability
Dec 15 14:48:34 hnbrpdb1 mgmtd: [53915]: notice: connect to lrmd: 0, rc=-107
Dec 15 14:48:34 hnbrpdb1 corosync[53903]: [pcmk ] info: pcmk_ipc: Recorded connection 0x6a1170 for stonith-ng/53910
Dec 15 14:48:34 hnbrpdb1 cib[53909]: error: validate_cib_digest: Digest comparision failed: expected b37e2a4901e0e5b4b2b357199d9b8f8b (/var/lib/pacemaker/cib/cib.xml.sig), calculated 0a716e016aebb4e95e59e9d75dcdeccb
Dec 15 14:48:34 hnbrpdb1 cib[53909]: error: retrieveCib: Checksum of /var/lib/pacemaker/cib/cib.xml failed! Configuration contents ignored!
Dec 15 14:48:34 hnbrpdb1 cib[53909]: error: retrieveCib: Usually this is caused by manual changes, please refer to http://clusterlabs.org/wiki/FAQ#cib_changes_detected
Dec 15 14:48:34 hnbrpdb1 cib[53909]: warning: retrieveCib: Continuing but /var/lib/pacemaker/cib/cib.xml will NOT used.
Dec 15 14:48:34 hnbrpdb1 cib[53909]: error: cib_rename: Archiving corrupt or unusable file /var/lib/pacemaker/cib/cib.xml as /var/lib/pacemaker/cib/cib.auto.z2LFxS
Dec 15 14:48:34 hnbrpdb1 cib[53909]: error: cib_rename: Archiving corrupt or unusable file /var/lib/pacemaker/cib/cib.xml.sig as /var/lib/pacemaker/cib/cib.auto.rjWFhF
Dec 15 14:48:34 hnbrpdb1 cib[53909]: warning: readCibXmlFile: Primary configuration corrupt or unusable, trying backup…
Dec 15 14:48:34 hnbrpdb1 cib[53909]: warning: readCibXmlFile: Attempting to load: /var/lib/pacemaker/cib/cib-23.raw
Dec 15 14:48:34 hnbrpdb1 cib[53909]: warning: validate_cib_digest: No on-disk digest present
Dec 15 14:48:34 hnbrpdb1 crmd[53914]: notice: main: CRM Git Version: 2db99f1
Dec 15 14:48:34 hnbrpdb1 corosync[53903]: [pcmk ] info: pcmk_ipc: Recorded connection 0x6a54d0 for attrd/53912
Dec 15 14:48:34 hnbrpdb1 attrd[53912]: notice: main: Starting mainloop…

Hi Sander,

that sounds unlikely to me… if the node rejected the CIB, it wouldn’t know about the other node and hence wouldn’t fence it.

Have you cleared the CIB situation by now, and did that solve the issue?
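If not, the usual way out of a corrupt on-disk CIB is to let the node regenerate it from its peer instead of editing the files by hand. A rough sketch of that procedure on SLES 11 HAE, assuming the other node is up and holds a good configuration:

# on the affected node only, with the cluster stack stopped
rcopenais stop                      # or however you normally stop the cluster stack
mv /var/lib/pacemaker/cib /var/lib/pacemaker/cib.broken
mkdir /var/lib/pacemaker/cib
chown hacluster:haclient /var/lib/pacemaker/cib
chmod 750 /var/lib/pacemaker/cib
rcopenais start                     # on join, the node should fetch a fresh CIB from its peer

A quick crm_verify -L -V afterwards should confirm that the loaded configuration is sane.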

If not: when server1 comes up, what does server2 recognize? Server1 reports being alone on the ring; does server2 see anything from server1 at that moment? “Famous last words”: “There is no other node that might issue a fence opera… ” :smiley:
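One quick way to check would be to watch server2’s log while you bring server1 back up; a sketch assuming the same log location on server2:

# on server2, while server1 boots and rejoins
tail -f /var/log/messages | grep -E "pcmk_peer_update|TOTEM|stonith"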

Regards,
Jens