SLES 12 SP1 XEN HA resource not shutting down guest when shutting down the host


I have set up SLES 12 SP1 with XEN and the High Availability Extension. Using HAWK 2 and two nodes, I have configured DRBD with dual primary as an HA resource, which starts and promotes the DRBD resource to primary just fine.

I have set up Windows and SLES 11 SP3 guests. I have set up an ‘order’ constraint between the DRBD HA resource and the XEN VM resource: DRBD promotes to primary, then the VirtualDomain resource starts the Windows or SLES 11 SP3 guest. The constraint’s score is set to ‘Mandatory’, and its Symmetrical option is set to ‘Yes’.
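For reference, a constraint like the one described can be expressed in the crm shell roughly as follows (the resource names ms_drbd_vm and vm_guest are placeholders for illustration, not taken from the actual configuration):

```shell
# Sketch of the described order constraint via the crm shell.
# ms_drbd_vm = the DRBD master/slave (promotable) resource,
# vm_guest   = the VirtualDomain resource -- both names are assumptions.
# Promote DRBD first, then start the guest; with symmetrical=true
# the reverse order (stop guest, then demote DRBD) applies on stop.
crm configure order ord_drbd_before_vm \
    Mandatory: ms_drbd_vm:promote vm_guest:start \
    symmetrical=true
```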

The VMs start fine using the order constraint. However, when shutting down or rebooting a host that has one or more guests running on it, HA does not automatically stop the VM and demote the DRBD backend it runs from, which corrupts the VM’s backing store. I expected the order constraint, with the settings above, to reverse its order when the host shuts down or reboots, i.e. shut down the guest, then demote the DRBD resource to secondary; however, it does not. This did not seem to be a problem with xend on SLES 11 SP2.

Is there something I am missing with libvirt and the VirtualDomain HA resource? (It was xend and the ‘xen’ resource on SLES 11 SP2; that is the only difference I can see.)




Hi John,

as you’re using a two-node cluster: what’s your “no quorum” policy set to?


Hi J,

I have the ‘no quorum’ policy set to ‘ignore’ for the 2-node cluster.
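For anyone checking the same thing, the policy can be inspected and set from the crm shell (a sketch; run on any cluster node):

```shell
# Show the current cluster-wide quorum policy, if set explicitly.
crm configure show | grep no-quorum-policy
# Set it to "ignore" (other values: stop, freeze, suicide) -- the
# usual choice for a two-node cluster on this release.
crm configure property no-quorum-policy=ignore
```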


Hi John,

> I have the ‘no quorum’ policy set to ‘ignore’ for the 2-node cluster.

then a cluster freeze due to lack of quorum isn’t the cause. Well, it was worth asking :wink:

While I know that it’s a lot of messages to wade through, have you tried analyzing Pacemaker’s log messages on the nodes (especially the remaining node) to see what it’s doing (and possibly find indications of what is missing)?
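One way to narrow that down, assuming the default syslog target on SLES 12 is /var/log/messages (a sketch; the timestamp below is a placeholder):

```shell
# Filter Pacemaker scheduler/daemon messages and resource-agent output
# around the shutdown window.
grep -E "pengine|crmd|lrmd|VirtualDomain|drbd" /var/log/messages | less
# crm_report bundles logs and cluster state from all nodes into one
# archive for offline analysis; -f gives the start of the time window.
crm_report -f "2016-01-01 00:00" /tmp/hb_report
```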

Of course, opening a service request is an option - I found the SUSE HAE engineers to be both very knowledgeable and helpful.