cluster openais constraints problem

I have two virtual machines running SLES 11 SP2, and they are in cluster mode.
For clustering I use pacemaker-openais. I have two applications in the cluster, and I want each one to run ONLY on one of the two nodes; for example, app01 -> node1 and app02 -> node2. If a node fails (reboot, etc.), its application should simply stop and must NOT migrate to the other node.

My problem is that when I reboot one of the nodes and it rejoins the cluster, the application that is running fine on the surviving node also tries to start on the rejoining node. The resource then becomes multi-active, so the application restarts.

Here is my crm configuration:

node node1 \
        attributes standby="on"
node node2 \
        attributes standby="on"
primitive apache2 lsb:apache2 \
        op monitor interval="10s" timeout="60s" on-fail="standby" start-delay="0" \
        op start interval="0" timeout="15s" on-fail="standby" start-delay="0" \
        op stop interval="0" timeout="15s" start-delay="0" \
        meta target-role="Started" is-managed="true"
primitive ppa-vip ocf:heartbeat:IPaddr2 \
        operations $id="ppa-vip1-operations" \
        op monitor interval="1" timeout="20" on-fail="standby" start-delay="0" \
        params ip="1.2.3.4" cidr_netmask="24" \
        meta target-role="Started" is-managed="true"
primitive stonith_sbd stonith:external/sbd \
        meta target-role="Started" \
        operations $id="stonith_sbd-operations" \
        op start interval="0" timeout="20" \
        params sbd_device="/dev/mapper/node-srv_part1"
primitive app01 lsb:app01 \
        op monitor interval="10s" timeout="60s" on-fail="restart" start-delay="0" \
        op start interval="0" timeout="600s" start-delay="0" \
        op stop interval="0" timeout="60s" start-delay="0" \
        meta target-role="Started" is-managed="true" migration-threshold="3"
primitive app02 lsb:app02 \
        op monitor interval="10s" timeout="60s" on-fail="restart" start-delay="0" \
        op start interval="0" timeout="600s" start-delay="0" \
        op stop interval="0" timeout="60s" start-delay="0" \
        meta target-role="Started" is-managed="true" migration-threshold="3"
group apache ppa-vip apache2 \
        meta target-role="Started" is-managed="true"
location app1 app01 -inf: node2
location app2 app02 -inf: node1
property $id="cib-bootstrap-options" \
        dc-version="1.1.6-b988976485d15cb702c9307df55512d323831a5e" \
        cluster-infrastructure="openais" \
        expected-quorum-votes="2" \
        no-quorum-policy="ignore" \
        stonith-timeout="90s" \
        default-resource-stickiness="1000000" \
        last-lrm-refresh="1411371948"
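For reference, the same one-node-per-app pinning can also be expressed as an opt-in cluster, where nothing runs anywhere unless a constraint explicitly allows it. This is only a sketch of an alternative layout, not a fix for the multi-active symptom; the constraint names below are hypothetical, and symmetric-cluster="false" would affect every resource in the cluster, so the apache group and stonith_sbd would need their own positive constraints too:

```
property symmetric-cluster="false"
location app01-on-node1 app01 inf: node1
location app02-on-node2 app02 inf: node2
```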

Any suggestions about my problem?

Best regards,
Spyros

Does your service (Apache's httpd, it seems) also get started automatically
by the OS? The service should ONLY be started by the cluster software, so
the regular /etc/init.d/ scripts must not be set to start at boot time.
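A quick way to verify this on SUSE-style systems, besides `chkconfig --list`, is to look for leftover start symlinks in the runlevel directories. A minimal sketch, assuming the service name is apache2 (adjust for app01/app02); no output before "check complete" means the init script will not be started at boot:

```shell
#!/bin/sh
# List any S?? start symlinks for the service in the SUSE
# runlevel directories /etc/init.d/rc0.d ... rc6.d.
for link in /etc/init.d/rc?.d/S??apache2; do
  if [ -e "$link" ]; then
    echo "still enabled at boot: $link"
  fi
done
echo "check complete"
```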


Good luck.


Thanks for your reply.

All the scripts are started by the cluster and not at boot time; I disabled them with the command: chkconfig service_name off.
The problem is only with the primitives app01 and app02.
All the others work properly.
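One thing worth checking for lsb: resources specifically: when a node rejoins, Pacemaker runs a one-shot probe (monitor) of every resource on it, even where a -inf location constraint forbids running it. If the init script's status action is not LSB-compliant and exits 0 while the service is actually stopped, the probe reports the resource as running there and the cluster sees it as multi-active. The stub below (hypothetical script and pid-file paths, purely for illustration) shows the exit codes an LSB status action must return:

```shell
#!/bin/sh
# Per the LSB spec, "status" must exit 0 when the service is
# running and 3 when it is stopped. Anything else confuses the
# cluster's probes. Hypothetical demo script and pid file:
cat > /tmp/lsb-status-demo.sh <<'EOF'
#!/bin/sh
case "$1" in
  status)
    if [ -f /tmp/lsb-demo.pid ]; then exit 0; else exit 3; fi ;;
esac
EOF
chmod +x /tmp/lsb-status-demo.sh
rm -f /tmp/lsb-demo.pid                 # ensure the "stopped" state
if /tmp/lsb-status-demo.sh status; then
  echo "status says: running"
else
  echo "status exit code: $?"           # compliant scripts print 3 here
fi
```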