SLES 11 SP3

I am looking for how to upgrade from SLES 11 SP2 to SP3 when you are also
running HA. I have been searching the docs for how to do this. I have quite a few
VMs running under HA and am looking for the cleanest way to upgrade. This is a
two-node HA cluster.

Hi
Have you read this one?
https://www.suse.com/documentation/sle_ha/singlehtml/book_sleha/book_sleha.html#sec.ha.migration.sle11.sp3


Cheers Malcolm °¿° SUSE Knowledge Partner (Linux Counter #276890)
SLED 11 SP3 (x86_64) GNOME 2.28.0 Kernel 3.0.93-0.8-default

Hi Rickb,

[QUOTE=Rickb;17222]I am looking for how to upgrade from SLES 11 SP2 to SP3 when you are also
running HA. I have been searching the docs for how to do this. I have quite a few
VMs running under HA and am looking for the cleanest way to upgrade. This is a
two-node HA cluster.[/QUOTE]

from what I can tell, you need to take down the cluster to upgrade (no rolling update) - but then it’s as simple as described in https://www.suse.com/support/kb/doc.php?id=7012368, i.e. “Update by using zypper”. HAE is a simple add-on product that can be upgraded online easily - so in steps 7/8, simply include the HAE migration product as well and include the HAE repos in the “zypper dup” step.
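
For reference, this is roughly what that sequence looks like on a node. The product and repository names below are from memory and depend on your registration setup, so treat them as placeholders and check with “zypper se -t product migration” and “zypper lr” before running anything:

zypper ref -s
zypper patch                      (bring SP2 fully up to date first)
zypper se -t product migration    (list the available migration products)
zypper in -t product SUSE_SLES-SP3-migration sle-hae-SP3-migration
suse_register -d 2 -L /root/.suse_register.log
zypper ref -s
zypper dup --from SLES11-SP3-Pool --from SLES11-SP3-Updates --from SLE11-HAE-SP3-Pool --from SLE11-HAE-SP3-Updates
reboot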

Regards,
Jens

On 04/11/2013 17:41, jmozdzen wrote:
[color=blue]

from what I can tell, you need to take down the cluster to upgrade (no
rolling update) - but then it’s as simple as described in
https://www.suse.com/support/kb/doc.php?id=7012368, i.e. “Update by
using zypper”. HAE is a simple add-on product that can be upgraded
online easily - so in steps 7/8, simply include the HAE migration
product as well and include the HAE repos in the “zypper dup” step.[/color]

Correct. I’ve literally just upgraded a SLES11 SP2 HAE cluster node to
SP3 using the Zypper method as per section 7.6.3 of the SLES11
Deployment Guide[1].

HTH.

[1]
https://www.suse.com/documentation/sles11/book_sle_deployment/data/sec_update_sle11sp2.html

Simon
SUSE Knowledge Partner



On 4.11.2013 19:41, jmozdzen wrote:
[color=blue]

from what I can tell, you need to take down the cluster to upgrade (no
rolling update) - but then it’s as simple as described in
https://www.suse.com/support/kb/doc.php?id=7012368, i.e. “Update by
using zypper”. HAE is a simple add-on product that can be upgraded[/color]

Have you tried a rolling update for the cluster? (And failed?) Or where did you
find out that a rolling update is not supported?

I’m just planning to update a 4-node cluster and I could really use some
real-world experience with upgrading. :wink:

My plan per node is:
rcopenais stop
chkconfig openais off
(now run zypper to update server and HA products, reboot as many times
as needed)
chkconfig openais on
rcopenais start

and continue with the next node… (spelled out a little more below)
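
With the checks I’d add between nodes - the crm_mon calls are just my own sanity checks, not anything from the docs:

crm_mon -1                   (before starting: all resources running, no failures)
rcopenais stop               (resources on this node fail over to the others)
chkconfig openais off        (so the stack doesn’t start during intermediate reboots)
(run the zypper migration / “zypper dup” steps here, reboot as many times as needed)
chkconfig openais on
rcopenais start
crm_mon -1                   (confirm the node has rejoined before moving to the next one)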

Hi paca,

[QUOTE=paca]Have you tried a rolling update for the cluster? (And failed?) Or where did you
find out that a rolling update is not supported?[/QUOTE]

we tried and had trouble getting our OCFS2 file systems to re-join. It might have been something other than the version upgrade, but in our case it was a scheduled maintenance window anyhow, so it was easiest to bring down the cluster.

Regards,
Jens

On 6.11.2013 13:04, jmozdzen wrote:
[color=blue]

we tried and had trouble getting our OCFS2 file systems to re-join. It might
have been something other than the version upgrade, but in our case
it was a scheduled maintenance window anyhow, so it was easiest to bring
down the cluster.

Regards,
Jens

[/color]
I have also had some problems in the past with major OCFS2 version upgrades.
But now I’m trying a rolling update again. So far everything is OK.
The first node is updated, OCFS2 is mounted, and I can migrate virtual machines
from/to the updated node.

The only problem is that when testing OCFS2 performance by copying an ISO file
within OCFS2, I got this in dmesg: “JBD: cp wants too many credits (811648 > 16384)”
This occurs only on OCFS2, not on the root filesystem (Btrfs), and both
devices share the same iSCSI adapter. I’m hoping this is just a
minor issue caused by the differing OCFS2 versions.

I’ll continue with the rolling update and post back if I hit any major issues.

On 11.11.2013 15:09, Petri Asikainen wrote:[color=blue]

On 6.11.2013 13:04, jmozdzen wrote:[/color]
[color=blue]

I’ll continue with the rolling update and post back if I hit any major
issues.
[/color]

Updating my SLES 11 SP2 HAE cluster to SP3 went well until updating the
last node, which was running as the CRM DC.
When stopping openais (rcopenais stop) on the DC node, OCFS2 timed out on the
other nodes for at least a minute or two; during that time the
OCFS2 filesystems could not be accessed.
I’m not sure if this is related to slow election of a new DC, but it caused
all Xen domU resources to be marked as failed,
and so a reboot/fencing of the whole cluster. :frowning:

So the questions I asked myself after this:
Is it possible to force Pacemaker to elect the DC on another node before shutting
down the current one?
Is there a configuration parameter that could help with this issue?

And maybe I should do something else for a living? :wink:

Hi paca,

[QUOTE=paca;17426]On 11.11.2013 15:09, Petri Asikainen wrote:[COLOR=blue]

On 6.11.2013 13:04, jmozdzen wrote:[/COLOR]
[COLOR=blue]

I’ll continue with the rolling update and post back if I hit any major
issues.
[/COLOR]

Updating my SLES 11 SP2 HAE cluster to SP3 went well until updating the
last node, which was running as the CRM DC.
When stopping openais (rcopenais stop) on the DC node, OCFS2 timed out on the
other nodes for at least a minute or two; during that time the
OCFS2 filesystems could not be accessed.[/QUOTE]

sounds familiar to me - it’s probably the DLM being unavailable during the election period.

[QUOTE=paca;17426]So the questions I asked myself after this:
Is it possible to force Pacemaker to elect the DC on another node before shutting
down the current one?
Is there a configuration parameter that could help with this issue?[/QUOTE]

DC election is considered an internal detail of the cluster stack and, as far as I can tell, there’s no way to force the DC away from a specific node other than by shutting down the stack on that node.
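
What you can do is check up front which node currently holds the DC role and schedule that node last, and put the node into standby before stopping openais so its resources are already drained when the election pause hits (standby won’t move the DC role itself). Roughly, using the crm shell and crm_mon from the HAE stack - the node name is of course a placeholder:

crm_mon -1 | grep "Current DC"   (see which node is the DC right now)
crm node standby <nodename>      (drain resources off the node before stopping openais)
crm node online <nodename>       (later, to bring it back after the upgrade)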

Hey, if it were easy, we wouldn’t get paid for doing it :wink:

Regards,
Jens

Simon Flood wrote:
[color=blue]

On 04/11/2013 17:41, jmozdzen wrote:
[color=green]

from what I can tell, you need to take down the cluster to upgrade (no
rolling update) - but then it’s as simple as described in
https://www.suse.com/support/kb/doc.php?id=7012368, i.e. “Update by
using zypper”. HAE is a simple add-on product that can be upgraded
online easily - so in steps 7/8, simply include the HAE migration
product as well and include the HAE repos in the “zypper dup” step.[/color]

Correct. I’ve literally just upgraded a SLES11 SP2 HAE cluster node to
SP3 using the Zypper method as per section 7.6.3 of the SLES11
Deployment Guide[1].

HTH.

[1]
[/color]
https://www.suse.com/documentation/sles11/book_sle_deployment/data/sec_update_sle11sp2.html

Simon, I saw that you said you did one node. Did you finish the rest of the
nodes? Did wagon pick up the HA add-on and upgrade it as well? How many
nodes are in the cluster?

Thanks

Hi rickb,

[QUOTE=Rickb;17687]
Simon, I saw that you said you did one node. Did you finish the rest of the
nodes? Did wagon pick up the HA add-on and upgrade it as well? How many
nodes are in the cluster?

Thanks[/QUOTE]

while I’m not Simon, I can report that wagon didn’t treat cluster nodes in any special way - you’ll have to run the steps on each node individually.

Regards,
Jens

jmozdzen wrote:
[color=blue]

Hi rickb,
Rickb;17687 Wrote:[color=green]

Simon, I saw that you said you did one node. Did you finish the rest of the
nodes? Did wagon pick up the HA add-on and upgrade it as well? How many
nodes are in the cluster?

Thanks[/color]

while I’m not Simon, I can report that wagon didn’t treat cluster nodes
in any special way - you’ll have to run the steps on each node
individually.

Regards,
Jens

[/color]

Thank you for all the help. I am working on the second node now and so far
everything has gone smoothly except for an issue with one VM. It was not coming up
properly, and I’m still not sure what happened. I finally moved it back and forth a
few times and then it finally completed its start. Not quite sure
yet what went wrong, but it seems to be coming together.

Again thanks

On 25/11/2013 21:38, Rickb wrote:
[color=blue]

Simon, I saw that you said you did one node. Did you finish the rest of the
nodes? Did wagon pick up the HA add-on and upgrade it as well? How many
nodes are in the cluster?[/color]

In our case we had installed two SLES11 SP3 servers to add to an
existing one to create an HAE cluster, but after having problems setting
things up we discovered the existing server was SLES11 SP2, so I upgraded it
to match.

HTH.

Simon
SUSE Knowledge Partner

