cluster with two nodes - cLVM necessary?

Hi,

I’d like to create a two-node cluster with SLES 11 HAE. My resources
are virtual machines created with KVM. I’d like to run the VMs on plain
partitions without any filesystem; I’ve read that this is quicker than
running the VMs on a partition with a filesystem. The partition I want
to use resides on a SAN connected via FC. The partition the VM is
installed on should be an LV, because that is easily resizable. Do I
need cLVM for that scenario?

Thanks for any advice.

Bernd



berndgsflinux;2153644 Wrote:
> I’d like to create a two-node cluster with SLES 11 HAE. My resources
> are virtual machines created with KVM. I’d like to run the VMs on
> plain partitions without any filesystem, on a FC-attached SAN, with
> one LV per VM. Do I need cLVM for that scenario?

Hi Bernd,

Going with physical storage for your VMs (vs. file-based) is good
practice performance-wise.

What is very important when you are clustering virtual servers is that
you have some type of locking/blocking mechanism in place, so you can
be sure one VM cannot get started multiple times on different hosts. As
the VM’s OS is not designed for that… the result will usually not be
pretty (disk corruption on the VM’s attached disks, as multiple active
instances of the VM write to the same storage).

To make sure this does not happen you will need either cLVM in place,
something like STONITH (shoot the other node in the head), or some
other locking mechanism.

So yes, I’d say use cLVM, and make sure it works in your setup,
verifying that locking between the hosts works as it should.
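
Just to sketch what that could look like on the HAE side (untested
here, and the resource, VG and device names below are only examples):
clvmd needs the DLM, and both have to run as clones on every node,
e.g. from the crm shell:

    # /etc/lvm/lvm.conf on every node: switch LVM to cluster-wide locking
    #   locking_type = 3

    # DLM + clvmd as a cloned group on all nodes (resource names are
    # just examples)
    crm configure primitive dlm ocf:pacemaker:controld \
        op monitor interval="60" timeout="60"
    crm configure primitive clvm ocf:lvm2:clvmd \
        op monitor interval="60" timeout="60"
    crm configure group base-group dlm clvm
    crm configure clone base-clone base-group meta interleave="true"

    # mark the shared VG as clustered so clvmd arbitrates access to it
    # (device path and VG name are example placeholders)
    vgcreate --clustered y vg_vms /dev/mapper/mpatha

A STONITH resource still belongs in that picture as well, as mentioned
above.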

I’ve been using Xen’s own locking mechanism for a while, together with
plain old LVM (which means I don’t need the HAE pack to get a
redundant/protected setup), so other than some limited testing that
I’ve done I can’t give you more details on what to look out for when
using cLVM.
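
For reference, the Xen locking I mentioned is just a couple of lines in
/etc/xen/xend-config.sxp - if I remember the option names correctly, it
looks roughly like this (the lock directory is only an example path and
has to live on storage that every host can see, e.g. an NFS or OCFS2
mount):

    # enable xend's domain locking so a VM cannot be started twice
    (xend-domain-lock yes)
    # lock files go here - example path, must be on shared storage
    # visible to all cluster nodes
    (xend-domain-lock-path /var/lib/xen/images/vm_locks)

    # restart xend afterwards to pick up the change
    rcxend restart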

Others might chime in for that :)

-Willem



Hi Bernd,

is that a single partition on the SAN for all VMs or a LUN per VM?

We decided on the latter (using NPIV to dynamically attach the LUNs to
each cluster node), using Xen’s locking feature to avoid starting VMs
twice. Using independent, dynamically (re-)attachable LUNs makes
resizing even easier; using NPIV ups the requirements towards your
infrastructure (min. 4 Gbps FC, an FC switch between the cluster nodes
and the SAN storage, …).

When you only have a single LUN for all VM LVs I’d opt for cLVM to be
on the safe side (oh, these acronyms :D)… I haven’t played with it
yet, though. Does it support dynamic LV resize across active nodes? I’d
put that through a thorough test before going into production.
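
If it behaves like plain LVM in that respect (which I’d hope, but
again: test it), growing a VM’s disk would be the usual one-liner, with
clvmd keeping the metadata in sync between the nodes - the VG/LV names
here are only examples:

    # grow the LV that backs the VM by 10 GiB (VG/LV names are examples)
    lvextend -L +10G /dev/vg_vms/vm_web01

    # the guest still has to notice the larger disk, e.g. by rescanning
    # or repartitioning inside the VM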

Regards,
Jens



> Hi Bernd,
>
> is that a single partition on the SAN for all VMs or a LUN per VM?

I’m not familiar with SANs, this is the first time I’m using one. What
is the difference?
My current idea is to create a “partition/disk” in my SAN, use it as a
PV on my host, create a VG on top of it and finally several LVs (one
per VM).
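
In commands, what I have in mind is roughly this (device path and names
are just examples, and without cLVM yet):

    # the SAN "partition/disk" shows up as a (multipath) device on the host
    pvcreate /dev/mapper/mpatha
    vgcreate vg_vms /dev/mapper/mpatha

    # one LV per VM, used directly as the VM's disk (no filesystem on it)
    lvcreate -L 20G -n vm01 vg_vms
    lvcreate -L 20G -n vm02 vg_vms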

> We decided on the latter (using NPIV to dynamically attach the LUNs
> to each cluster node), using Xen’s locking feature to avoid starting
> VMs twice. Using independent, dynamically (re-)attachable LUNs makes
> resizing even easier; using NPIV ups the requirements towards your
> infrastructure (min. 4 Gbps FC, an FC switch between the cluster
> nodes and the SAN storage, …).

What is NPIV? Is this a Linux feature? My infrastructure is two hosts,
8 Gb FC, and a SAN with two controllers, each with two FC connectors.
So I don’t need an FC switch.

> When you only have a single LUN for all VM LVs I’d opt for cLVM to be
> on the safe side (oh, these acronyms :D)… I haven’t played with it
> yet, though. Does it support dynamic LV resize across active nodes?
> I’d put that through a thorough test before going into production.

I hope it supports dynamic resize, because this is one of the
advantages of LVM. I thought cLVM was just an extension and all the LVM
tools stay available. I will test it before going into production.

Bernd




Hi Bernd,

>> is that a single partition on the SAN for all VMs or a LUN per VM?
> […] What is the difference?

A “LUN” is, basically, the virtual disk presented to the initiator. In
FC, you always have the “client” (called the initiator) with a set of
virtual disks defined for it. It’s like having a SCSI adapter with real
disks…

You can typically (re-)configure LUNs without changing the other LUNs,
like swapping individual disks behind a SCSI controller. Thus, when you
have a LUN per VM, changing disk requirements for that VM has less
impact on the other VMs. (OK, this is only true if you can both change
the LUN dynamically and have an initiator that will recognize that
change without a reboot!)
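
On Linux the latter usually works without a reboot; picking up a new or
resized LUN boils down to something like this (the host and target
numbers are examples, and there is also a rescan-scsi-bus.sh helper
script that does the same):

    # scan an FC HBA for newly mapped LUNs (host number differs per system)
    echo "- - -" > /sys/class/scsi_host/host5/scan

    # make the kernel re-read the size of an already known, resized LUN
    echo 1 > /sys/class/scsi_device/5:0:0:1/device/rescan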

> What is NPIV?

N_Port ID Virtualization is a feature of FC that’s in the standards for
4 Gbps FC upwards. You can think of it as having virtual FC HBAs within
your initiator, on top of the physical HBA(s). From the SAN’s point of
view, there is not (much) difference to a physical HBA - from the
admin’s point of view, you gain flexibility. NPIV HBAs have their own
WWPN…

What we did is:

  • define individual groups per VM in the SAN server, and grant a single
    (NPIV) HBA access to each group
  • before starting the VM, the host OS creates a new NPIV adapter,
    corresponding to the VM.
  • when the VM is stopped, so is the NPIV HBA

That way, the VM’s resources are only available on the server the VM
runs on and only while the VM is active. When virtualization technology
gets more mature, we hope to have a way to create the NPIV adapter
inside the VM, so that the base OS (Dom0 in Xen terms) will not even
see the VMs’ disks.

We can change the SAN definitions while the corresponding VM is down -
on VM start, it will automatically pick up the changed resources.
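
Stripped of our wrapper scripting, creating and removing such a virtual
HBA by hand looks roughly like this (the host number and the WWPN/WWNN
pair are made-up examples):

    # create a virtual (NPIV) port on the physical FC HBA "host5";
    # the value is "<wwpn>:<wwnn>" - both numbers are examples only
    echo "210000e08b80c001:200000e08b80c001" > /sys/class/fc_host/host5/vport_create

    # the new virtual HBA appears as an additional fc_host/scsi_host and
    # its LUNs show up as ordinary SCSI disks; after the VM is stopped:
    echo "210000e08b80c001:200000e08b80c001" > /sys/class/fc_host/host5/vport_delete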

Regards,
Jens

PS: Sorry for the late reply, I have been out of the office
unexpectedly.



> N_Port ID Virtualization is a feature of FC that’s in the standards
> for 4 Gbps FC upwards. You can think of it as having virtual FC HBAs
> within your initiator, on top of the physical HBA(s). From the SAN’s
> point of view, there is not (much) difference to a physical HBA -
> from the admin’s point of view, you gain flexibility. NPIV HBAs have
> their own WWPN…

Do I need a switch to implement NPIV? To me it seems like it.
Does my VM need to support NPIV (I’d like to use KVM)?

Bernd



Hi Bernd,

there is some special FC loop mode that ought to support NPIV, but I’m
not sure what it is exactly - since our FC SAN server didn’t support
that anyhow, I went for the switch :-/

Whether your VM needs to support NPIV or not depends… for instance,
Xen has special support to dynamically create NPIV-based devices on
Dom0 upon start of the VM, so the VM may access the device like any
other Dom0 block device. I could do without that, by creating the port
manually before starting the VM and destroying it after having stopped
the VM. OTOH, there’s no fully functional and similarly automated
support to create the FC HBA inside the VM, passing through the
information to the Dom0 driver.

I have no experience with KVM, but found
https://www.redhat.com/archives/libvirt-users/2011-July/msg00026.html
and others via Internet search.
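
From that thread it looks like libvirt can do much the same through its
node device handling - roughly along these lines (untested on my side):
first a small XML file, say vhba.xml, describing a virtual HBA on top
of the physical FC HBA; the parent adapter name and the WWNN/WWPN
values are placeholders:

    <device>
      <parent>scsi_host5</parent>
      <capability type='scsi_host'>
        <capability type='fc_host'>
          <!-- example WWNN/WWPN only -->
          <wwnn>20000000c9831b4b</wwnn>
          <wwpn>10000000c9831b4b</wwpn>
        </capability>
      </capability>
    </device>

Then the vHBA is created and removed with virsh (libvirt assigns it a
scsi_hostN name; scsi_host6 below is just an example):

    virsh nodedev-create vhba.xml
    virsh nodedev-list --cap scsi_host
    virsh nodedev-destroy scsi_host6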

Regards
Jens

