Testing the virtualization waters. I want to set up a SLES11 SP1 Xen server with a SLES11 SP1 virtual guest that will house a number of databases and other services; other guests will be installed at a later date. Any advice on whether exporting LVs from Dom0 to the virtual guest is the best way to go for performance and stability, or whether I should configure LVM on the guest itself? Any gotchas with either of these scenarios?
In the old environment, the server is a stand-alone SLES11 SP1 machine with these databases contained in LVs and no virtualization. The disk is a fibre-channel attached SAN, and this SAN will be used in the new setup as well.
I generally deploy servers with local storage. I create LVs on Dom0 but do not format or mount them; I export them to my DomU, where each one appears as a separate disk. But your situation is a bit different.
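As a rough sketch of that setup (the VG, LV, and device names below are just placeholders), the LV is created on Dom0 without a filesystem and handed to the guest as a block device in its Xen configuration:

    # On Dom0: create an LV for the guest, but do not format or mount it
    lvcreate -L 50G -n guest_db vg_dom0

    # In the DomU configuration file: export it as a block device;
    # inside the guest it appears as its own disk, e.g. /dev/xvdb
    disk = [ 'phy:/dev/vg_dom0/guest_db,xvdb,w' ]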
First of all, I would look at SLES11-SP2 for my Dom0 and use SLES11-SP1 in my DomU only if my applications are not fully supported on SLES11-SP2.
Block devices provide better performance than file-backed storage.
Since you have a fibre-channel attached SAN, you may want to consider
NPIV:
We’re running our DomUs via NPIV-attached LUNs (each DomU has its individual LUN(s)) with LVM configured inside the DomUs.
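For reference, on an NPIV-capable HBA the virtual port is typically created through the FC transport's sysfs interface (the host number and the WWPN/WWNN values below are placeholders for your own); the LUN zoned to that virtual WWPN then appears as an ordinary SCSI disk that can be passed to the DomU:

    # Create a virtual FC port (NPIV) on the physical HBA host5; format is "WWPN:WWNN"
    echo "2101001b32a9d5e4:2001001b32a9d5e4" > /sys/class/fc_host/host5/vport_create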
Two things to look out for:
To avoid confusion with VGs in Dom0, I recommend setting a proper filter in Dom0's lvm.conf so that the DomU LUNs are not handled by Dom0's LVM.
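A minimal sketch of such a filter, assuming Dom0's own VG lives on /dev/sda (adjust the patterns to your actual device naming); everything else, including the DomU LUNs, gets rejected:

    # /etc/lvm/lvm.conf on Dom0: scan only Dom0's own disk, ignore all other devices
    filter = [ "a|^/dev/sda.*|", "r|.*|" ]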
A nasty SLES11 SP1 bug exists in the handling of dynamically attached disks (in /lib/udev/activate_vg) that may lead to a situation where you can no longer start new DomUs and a Dom0 reboot is required. I don't know if SP2 has the fix (it was reported to and accepted by the devs months ago); I recommend making sure that the lvm vgchange command is only executed if the DomU's VG name is different from all VG names on Dom0.
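To illustrate that check (just a sketch; the variable names are invented and the real /lib/udev/activate_vg looks different), the idea is to skip the activation whenever the guest's VG name collides with one of Dom0's own VGs:

    # Hypothetical guard around the vgchange call; DOM0_VGS and DOMU_VG are made-up names
    DOM0_VGS="vg_dom0 vg_system"        # VGs that belong to Dom0 itself
    if ! echo " $DOM0_VGS " | grep -qw "$DOMU_VG"; then
        vgchange -a y "$DOMU_VG"
    fi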
Except for those two points I’d opt for LVM inside DomU for flexibility’s sake.
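For completeness, LVM inside the DomU is then just the standard procedure on whatever disk the guest sees (device, VG, and LV names below are placeholders):

    # Inside the DomU, on the exported disk (assumed here to be /dev/xvdb)
    pvcreate /dev/xvdb
    vgcreate vg_guest /dev/xvdb
    lvcreate -L 20G -n db01 vg_guest
    mkfs.ext3 /dev/vg_guest/db01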