Best SLES-11 partitioning recommendations, LVM

Hello,

Original documentation is not available and I am trying to reproduce a SLES11/Xen installation on a new ProLiant BL460c Gen8 blade. The idea is to have a SLES11 Xen host for 8 VMs (838 GB hard drive). The existing partitioning scheme is 512 MB for /boot, 32 GB for dom0-VG (LV: /ROOT 12 GB, LV: /SWAP 4 GB, LV: /VAR 6 GB), and the rest for domU-VG (8x100 GB for the 8 VMs). So I wonder whether someone has come up with a better partitioning scheme, for example creating an additional 2 GB swap partition directly on the disk rather than only the SWAP LV in dom0-VG. Maybe this has already been discussed in the forum and I am just not aware of it - your recommendations are much appreciated.
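For reference, the existing scheme maps roughly to the following commands (a sketch only - the device name /dev/sda and the exact partition boundaries are assumptions, not taken from the original setup):

# hypothetical device names - verify with lsblk / fdisk -l first
parted -s /dev/sda mklabel msdos
parted -s /dev/sda mkpart primary 1MiB 513MiB        # /boot, ~512 MB
parted -s /dev/sda mkpart primary 513MiB 33281MiB    # PV for dom0-VG, ~32 GB
parted -s /dev/sda mkpart primary 33281MiB 100%      # PV for domU-VG, the rest

pvcreate /dev/sda2 /dev/sda3
vgcreate dom0-VG /dev/sda2
vgcreate domU-VG /dev/sda3

lvcreate -n ROOT -L 12G dom0-VG
lvcreate -n SWAP -L 4G dom0-VG
lvcreate -n VAR -L 6G dom0-VG

# one 100 GB LV per domU
for i in $(seq 1 8); do lvcreate -n vm${i}-disk0 -L 100G domU-VG; done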

thanks, kestas

Hi Kestas,

while I would never call this the “best” partitioning, we’re running our Dom0s with the following layout:

dom0-01:~ # df -h|egrep -e "system|boot"
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/system-root 1008M  430M  528M  45% /
/dev/md0                  92M   79M  7.6M  92% /boot
/dev/mapper/system-opt  1008M   34M  924M   4% /opt
/dev/mapper/system-tmp  1008M   55M  903M   6% /tmp
/dev/mapper/system-usr   3.0G  1.4G  1.5G  49% /usr
/dev/mapper/system-var   2.0G  1.2G  778M  60% /var
/dev/mapper/system-log   2.0G  121M  1.8G   7% /var/log
dom0-01:~ #

Our DomU disks are on SAN LUNs, so you’ll have to add room for that, too.

We’re using software RAID for the system disks, so /boot is on MD - and I’d make that a bit bigger than listed above; 250 MB would be a better number. OTOH, I see no need to give 18 GB to Dom0: as you can see from the numbers above, we’ve got plenty of space (running SLES11 SP2 plus HAE on those Dom0s).
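A minimal sketch of how such a mirrored /boot could be set up (assuming two system disks /dev/sda and /dev/sdb with matching first partitions - device names are an assumption, not our exact setup):

# create a RAID1 mirror for /boot from the first partition of each system disk
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mkfs.ext3 /dev/md0                         # file system for /boot
mdadm --detail --scan >> /etc/mdadm.conf   # persist the array definition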

And I strongly advise putting /var/log and /tmp on separate LVs - you wouldn’t want growing log files to disturb Dom0.
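As a sketch (assuming the volume group is called "system" as in the df output above; sizes are only examples):

lvcreate -n log -L 2G system
lvcreate -n tmp -L 1G system
mkfs.ext3 /dev/system/log
mkfs.ext3 /dev/system/tmp
# plus matching /etc/fstab entries, e.g.
#   /dev/system/log  /var/log  ext3  defaults  1 2
#   /dev/system/tmp  /tmp      ext3  defaults  1 2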

Just for reference: we’re running 15 to 20 DomUs on such a host, without running out of disk space.

Regards,
Jens

Hi Jens,

Danke vielmals - many thanks! Thank you for your time and response; it is much appreciated. It is good to know what other people are using in production environments - it saves us a lot of time on adjustments and tests. Again, many thanks!

Best regards,
Kestas

Somewhat pressed for time, but I’d like to add to Jens’s reply (as there are 101 right ways to go about this, depending on which hardware and tools you want to use)… the way we set it up is as follows:

For the SLES OS itself:

(/dev/sda)
/boot 500 MB ext2 /dev/sda1 primary partition (no LVM)
/ 50 GB ext3 /dev/sda2 primary partition (no LVM)
swap 3 GB swap /dev/sda3 primary partition (no LVM)
/xentmpl 80 GB (or as much room as is left on the primary server disk) ext3 /dev/sda4 primary partition (no LVM)

The storage for the domU/VM OS system disks is then held on LVM volumes, connected to the Xen hosts via iSCSI - for example (see the sketch after this list):

/dev/VGOS01 is 800 GB in size (from one dedicated LUN/volume offered on the iSCSI SAN) and holds
/dev/VGOS01/WindowsVM0-disk0 (40 GB LVM volume; no partition, Linux formatting or mount - it’s handed to domU WindowsVM0 as a raw physical disk)
/dev/VGOS01/WindowsVM1-disk0 (40 GB LVM volume; no partition, Linux formatting or mount - it’s handed to domU WindowsVM1 as a raw physical disk)
…etc.
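A minimal sketch of what that looks like in practice (VG/LV names as above; the Xen config line uses the standard "phy:" disk syntax and is an illustration, not a copy of our configuration files):

# carve a 40 GB LV out of the iSCSI-backed VG for the domU's OS disk
lvcreate -n WindowsVM0-disk0 -L 40G VGOS01

# in the domU's Xen configuration, hand the LV over as a raw physical disk
disk = [ 'phy:/dev/VGOS01/WindowsVM0-disk0,hda,w' ]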

The data disks are then also connected to the Xen hosts via iSCSI, with no LVM configuration on them, and again without the Xen host mounting, formatting or locking the disk - the LUN/volume is simply passed on as a physical device to the intended domU.
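For example, such a pass-through in the domU configuration could look like this (the by-path device name is hypothetical and will differ on your system):

# pass the raw iSCSI LUN straight to the guest, untouched by Dom0
disk = [ 'phy:/dev/disk/by-path/ip-192.168.1.10:3260-iscsi-iqn.2012-01.example:data0-lun-0,hdb,w' ]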

When clustering over multiple Xen hosts, we make use of Xen locking on shared NFS exports that all nodes of the Xen cluster have access to. This is important to make sure each domU can only be started once (or bad things will happen); it is mainly a concern when not using cluster-aware software (e.g. plain LVM vs. cLVM).
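On SLES 11, this kind of locking is typically switched on via /etc/xen/xend-config.sxp; a sketch, assuming the shared NFS export is mounted at /var/lib/xen/images/vm_locks (the path is an assumption):

# /etc/xen/xend-config.sxp - enable domain locking on shared storage
(xend-domain-lock yes)
(xend-domain-lock-path /var/lib/xen/images/vm_locks)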

I’ve been doing this for the last three years with very good results - good performance and stability.

An added bonus of having the OS disks for the domUs (not the data disks, though) within the Xen host’s LVM volume group is that you can create snapshots of the volumes and dump them, or do things like rolling back a domU’s OS disk to a snapshot taken before an update or other change.
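As a sketch of that workflow (LV and snapshot names are hypothetical; the merge-based rollback needs a reasonably recent lvm2 and the domU shut down):

# take a snapshot of the domU's OS disk before an update
lvcreate -s -n WindowsVM0-disk0-snap -L 5G /dev/VGOS01/WindowsVM0-disk0

# dump the snapshot to a backup file
dd if=/dev/VGOS01/WindowsVM0-disk0-snap of=/backup/WindowsVM0-disk0.img bs=1M

# roll back: merge the snapshot into the origin (with the domU shut down)
lvconvert --merge /dev/VGOS01/WindowsVM0-disk0-snap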

This does require some insight into what can go wrong if it is not done correctly… so test things well, and I’d recommend keeping that in mind when choosing a setup.

As an added PS: if you have a SAN in the mix and the OS disks are on a separate disk from your data, or grouped within an LVM VG as I described above, you can use an extra Linux host to create snapshots of those disks at the SAN level and use them to dump the OS disks as an extra DR measure. Doing it at the SAN level gives good speed and the option to create consistent (time-wise, and in the domUs’ relation to each other) snapshots/dumps of the OS disks.
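For example, once the SAN-level snapshot LUN has been presented to that extra Linux host, the dump itself can be as simple as (device name and target path are hypothetical):

# the SAN snapshot of the OS-disk LUN shows up as an extra disk on the backup host,
# here assumed to be /dev/sdx
dd if=/dev/sdx of=/dr-dumps/vgos01-$(date +%F).img bs=1M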

Cheers,
Willem

Willem,

Great, thank you for your valuable input and time - a lot to consider. Although environments are different, it is good to know how other people are setting up and running their complex environments.
Many thanks,
Kestas