Issue with LVM on SLES 12

I am having an issue with LVM on SLES 12. My environment is SLES 12 running on System z but I think that this could be affecting all SLES 12 environments.

My issue is that a Logical Volume is not automatically mounted after a reboot. This is a test system, so I used the CLI to create the physical volume, the volume group, and the Logical Volume, and then created an ext3 filesystem on it. I can mount the LV, but after a reboot the LV is marked as ‘NOT available’ (per the lvdisplay command). If I issue ‘vgchange -ay’ the LV status changes to ‘available’, but after the next reboot it is ‘NOT available’ again. This becomes a major problem if I add the LV to /etc/fstab: the system boots into recovery mode because the LV is not available at mount time.
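
Roughly, this is the sequence I used (paraphrased from memory; the mount point here is just an example, and the exact sizes differ):

  pvcreate /dev/dasdd1
  vgcreate sysg1 /dev/dasdd1
  lvcreate -l 100%FREE -n system-lv1 sysg1
  mkfs.ext3 /dev/sysg1/system-lv1
  mount /dev/sysg1/system-lv1 /mnt/lvtest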

I’ve tried using the systemctl commands to start and enable lvm2-monitor.service and lvm2-activation, but nothing seems to enable LVM at boot time (SLES 11 required boot.lvm to be enabled, but that doesn’t exist in SLES 12). I also tried YaST | System | Services Manager and got an error when I clicked OK to write out the configuration change.
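
For what it’s worth, the sort of thing I tried was (exact invocations from memory):

  systemctl enable lvm2-monitor.service
  systemctl start lvm2-monitor.service
  systemctl status lvm2-activation.service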

Chapter 4.6 in the SLES 12 Storage Administration Guide apparently was never updated for SLES 12, as it refers to using/modifying the files /etc/sysconfig/lvm and /etc/rc.d/boot.lvm; neither of these files exists in SLES 12. I searched for files named ‘lvm’ and ‘boot.lvm’ with the find command and listed the files installed with the package lvm2-2.02.98-48.8.s390x; they don’t exist.
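
Specifically, I was searching with something along these lines:

  find /etc -name lvm
  find / -name boot.lvm
  rpm -ql lvm2-2.02.98-48.8.s390x

and neither file shows up anywhere.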

Google searches haven’t uncovered anything helpful. I believe that I checked the zipl parameters and the grub2 parameters for this but I could be mistaken.

Any ideas as to what I need to do to get LVM to start at boot time?

Harley

Hi Harley,

I’ve just tried to dupe this on a SLES 12 64-bit VM (running on VMware)… and I can’t reproduce it.

I’ve created a new VG on a new disk (which I had first configured with a partition of type 8e), added an LVM volume to it, formatted that as ext3, and added a mount point for it in /etc/fstab.

I did need to run “vgchange -ay” after creating the VG, as I was getting a message that the disk label could not be read (I didn’t take note of the exact error), but after doing so I could add the LVM volume and format and mount it.
It also auto-mounted when rebooting.
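
In rough outline, the steps on my side were (my second VMware disk shows up as /dev/sdb; the VG/LV names and the mount point are just what I picked for the test):

  fdisk /dev/sdb                      # create /dev/sdb1, partition type 8e (Linux LVM)
  pvcreate /dev/sdb1
  vgcreate testvg /dev/sdb1
  vgchange -ay testvg                 # needed after the disk label message
  lvcreate -l 100%FREE -n testlv testvg
  mkfs.ext3 /dev/testvg/testlv

  # /etc/fstab entry
  /dev/testvg/testlv   /data   ext3   defaults   0 2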

How exactly is your LVM configured? Is it on a second disk, or on the same disk holding the boot/root partition?

Cheers,
Willem

Hi Willem,

My SLES 12 system is installed on one disk (boot/root partition) and the LV is installed on a second disk. The LV was set up to use all of the available space on the second disk.

Output from pvdisplay:

  --- Physical volume ---
  PV Name               /dev/dasdd1
  VG Name               sysg1
  PV Size               6.88 GiB / not usable 2.41 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              1760
  Free PE               0
  Allocated PE          1760
  PV UUID               jAzTMb-qLKd-LObN-YRvc-9gfD-IzGC-mW3I3Q

Output from vgdisplay:

  --- Volume group ---
  VG Name               sysg1
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  2
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               6.88 GiB
  PE Size               4.00 MiB
  Total PE              1760
  Alloc PE / Size       1760 / 6.88 GiB
  Free  PE / Size       0 / 0
  VG UUID               9vMHoO-qdeF-WWQM-3vvg-NMCa-wtaq-M2ZjGR

Output from lvdisplay:

  --- Logical volume ---
  LV Path                /dev/sysg1/system-lv1
  LV Name                system-lv1
  VG Name                sysg1
  LV UUID                9ujgs0-DPEQ-LUhz-GMt8-wr1S-2Gj7-ghpeOs
  LV Write Access        read/write
  LV Creation host, time aclnx-cld1-lnxadmin, 2014-12-15 12:19:20 -0600
  LV Status              NOT available
  LV Size                6.88 GiB
  Current LE             1760
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto

My thinking is that LVM is not enabled at boot time, which is why the LV Status ends up as ‘NOT available’ when the system comes up. I can change the LV Status with ‘vgchange -ay’ and then mount the LV just fine; it simply isn’t available after a reboot unless I run vgchange and mount it again.

I would like to open an SR on this but don’t think I’m entitled to, as I am testing on a CEC that isn’t part of my SUSE license. It doesn’t help that SUSE didn’t update the LVM documentation for SLES 12.

Thank you for testing this on another platform. Maybe the issue only occurs on System z as the disk is not on a SAN.

Harley

Possible… but it could also be that “/dev/dasdd1” is not visible when LVM is initialized at boot, resulting in the VG/LV config not getting activated.

A workaround for the moment could be to add a “vgscan && vgchange -ay” statement to /etc/init.d/after.local (which still works with SLES 12). That should at least bring the LV into an active state, and you can add mount and service start statements to that for anything that will be running on that LV; see the sketch below.
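
Something along these lines (sketch only; adjust the mount point and add whatever service starts you need):

  #!/bin/sh
  # /etc/init.d/after.local -- still executed after the normal boot sequence on SLES 12
  vgscan && vgchange -ay
  mount /dev/sysg1/system-lv1 /mnt/lvtest    # example mount point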

I’ll ask my SUSE contacts if this might be a known issue or if they have other suggestions.

Cheers,
Willem

Willem,

I received help from a listserver after I started this thread. I encountered the problem as soon as I installed SLES 12. I hadn’t installed any maintenance, as the LPAR running z/VM (Linux runs as a guest under z/VM) doesn’t have access to the internet.

The listserver user was able to recreate my issue but said it went away once he installed all known maintenance. So I manually downloaded all of the available RPMs to my PC and uploaded them to the SLES 12 system, figured out which packages needed to be upgraded, updated them with ‘rpm -Uvh package-name’, and rebooted.
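
In practice that was just repeated runs of something like this (the package names here are placeholders for whichever RPMs were newer than what I had installed):

  rpm -Uvh lvm2-2.02.98-*.s390x.rpm
  rpm -Uvh device-mapper-*.s390x.rpm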

I issued ‘vgchange -ay’, mounted and accessed the LV. I then rebooted and checked the LV Status with ‘lvdisplay’. It once again showed the LV as ‘NOT available’.

The listserver user then suggested that I run the mkinitrd command and see if that resolved the problem. It did, so they suggested I open an SR with SUSE to report it as a bug (I called SUSE and found that I am eligible for support after all).
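
For anyone following along, the fix step itself was simply rebuilding the initrd and rebooting:

  mkinitrd
  reboot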

The technician for the SR thinks that “LVM is scanning for volumes prior to the dasd becoming available, but isn’t rescanned later”. The volume does come online at boot time, as I can see messages on the console while it is being discovered.

My issue has been resolved. Hopefully this thread will help others if they encounter the problem. It is a pain to apply maintenance to a server manually once you’re used to using YOU or ‘zypper update’. Now that I have nailed down the process, I won’t make the mistake of NOT installing known maintenance when I install a new release.

Harley

Hello;

Even though I have all the maintenance updates installed, my LVM volumes were still not being activated after a reboot.

I edited the /etc/lvm/lvm.conf file and changed “use_lvmetad = 0” to “use_lvmetad = 1” (see the excerpt below).
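
That is, in /etc/lvm/lvm.conf (excerpt; use_lvmetad sits in the global section):

  global {
      ...
      # was: use_lvmetad = 0
      use_lvmetad = 1
  }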

This one change fixed it; my LVM volumes are now activated during boot/reboot.