Need help migrating SLES 11 SP3 to a new SAN subsystem

I have a SLES 11 SP3 on System z installation (it runs as a virtual server under z/VM 6.3) that was installed onto a LUN on an IBM V7000. Apparently, the SAN box has had some major issues, and the SAN team has migrated all of the LUNs to a replacement box. They have provided me with the new WWPN and other info.

I copied the new SAN info into my VM Directory and can start the boot process. I believe that SLES 11 stores this info in the bootloader, and possibly elsewhere, but I can’t seem to find any reference on how to modify the system so that it will totally boot from the new SAN.

This is a test (sandbox) installation that was put together to verify that we could actually install Linux to SAN on the mainframe. All of my other Linux instances use traditional mainframe (ECKD) DASD.

What do I need to modify in the installation (for documentation purposes) to get it to boot?

Regards,
Harley Linker Jr.

Hi Harley,

[QUOTE=x0500hl;27968]I believe that SLES 11 stores this info in the bootloader, and possibly elsewhere, but I can’t seem to find any reference on how to modify the system so that it will totally boot from the new SAN. […] What do I need to modify in the installation (for documentation purposes) to get it to boot?[/QUOTE]

Since no one else picked this up, you’ll have to cope with me :wink: I know SANs, I know SLES 11 SP3, and I’ve used that combination for years… but I have no practical System z experience, only x86. So much for the disclaimer.

I would assume that your virtualization layer takes care of mapping SAN LUNs to your VM, unless your SLES is configured to access the SAN resources directly (which, at least when virtualizing on x86, is rather unlikely).
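
If your SLES does drive the FCP attachment itself (which, as far as I understand, is the usual setup with zfcp on System z - I’m going by the documentation here, not personal experience), then something like the following should show what the guest currently sees. This assumes the s390-tools, lsscsi and multipath-tools packages are installed:

    # FCP devices, remote ports and LUNs configured in the guest (s390-tools)
    lszfcp -D
    # SCSI devices as the kernel sees them
    lsscsi
    # multipath view, if multipathing is in use
    multipath -ll

If that output still shows only the old WWPN, the new SAN info has not made it into the Linux side of the configuration yet.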

You have not said that you are actually seeing a problem - have you already tested this (“I copied the new SAN info into my VM Directory and can start the boot process.”) and had it fail? If so, the reason may lie in the file system configuration: on x86, there are several ways to mount a file system, with per device node, per UUID and per label being the most commonly used. Maybe the references used in your SLES VM need to be updated to point at the new disk? That depends on how your virtualization layer reacts to the changed SAN configuration.
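
For illustration only (x86-style device names; the UUID and label here are made up), the difference shows up in /etc/fstab like this:

    # by device node - breaks as soon as the device name changes
    /dev/sdb1                                   /data  ext3  defaults  0 2
    # by UUID - survives a device rename as long as the file system itself is intact
    UUID=3e6be9de-8139-11d1-9106-a43f08d823a6   /data  ext3  defaults  0 2
    # by label - same idea, using the file system label
    LABEL=DATA                                  /data  ext3  defaults  0 2

The same choice exists for the root file system and the resume device on the kernel command line, which is where the boot loader configuration comes into play.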

And although you wrote that “the SAN team has migrated all of the LUNs to a replacement box”, depending on the nature of the error you see, maybe the migration didn’t work out as expected and left your LUN damaged? That’s not something I’d bet on, but if everything else looks right and things still fail, I would check that as well.
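
If it comes to that, a read-only sanity check from a rescue system would be something along these lines (the device names are placeholders for whatever your LUN shows up as):

    # is the partition table still readable?
    parted -s /dev/mapper/<your-lun> print
    # does the partition still carry a recognisable file system signature?
    blkid /dev/mapper/<your-lun-part2>
    # read-only file system check, no repairs attempted
    fsck -n /dev/mapper/<your-lun-part2>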

Regards,
Jens

Jens,

I was able to resolve the problem with help from a list server, the Linux on 390 Port list (LINUX-390@VM.MARIST.EDU). The issue was that the resume= parameter in zipl.conf was still pointing at the old LUN. I couldn’t get at zipl.conf by booting from the actual LUN (the boot process dies in the initrd stage), nor by performing a recovery from the installation media (I tried a recovery install, but that process did not fully update zipl.conf).
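
For anyone running into the same thing: a SCSI-IPL section in zipl.conf looks roughly like this (the by-id values below are made up), and it was the resume= reference that still pointed at the old box:

    [SLES11]
        image = /boot/image
        ramdisk = /boot/initrd
        target = /boot/zipl
        parameters = "root=/dev/disk/by-id/scsi-36005076deadbeef000000000000000a1-part2 resume=/dev/disk/by-id/scsi-36005076deadbeef000000000000000a1-part1 TERM=dumb"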

I basically used the following procedure (provided by a user on the aforementioned list server); the whole thing is pulled together as a single shell sketch after the list:

  1. Use dasd_configure to bring your root file system online, along with any other DASD devices you have. (Everything after this, except the device name, would be the same for a root file system on SCSI disk.) <== I did not do this, as my boot disk is not ECKD but SAN. I created the /mnt directory and issued 'mount /dev/dm-2 /mnt'; zipl.conf was then found at /mnt/etc/zipl.conf.
  2. Mount your root file system on /mnt: mount /dev/dasd?# /mnt
  3. Bind-mount /dev, /proc, and /sys into /mnt:
    for fs in dev proc sys; do mount --bind /$fs /mnt/$fs; done
  4. Enter the chroot: chroot /mnt
  5. If you have /usr on a separate file system, then "mount /usr". (/usr was on the same file system, so I didn't mount it.)
  6. Edit /etc/zipl.conf. You'll most likely want to use ed or sed. First, do a dry run to make sure:
    cd /etc/
    sed -e 's/resume=/noresume /' zipl.conf
    Make sure things look right. You'll see a "dangling" /dev/disk/by-* entry, but that's OK for now.
    sed -i.backup -e 's/resume=/noresume /' zipl.conf
  7. Re-run zipl: zipl
  8. Unmount /usr (if you mounted it): umount /usr
  9. Exit the chroot: exit
  10. Unmount the various bind mounts:
    for fs in dev proc sys; do umount /mnt/$fs; done
  11. Unmount your root file system: umount /mnt
  12. Try rebooting.
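
Pulled together as one sketch (essentially what I typed at the rescue prompt; /dev/dm-2 was the multipath node of my root LUN, so substitute your own device, and the /usr lines only apply if /usr is a separate file system):

    # from the rescue environment: mount the root file system of the installed system
    mkdir -p /mnt
    mount /dev/dm-2 /mnt

    # make the chroot usable and enter it (chroot starts a shell; the rest runs inside it)
    for fs in dev proc sys; do mount --bind /$fs /mnt/$fs; done
    chroot /mnt
    # mount /usr            # only needed if /usr is a separate file system

    # dry run first, then edit in place (keeps a zipl.conf.backup copy)
    cd /etc
    sed -e 's/resume=/noresume /' zipl.conf
    sed -i.backup -e 's/resume=/noresume /' zipl.conf

    # rewrite the boot record and back out again
    zipl
    # umount /usr           # only if mounted above
    exit
    for fs in dev proc sys; do umount /mnt/$fs; done
    umount /mnt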

Once I was able to boot the system, I edited zipl.conf to replace all of the old hardware identifiers with the one I found in /dev/mapper, and changed every 'noresume ' back to 'resume='. I then ran zipl and rebooted.
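
In case it helps anyone else, finding the new identifier and switching back was essentially this (the identifiers will obviously differ on your system):

    # find the new LUN's identifier as device-mapper/multipath presents it
    ls -l /dev/mapper/
    ls -l /dev/disk/by-id/ | grep dm-

    # point root=/resume= in /etc/zipl.conf at the new identifier and restore 'resume=',
    # then rewrite the boot record and reboot
    vi /etc/zipl.conf
    zipl
    reboot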

Hi Harley,

Great to hear that you got it fixed, and many thanks from my side for taking the time to document your solution here!

Best regards,
Jens