[QUOTE=x0500hl;27968]I have a SLES 11 SP3 on System z installation (runs as a virtual server under z/VM 6.3) that was installed onto a LUN on an IBM V7000. Apparently, the SAN box has had some major issues and the SAN team has migrated all of the LUNs to a replacement box. They have provided me the new WWPN and other info.
I copied the new SAN info into my VM Directory and can start the boot process. I believe that SLES 11 stores this info in the bootloader, and possibly elsewhere, but I can’t seem to find any reference on how to modify the system so that it will totally boot from the new SAN.
This is a test (sandbox) installation that was put together to verify that we could actually install Linux to SAN on the mainframe. All of my other Linux instances use traditional mainframe (ECKD) DASD.
What do I need to do to modify (for documentation purposes) the installation to get it to boot?[/QUOTE]
Since no one else picked this up, you’ll have to cope with me. I know SANs, I know SLES 11 SP3, and I’ve used that combination for years… but I have no practical System z experience, only x86. So much for the disclaimer.
I would have assumed that your virtualization layer takes care of mapping SAN LUNs to your VM, unless your SLES is configured to access SAN resources directly (which, at least when virtualizing on x86, is rather unlikely).
You have not stated that you are seeing an actual problem - have you already tried this out (“I copied the new SAN info into my VM Directory and can start the boot process.”) and had it fail? If so, the reason may lie in the file system configuration: on x86, I can choose between several ways to reference a file system for mounting. By device node, by UUID, and by label are the most commonly used. Maybe the references used in your SLES VM need to be updated to point at the new disk? That depends on how your virtualization layer reacts to the changed SAN configuration.
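To illustrate the difference, here is a minimal sketch with a made-up fstab excerpt (the device name, UUID and label are invented for the example, not taken from your system): entries mounted by device node can dangle when the underlying SAN path changes, while UUID and label references follow the data wherever it lands.

```shell
# Hypothetical /etc/fstab excerpt, one line per common reference style
# (device, UUID and label values are invented for this example):
cat > /tmp/fstab.sample <<'EOF'
/dev/sda1                                 /boot  ext3  defaults  0 2
UUID=0a1b2c3d-1111-2222-3333-444455556666 /      ext3  defaults  0 1
LABEL=DATA                                /data  ext3  defaults  0 2
EOF

# Classify each entry: device-node mounts may break after a SAN-side
# change, UUID/label mounts follow the file system wherever it lands.
awk '$1 ~ /^\/dev\//  {print $2, "-> device node (may break on migration)"}
     $1 ~ /^UUID=/    {print $2, "-> UUID (follows the data)"}
     $1 ~ /^LABEL=/   {print $2, "-> label (follows the data)"}' /tmp/fstab.sample
```

On the real guest you would look at the actual /etc/fstab and compare it against `ls -l /dev/disk/by-uuid` (or `blkid`) to see whether the references still resolve after the migration.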
And although you wrote “the SAN team has migrated all of the LUNs to a replacement box”, depending on the nature of the error you see, maybe the migration didn’t work out as expected and wrecked your LUN? That’s not something I’d bet on, but if everything else looks right and things still fail, I would check that as well.
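One cheap, read-only way to check is to look for a known file system signature on the migrated disk. The sketch below builds a stand-in image file so it can run anywhere; on the real system you would point the check at the migrated SAN device instead (or simply run `blkid` / `fsck -n` against it, which are the usual read-only checks). The device path here is purely an example.

```shell
#!/bin/sh
# Sketch: does the start of this "LUN" still carry an ext2/ext3
# superblock? DEV is a stand-in image file for the example; on the
# real guest it would be the migrated SAN disk's device node.
DEV=/tmp/fake_lun.img

# Build a stand-in image carrying the ext magic (0xEF53, little-endian)
# at byte offset 1080 = 1024 (superblock start) + 56 (s_magic field).
dd if=/dev/zero of="$DEV" bs=1024 count=2 2>/dev/null
printf '\123\357' | dd of="$DEV" bs=1 seek=1080 conv=notrunc 2>/dev/null

# Read the two magic bytes back and compare.
magic=$(dd if="$DEV" bs=1 skip=1080 count=2 2>/dev/null | od -An -tx1 | tr -d ' ')
if [ "$magic" = "53ef" ]; then
    echo "ext superblock magic found - the data looks intact"
else
    echo "no ext magic - the LUN contents may be damaged"
fi
```

If the signature (or `blkid` output) is gone on the real LUN, the migration rather than the boot configuration is your problem.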