New SUSE 11 SP4 VM on Power won't boot after adding new disk


Hi everyone! I’ve got an issue with a new SUSE 11 SP4 VM that I need some help with. I have a brand spanking new Power S812L that I am running some tests on. I have everything installed and set up, with various VMs installed already (only one other runs SUSE 11, but it has only one disk). In a nutshell, I have done the following things:

  1. Created a new VM with SUSE 11, and installed the OS onto a 20GB disk
  2. Shut down the VM and created a new disk in Virsh, which has then been allocated to the new VM
  3. Booted the VM
  4. Watched it fail when it can’t find /dev/vda3
  5. Listed the disks in /dev and realised that the disks are now showing as follows:

brw-rw---- 1 root disk 253, 0 Jul 27 08:23 /dev/vda
brw-rw---- 1 root disk 253, 16 Jul 26 09:46 /dev/vdb
brw-rw---- 1 root disk 253, 17 Jul 26 09:46 /dev/vdb1
brw-rw---- 1 root disk 253, 18 Jul 26 09:46 /dev/vdb2
brw-rw---- 1 root disk 253, 19 Jul 26 09:46 /dev/vdb3

  6. Removed the disk and rebooted, which worked fine

Does anyone know why SUSE has decided my second disk should now be vda and the primary become vdb? Is there anything I can do in the boot config to stop this? Grub and fstab are irrelevant here, since it’s not even finding the boot partition, so I’m figuring this may be an issue with the XML file itself…?
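For context, the ordering I expected comes from the `target dev` attributes on the disk stanzas in the domain XML. A minimal sketch of what those stanzas look like (the image paths, formats, and domain layout here are illustrative assumptions, not my actual config):

```xml
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/libvirt/images/testserver-os.qcow2'/>
  <target dev='vda' bus='virtio'/>
</disk>
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/libvirt/images/testserver-data.qcow2'/>
  <target dev='vdb' bus='virtio'/>
</disk>
```

Note that libvirt treats `dev` largely as an ordering hint for the guest, so the guest kernel may still name the devices by its own probe order, which seems to be what is biting me here.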



rdbsupport wrote on Tuesday, 26 July 2016, 11:34:


Does anyone know why SUSE has decided my second disk should now be vda
and the primary become vdb?

Maybe you want to have a look at the boot option in VMM? There you can
choose the disk to boot from.



What is VMM? This is Power on Linux, so I have the command-line virsh and the web GUI Kimchi to work with here. However, both are somewhat redundant here, since the new disk is configured in the XML file as vdb and the primary as vda, but SUSE sees them the other way around…




OK so I have fixed the issue myself. What I realised is that SUSE uses Yaboot to boot the server. Changing the config to use the UUID was the answer. So, the following:

root = /dev/vda3

Was changed to this:

root = "UUID=xxxxxxxx-xxxxxxx-xxxxxxxxxxxxxxx"

…or something similar :wink:

I got the UUID from /dev/disk/by-uuid. There is one listed for each partition of the disks that exist (besides swap, of course).

The server booted fine with the new disk attached, though the new disk still comes in as /dev/vda instead of vdb. Oh well, this is why we use UUIDs!
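For anyone wanting the full picture, the stanza in my Yaboot config ended up looking roughly like this (the image and initrd paths and the label are assumptions about a typical setup, and the UUID is the one found under /dev/disk/by-uuid for your root partition):

```
# yaboot config (sketch)
image = /boot/vmlinux
    label = linux
    root = "UUID=293421d4-dba6-4a3e-ac73-fcd34972059d"
    initrd = /boot/initrd
```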




Incidentally, the UUID can be found by typing blkid, which gives me this:

testserver:~ # blkid
/dev/vda1: UUID="5cdfd035-613d-403e-af4f-5e072584818a" TYPE="ext4"
/dev/vdb2: UUID="df50d5be-6cd9-41ec-ba12-551b436ee851" TYPE="swap"
/dev/vdb3: UUID="293421d4-dba6-4a3e-ac73-fcd34972059d" TYPE="ext3"

Hope this helps somebody somewhere at some point :slight_smile:
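If you only want the bare UUID for scripting, `blkid -s UUID -o value /dev/vda1` prints just the value. The same extraction can also be sketched in plain shell against blkid-style output (the sample line below mirrors the output shown above; the function name is just my own):

```shell
# Sketch: extract the UUID for one device from blkid-style lines on stdin.
# The sample line is copied from the blkid output shown above.
get_uuid() {
  sed -n "s|^$1: UUID=\"\([^\"]*\)\".*|\1|p"
}

echo '/dev/vda1: UUID="5cdfd035-613d-403e-af4f-5e072584818a" TYPE="ext4"' \
  | get_uuid /dev/vda1
# prints: 5cdfd035-613d-403e-af4f-5e072584818a
```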



Hi Tom,

thanks for sharing the solution!

While I don’t know if it will work with Yaboot (I believe this was added in 2004, but I never used it myself), with e.g. “mount” you can also access the disks via their file system labels. I find labels useful as they are easier for humans to read. And I pay special attention to creating a unique label when the file system is not fixed to a single system (e.g. on a removable disk that gets attached to other systems, too), to avoid confusion for the operators :wink:

udev will create corresponding links in /dev/disk/by-label, “blkid” will show the labels as LABEL=“yourLabel”, and you can set them either when creating the file system or at any later time, e.g. via the “e2label” command (for ext2/3/4 file systems).
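To make that concrete, here is a small sketch of setting and changing a label with e2label. It runs against a file-backed ext2 image so no real disk is touched; the label names are made up:

```shell
# Sketch: create a tiny file-backed ext2 file system, label it at mkfs
# time with -L, then relabel it later with e2label (label names made up).
img=$(mktemp)
dd if=/dev/zero of="$img" bs=1024 count=1024 2>/dev/null
mke2fs -F -q -L datadisk "$img"   # -L sets the label at creation time
e2label "$img"                    # prints the current label: datadisk
e2label "$img" scratch01          # relabel at any later time
e2label "$img"                    # now prints: scratch01
rm -f "$img"
```

On a real disk you would point e2label at the partition (e.g. /dev/vdb3) instead of an image file, and udev would then create the matching /dev/disk/by-label link.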



Thanks for adding that. You’re right, and I’m not sure whether Yaboot supports labels either, but I’ll be sure to test it the next time I have issues.

I also forgot to mention in my solution where the Yaboot config was! On my system, the configuration lives in /etc/yaboot.conf.