SLES12 ESXi guest fails to boot when adding addl. disks

Hi all,
we are experiencing boot issues when adding vDisks to a SLES12 SP1 guest VM.

We installed the template with the default disk layout (LVM) on SCSI 0:0, attached via the VMware pvscsi driver.

Now when we add additional vDisks (e.g. on SCSI 1:0 or 2:0) using pvscsi, the VM fails to boot. It looks to me as if the disk device ordering gets mixed up, so the boot device is not found and the system falls back to network boot (which of course will never succeed).

The VM is configured in BIOS mode, and GRUB2 is installed in the MBR.

I wonder what the best practices are for SLES12 VMware guest VMs so that the system ALWAYS boots from SCSI 0:0 as /dev/sda, no matter how many additional disks we add, change, or remove.

Any ideas?

Thanks & regards,

Arnold

Hi Arnold,

I might be wrong, but to me it sounds as if the virtual BIOS already tries to boot from the wrong disk. Are you able to configure its boot settings to look at the proper disk?
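
If editing the VM's .vmx file is an option, you can pin the BIOS boot order there. A minimal sketch, assuming the boot disk really is the one on SCSI controller 0, target 0 (these are the option names from VMware's documented BIOS boot-order settings; please verify them against your ESXi version, and edit only while the VM is powered off):

    bios.bootOrder = "hdd"        # try hard disks before network boot
    bios.hddOrder  = "scsi0:0"    # boot from the disk on SCSI controller 0, target 0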

If it were the Linux bootloader or something in a later stage of the Linux boot, you could expect some "no root fs found"-type of message. A system typically enters network boot if no local boot device with a proper boot sector was found.

> I wonder what the best practices are for SLES12 VMware guest VMs so that the system ALWAYS boots from SCSI 0:0 as /dev/sda, no matter how many additional disks we add, change, or remove.

There is no such practice in Linux (note that I'm not saying SLES here!); Linux moved to dynamically assigned device names long ago.

You can reference disks and file systems in various ways: by hardware ID, by physical location (bus + sub-addressing), by file system UUID, or by file system label. Just pick the reference that best suits your needs.
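
For illustration, udev maintains all of these as persistent names under /dev/disk/ (the exact entries depend on your hardware and file systems):

    ls -l /dev/disk/by-id/      # hardware IDs of the disks
    ls -l /dev/disk/by-path/    # physical location (controller/bus/target)
    ls -l /dev/disk/by-uuid/    # file system UUIDs
    ls -l /dev/disk/by-label/   # file system labels (if any are set)

    blkid                       # show UUIDs and labels of all file systems

    # Example /etc/fstab entry that survives device reordering
    # (the UUID is a placeholder - use the one blkid reports):
    # UUID=0a3b5c7d-1234-4e5f-9abc-def012345678  /data  ext4  defaults  0 2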

But again: from your description, your VM is still at the BIOS stage and has not handed off to the Linux bootloader, so it looks more like a VMware / VM BIOS configuration issue.

Regards,
Jens