Xen to KVM: cannot find vda2, falls back to xvda2

I am trying to convert a Xen VM to KVM and have followed the SUSE Xen to KVM guide. When the system starts, it cannot find the vda1 and vda2 disk drives. Also, before this, an error appears regarding xenblk. I have searched all the files mentioned in the guide and found no mention of xenblk or xvda2 in any of them. Any ideas of what I am missing? I am using virtio as per the guide.

Cheers,

ChasR.


chas wrote:
> Any ideas of what I am missing?

Not really…

Since no one else has replied, it's likely because no one else knows
either. Remember, we are just volunteers who answer the questions we can.

In your case, I would suggest you open a Service Request with SUSE.
They’ll have the necessary expertise to get to the bottom of this.


Kevin Boyle - Knowledge Partner
If you find this post helpful and are logged into the web interface,
show your appreciation and click on the star below…

On 28/03/2014 08:14, chas wrote:
> I am trying to convert a XEN vm to KVM and have followed the SUSE XEN to
> KVM Guide. When the system starts, it cannot find the vda1 and vda2 disk
> drives. Also, before this, there is an error which appears regarding
> xenblk. I have searched all the files mentioned in the guide, and do not
> have any mention of xenblk or xvda2 within any of them. Any ideas of
> what I am missing? I am using virtio as per the guide.

Firstly which version(s) of SLES is/are your host and guest running?

When you say you’ve used the “SUSE XEN to KVM Guide”, do you mean SUSE’s
Xen to KVM Migration Guide at
https://www.suse.com/documentation/sles11/art_sles_xen2kvmquick/data/art_sles_xen2kvmquick.html ?

What do you mean by “before this” when you mention the error re xenblk?
As part of the migration you need to edit /etc/sysconfig/kernel and
change the xenblk and xennet references in INITRD_MODULES.
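For illustration (module names assumed, and shown on a sample file rather than the live /etc/sysconfig/kernel), the edit amounts to something like:

```shell
# Sketch: swap the Xen paravirtual modules in INITRD_MODULES for their
# virtio counterparts before regenerating the initrd.
printf 'INITRD_MODULES="processor thermal ata_piix fan jbd ext3 xenblk xennet"\n' > /tmp/kernel.sample
sed -i -e 's/xenblk/virtio_blk/' -e 's/xennet/virtio_net/' /tmp/kernel.sample
cat /tmp/kernel.sample
```

After that change, regenerating the initrd should pull in the virtio drivers instead of the Xen ones.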

HTH.

Simon
SUSE Knowledge Partner


Hi, thanks for getting back to me. So far I have migrated a Windows XP desktop, a Windows 2012 Server and a SLES11 SP3 server from the Xen server to the new KVM server. I followed the SUSE Xen to KVM publication and these worked OK apart from a few gotchas on the Windows machines (I will document these once I get the other problem sorted). The SLES11 SP3 server was an original SP3 install and had not been upgraded.

The two servers I am having a problem with are a SLES11 SP3 which was upgraded from an original SLES 11 install, and a SLES 11 SP2 (OES 11.1 patch level 1) which came from an original SLES 10 install. Both exhibit the same problem: they cannot find the vda1 and vda2 file systems. I have tried this many times now and checked the .xml file against the successful SLES migration and a new SLES11 SP3 test server, together with the kernel and other related files shown in the migration manual. I also changed mtab, which is not mentioned in the manual, but in every case the boot process hangs waiting for device vda1 and then device vda2. I have tried changing the bus to ide and the names to sda and hda, but nothing makes any difference, as the boot process cannot detect the disk partitions.

The xenblk error was my own fault, as I removed it both from the line suggested in the manual and from a line further down in the kernel file. Once this other entry was left alone, no errors occurred there.

Any suggestions where I can look next to correct this problem would be most welcome.

Regards

ChasR

Hi ChasR,

do you by chance have the original error messages?

"vda1" and "vda2" sound very much like remains of "xvda1" and "xvda2", which would have been the first two partitions on the (first) Xen virtual disk. How does KVM present that disk (name-wise) to the virtual machine, and where exactly are these two reported as not found? (My bet is it's during the file system check, so a glimpse at the /etc/fstab file of the VM would be in order - make sure the KVM-specific device names are entered there, or switch to label-based references…)
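To illustrate (partition layout assumed, demonstrated on a sample file rather than the guest's real fstab), the Xen device names would need rewriting to the names virtio-blk presents:

```shell
# Sketch: rewrite Xen device names (xvda*) in fstab to the names a
# KVM/virtio guest sees (vda*). The sample layout is an assumption.
printf '/dev/xvda1 swap swap defaults 0 0\n/dev/xvda2 / ext3 acl,user_xattr 1 1\n' > /tmp/fstab.sample
sed -i 's|/dev/xvda|/dev/vda|g' /tmp/fstab.sample
cat /tmp/fstab.sample
```

Alternatively, label- or UUID-based entries sidestep the renaming entirely.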

Regards,
Jens

Hi, fstab and all the other changes to the drives were made according to the SUSE Xen to KVM migration manual (this worked fine for one VM). I regenerated the image file (mkinitrd) after all the changes were made, so that the virtio drivers would (hopefully) be available to KVM, but I still get "waiting for /dev/vda1" and "waiting for /dev/vda2" when the VM is booting. At this point I get the option to try the Xen drivers (y/n); this does not work, and when I select n, I drop to the $ prompt. Here I can access some directories, but there is no fstab in the etc directory; there is an mtab file, but with no disk entries. guestfish correctly identifies the sda1 and sda2 drives from the raw image, but the KVM boot process appears unable to find any drives at all. I would have thought that regenerating the image file would do the trick, but I have had no luck with this approach.
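For the record, the guestfish check was along these lines (the image path is a placeholder); it lists the file systems libguestfs finds in the image, independent of the guest's own boot process:

```shell
# Inspect the raw image offline, read-only, and list its file systems.
guestfish --ro -a /var/lib/libvirt/images/sles11.raw run : list-filesystems
```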

Thanks,

ChasR.

Hi ChasR,

What are the correct device names under KVM, according to your /etc/fstab? I haven’t toyed with KVM yet, so I simply don’t know.

Sounds like you’re still in the initrd phase. Usually it only checks for a single file system - root. Which root fs device does your boot loader point to?
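For comparison, a hypothetical /boot/grub/menu.lst entry for a KVM guest (kernel version and partition layout are assumptions) would carry the KVM-side name in the root= parameter:

```
title SLES 11 SP3 (KVM)
    root (hd0,1)
    kernel /boot/vmlinuz-3.0.101-0.15-default root=/dev/vda2 resume=/dev/vda1 splash=silent
    initrd /boot/initrd-3.0.101-0.15-default
```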

The initrd environment is pretty limited, indeed. Once you hit that shell, you might at least try to create a temporary mount point and mount the actual root fs:

$ mkdir /mntroot
$ mount /dev/sda1 /mntroot

Replace /dev/sda1 with the actual device carrying your root partition.

Regenerating the initrd only helps if specific drivers need to be included in the initrd environment (including stuff like LVM and dmraid).

In what setup did you call “mkinitrd” - when booted in Xen? mkinitrd by default takes the current setup to detect which root device to use - if that name will change because of a new virtualization layer, you’ll have to specify that manually - as you will have to make sure that all “drivers” for the new environment are included.
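A manual invocation along these lines would pin both the root device and the virtio modules explicitly (the -k/-i/-m/-d flags are as I recall them from SUSE's mkinitrd - check man mkinitrd on your system; device names are assumptions):

```shell
# Rebuild the initrd with the root device named explicitly and the
# virtio drivers forced in, instead of relying on autodetection of
# the currently running (Xen) setup. Flags and paths are illustrative.
mkinitrd -k /boot/vmlinuz-$(uname -r) \
         -i /boot/initrd-$(uname -r) \
         -m "virtio_pci virtio_blk" \
         -d /dev/vda2
```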

I’m still puzzled by the naming, though. What was your root device when booted in Xen? /dev/xvda1? Where does “vda1” come from?

Regards,
Jens

Success at Last!!!
I forgot to change the XML file (which I was playing about with yesterday) to the correct virtio and vda entries in the driver section after regenerating the image with mkinitrd. After retrying the startup, this time the VM loaded properly and the vda1 and vda2 disks were found and recognised. It looks like it is worthwhile regenerating the image (mkinitrd) when you do a Xen to KVM conversion.
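The part of the libvirt domain XML I had missed looks roughly like this (the file path is a placeholder); the dev/bus pair in the target element is what makes the guest see vda on the virtio bus:

```xml
<disk type='file' device='disk'>
  <driver name='qemu' type='raw'/>
  <source file='/var/lib/libvirt/images/sles11.raw'/>
  <target dev='vda' bus='virtio'/>
</disk>
```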

I will document what I found with the windows and linux vms if there is a requirement for this.

Regards

ChasR.

Hi ChasR,

great that you found the root cause! And thank you for reporting back, too.

I’m sorry I got misled by the device naming - it seems that vda is the typical name for paravirtualized disks under KVM.

> I will document what I found with the windows and linux vms if there is a requirement for this.

I’d suggest adding comments to https://www.suse.com/documentation/sles11/art_sles_xen2kvmquick/data/sect1_article_set_en.html - indeed, manually running mkinitrd is not mentioned there (the author probably saw it run implicitly when installing the default kernel), so a hint to that effect would be reasonable.

Or you might want to put an article up at https://www.suse.com/communities/conversations/ and earn some points?

Regards,
Jens

Hi,
Just some further important info. As installing the kernel runs mkinitrd, the list of procedures in the SUSE Xen to KVM migration manual is incorrectly ordered. All the changes to the files (fstab, inittab etc.) should be made first - EXCEPT GRUB. Once these changes have ALL been made, install the new default kernel (which runs mkinitrd and picks up all the correct information for the virtio drivers), and then make the changes to GRUB. This should be the correct order of operations for the migration to work. I intend to carry out this procedure tonight on my final VM migration and will report back tomorrow with the results.

Cheers,

ChasR.

Hi ChasR,

I believe neither fstab nor inittab is part of the initrd; they are read from the “real” root fs. The drivers are configured by the statements in /etc/sysconfig/kernel, so that change should actually pick up the proper drivers on a subsequent mkinitrd run. If that doesn’t work for you, I’d really like to know what I have overlooked, so please report back!

Regards,
Jens

Hi,
Will try the sequence of operations I suggested tonight, and try to come to some conclusion as to where the problem lies. Don’t forget that the migration of the first Linux box, which worked correctly, was an original SLES11 SP3 install that had not been upgraded. I followed the procedure in the manual as per the sequence of operations and it worked fine - no need for mkinitrd. It was the other two, which had been upgraded from earlier versions, where I had the problem! The drivers (or links, or whatever) were not available in the initrd until I ran mkinitrd, after which everything worked OK.

Cheers,

ChasR.

Tried two scenarios last night. First time, I followed the order in the SUSE Xen to KVM manual, and when starting the VM, I got to the stage "waiting for vda1" then "waiting for vda2". (This is the scenario where mkinitrd should be run at the end of the conversion, before the Xen VM is shut down.)
Second time, I did all the file changes (fstab etc.) but left the GRUB changes alone, installed the default kernel (which automatically runs mkinitrd), and then changed GRUB to suit. This time, the VM booted perfectly under KVM.

I leave you to draw your own conclusions!!

Regards

ChasR.

Hi ChasR,

you made me curious :-)

Did you see any indication that mkinitrd was run at all in your first test? This should have been the case when updating the kernel, and you might have seen messages output by “mkinitrd” in the log (depending on how you invoked the install of the default kernel).

Regards,
Jens

Yes, it ran at the end of the kernel install, like in the second test, and the output scrolled to the terminal window. Sorry, I did not get a chance to check the output for virtio as I was working against time, but for some reason it did not pick up the required elements, although it worked on the SLES11 SP3 server which had not been through an upgrade stage. Glad I now have a foolproof way of getting the migration to work.

Cheers.

ChasR

chas wrote:
> for some reason, it did not pick up the required elements,

Perhaps this is why:

TID 7011861: Invalid root file system after upgrading or applying patches.

https://www.suse.com/support/kb/doc.php?id=7011861

I do not know if this issue is still present in SLES11-SP3.


Kevin Boyle - Knowledge Partner