Xen hypervisor on SLES 12 SP3 on OpenStack Cloud 7

Hi guys,
I have a case to ask about. I have OpenStack Cloud 7 (SOC 7) up and running. One of its purposes is to provide VMs that people can use for lab exercises. Currently I have an instance on the cloud running SLES 12 SP3, and I tried to install the Xen hypervisor on top of that instance. After installation, the system asked me to reboot into the Xen kernel. I did, but it eventually failed: the instance shut itself down. So I started it again and booted into the Xen hypervisor kernel. This time the system hung forever at the step “A start job is running for dev-disk-by\x2duuid-ebdeead6\x2d…”. I have a screenshot. Can anyone tell me what the problem is? Is there any way to work around it? Thanks a lot; I am happy to give more details on the environment.

Screenshot:

ducle,

It appears that in the past few days you have not received a response to your
posting. That concerns us, and has triggered this automated reply.

These forums are peer-to-peer, best-effort, and volunteer-run. If your issue
is urgent or not getting a response, you might try one of the following options:

Be sure to read the forum FAQ about what to expect in the way of responses:
http://forums.suse.com/faq.php

If this is a reply to a duplicate posting or otherwise posted in error, please
ignore and accept our apologies and rest assured we will issue a stern reprimand
to our posting bot…

Good luck!

Your SUSE Forums Team
http://forums.suse.com

If your cloud is configured to use KVM instances (which is the default), instances get virtio disks by default, named /dev/vda etc.
When you then install Xen inside the VM and reboot, the Xen kernel cannot see those disks, because virtio support is not implemented in Xen (sometimes things even crash). That is why it fails to boot: it cannot find its disk.

When you run Xen in KVM outside of a cloud, you can configure the guest to use IDE disks and e1000 NICs, with no virtio for memory ballooning or serial consoles, to make it work.
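For the non-cloud case, that kind of guest could be created with virt-install, for example. This is only a sketch; the guest name, disk path, sizes, and ISO path are placeholders, not from the original post:

```shell
# Sketch: create a KVM guest suitable for running Xen inside it,
# using IDE disk emulation and an e1000 NIC instead of virtio.
# All names and paths below are illustrative placeholders.
virt-install \
  --name xen-in-kvm-test \
  --memory 4096 \
  --vcpus 2 \
  --disk path=/var/lib/libvirt/images/xen-test.qcow2,size=20,bus=ide \
  --network network=default,model=e1000 \
  --graphics vnc \
  --cdrom /var/lib/libvirt/images/SLE-12-SP3-Server-DVD-x86_64.iso
```

The key parts are `bus=ide` on the disk and `model=e1000` on the NIC, which replace the default virtio devices with hardware the Xen kernel can drive.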

In a cloud this is trickier to arrange, but this should get you closer:

glance image-update \
  --property hw_disk_bus=ide \
  --property hw_cdrom_bus=ide \
  --property hw_vif_model=e1000 \
  $IMAGEID
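If the deployment uses the unified OpenStack client rather than the legacy glance client, the same image properties can be set like this (a sketch; `$IMAGEID` is the image's ID or name, as above):

```shell
# Same properties via the unified OpenStack client:
# new instances booted from this image will get an IDE disk,
# an IDE CD-ROM, and an e1000 NIC instead of virtio devices.
openstack image set \
  --property hw_disk_bus=ide \
  --property hw_cdrom_bus=ide \
  --property hw_vif_model=e1000 \
  $IMAGEID
```

Note that these properties only affect instances launched after the change; the existing instance would need to be rebuilt or re-created from the updated image.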