Converting SLES VMs from VMware to Hyper-V

Hello,
I have a site in the process of converting from a VMware 5.5 environment to a Microsoft 2012 R2 Hyper-V environment. We are using Microsoft Virtual Machine Converter 3.1 to handle the conversion. We have both SLES 11 SP3/SP4 and SLES 12 VMs to convert.

For the SLES 11 VMs, as long as I add "hv_storvsc" to the INITRD_MODULES in /etc/sysconfig/kernel BEFORE the migration, it works perfectly.
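
Roughly, the SLES 11 edit looks like this (a sketch; the other modules shown are just placeholders for whatever your system already lists):

# /etc/sysconfig/kernel (excerpt)
INITRD_MODULES="ata_piix mptspi hv_storvsc"

Then rebuild the initrd so the module is actually included:

mkinitrd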

For the SLES 12 though, I’ve been struggling. After the conversion, the VMs won’t boot because they cannot find the disks. In SLES 12 the disks are all mounted by-uuid. I tried adding the hv_vss_daemon, hv_fcopy_daemon, and hv_kvp_daemon to the VM before the conversion, but that made no difference. I’m not sure what the best way to rectify this would be. Has anyone done this and can you provide any pointers? Thanks.

Matt

Hi Matt,

While I have no experience with Hyper-V, your description sounds as if the root issue is that the system is looking for disk UUIDs that no longer match after migration. If that's true, you might just convert to mounting by label prior to conversion.
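
For example, with an ext3 file system (the device name and label here are illustrative):

# give the root file system a label
tune2fs -L ROOT /dev/sda2

# /etc/fstab: replace the UUID-based entry with a label-based one
# old: UUID=1234-abcd-...  /  ext3  defaults  1 1
LABEL=ROOT  /  ext3  defaults  1 1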

On the other hand, I'm surprised that the UUID of the file system changed. Can you check via a maintenance login (after migration) whether the system can actually access the virtual disks and is really just looking for the wrong device?
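
From the maintenance/emergency shell, something like this would show whether the virtual disks are visible at all, and which UUIDs they actually carry:

lsblk
blkid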

Regards,
J.

Well, after many trials and tribulations, I finally figured out a procedure to prep the SLES 12 VMs before the migration. Here is what I’ve found that works:

  1. Install the Hyper-V support modules: zypper in hyper-v

  2. Enable the Hyper-V support modules:

systemctl enable hv_vss_daemon.service
systemctl enable hv_fcopy_daemon.service
systemctl enable hv_kvp_daemon.service

  3. Edit /etc/dracut.conf and change the add_drivers line to this:

add_drivers+="hv_vmbus hv_netvsc hv_storvsc"

  4. Run the following command to rebuild initramfs:

dracut -f -v
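
To double-check that the drivers actually made it into the new initramfs, lsinitrd (part of dracut) can list its contents:

lsinitrd | grep hv_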

I have been installing all current patches and making sure the VM still boots under VMware before the migration as well. You also have to permit root login over SSH so the Microsoft converter can log in.
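
In case it helps, the usual way to allow that (remember to tighten it again afterwards) is setting this in /etc/ssh/sshd_config:

PermitRootLogin yes

and then restarting the daemon:

systemctl restart sshd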

So far, if I do all that, the conversions seem to work fine. I took a lot of this from Red Hat info, as I could find absolutely nothing talking about SUSE on Hyper-V.

Hope this helps someone!

Matt

Hi Matt,

Thank you a lot for providing this contribution! At least people searching for this info now have something to read.

BTW, since your steps to success basically enable the Hyper-V drivers and services in a SLES 12-compatible way, this supports the earlier assumption that the source of your trouble was not the use of UUIDs, but that access to the virtual disks themselves was missing. Good to know, too!

Regards,
Jens

[QUOTE=jmozdzen;32696]Hi Matt,

Thank you a lot for providing this contribution! At least people searching for this info now have something to read.
[/QUOTE]

Yes, it was very frustrating not finding anything. I thought Microsoft and SuSE had this strong relationship?

I thought about changing the mount type to by-name, but I wasn’t sure what impact that would have. I’m still kinda new to SLES 12, and it seems UUID is the preferred method. I figured it was more of a driver issue. I had the same problem with the SLES 11 boxes; I just had to make sure I added the storage driver before the conversion.

Overall they are working fine, but the Hyper-V admin is questioning whether we have all the "proper" tools installed for Hyper-V support. The SLES VMs don’t report their tools version like the CentOS and Windows servers do. Here is a screenshot:

I’m not sure what “Detected” means for the SLES VMs.

Matt

Hi Matt,

As already mentioned, I’m no Hyper-V person, so I can only guess that the drivers you installed do not report their versions, for whatever reason. And quoting from https://technet.microsoft.com/de-de/library/dn531027.aspx :

If you need more technical details, I’d suggest opening a service request to have a SUSE support engineer find the applicable and correct answers. Unless, of course, someone else chimes in here and fills the gaps I’m leaving in my answers :-)

Regards,
Jens

I had a ZENworks 2020.2 Appliance running on SLES 12 SP5, built using KIWI. I modified the instructions a little to bring the VM over to Hyper-V, but the fundamentals are the same:

ZENworks and iPrint Appliance - ESXi to Hyper-V Migration

NOTE: The Micro Focus/OpenText ESXi and Hyper-V appliances are ultimately the same image. HOWEVER, if you deploy the ESXi image, ONLY the ESXi/VMware virtual disk drivers are available, and likewise the Hyper-V image ships with only the Hyper-V drivers enabled. So in both directions, ESXi/VMware → Hyper-V or Hyper-V → ESXi, you will find that once you migrate, your appliance will NOT boot.

This is because the virtual disk drivers on each platform are ONLY enabled for the appliance flavor you chose. If you deploy the ESXi/VMware OVA, you will only have the ESXi/VMware virtual disk drivers enabled; the same goes for the Hyper-V appliance (.vhdx file). Deploy that, and you will only have the Hyper-V virtual drivers available, so when you migrate, the appliance will not boot.
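
A quick way to check which flavor of drivers an appliance's initrd actually contains (vmw_pvscsi/vmxnet3 are the VMware ones; hv_vmbus/hv_storvsc/hv_netvsc are the Hyper-V ones):

lsinitrd | grep -E "vmw|vmx|hv_"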

Solution: In my scenario, I want to go from ESXi to Hyper-V. It turns out the modules are already installed but NOT activated. So we will:

  • Enable the Hyper-V modules.

  • Edit /etc/dracut.conf AND create a special config file, /etc/dracut.conf.d/02-hyperv.conf, enabling the drivers in BOTH files (I actually don’t know which conf file activates the drivers, so I loaded them in both; see the sketch after this list).

  • Rebuild the initramfs with dracut, so that the drivers are activated in the appliance’s kernel.
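
As a sketch, the dracut.conf.d part can also be done non-interactively (the spaces just inside the quotes keep driver names from running together if other add_drivers+= lines exist):

echo 'add_drivers+=" hv_vmbus hv_netvsc hv_storvsc "' > /etc/dracut.conf.d/02-hyperv.conf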

Pre-requisite:

  • Shut down your PRODUCTION ESXi/VMware ZCM 2020.2 Appliance.

  • Use Acronis or StarWind Migrator to clone or migrate the machine to another ESXi hypervisor (this is so we are working with a COPY of the VM).

  • On the new ESXi instance of the appliance, make sure the virtual NIC is NOT connected at power-on.

  • Load YaST2.

  • Change the IP address FROM 10.59.XX.XX to some other free IP address (so we are not interfering with the PRODUCTION server; we can always change it back to the real IP once we have it booting).

  • Once changed, turn off the VM and connect the virtual NIC.

  • On boot-up, you will have Internet and LAN connectivity.

  • Perform the steps below.

  1. Install the Hyper-V support modules:
  • zypper in hyper-v

(They are already installed on the appliance, so just move to step 2; I left this in, in case you use a version of SLES 12 SP5 or of the appliance where they are not installed.)

  2. Enable the Hyper-V support modules:

systemctl enable hv_vss_daemon.service
systemctl enable hv_fcopy_daemon.service
systemctl enable hv_kvp_daemon.service

  3. Edit /etc/dracut.conf and change the add_drivers line to this:
  • add_drivers+="hv_vmbus hv_netvsc hv_storvsc"
  4. Add a driver load file in /etc/dracut.conf.d (a directory) to load the Hyper-V drivers as well, just in case (the appliance seems to load from these files):
  • cd /etc/dracut.conf.d
  • vi 02-hyperv.conf
  • Press "i" for insert mode
  • Add the line: add_drivers+="hv_vmbus hv_netvsc hv_storvsc"
  • Save the file with :wq
  • Make sure there are NO syntax errors
  5. Run the following command to rebuild initramfs:
  • dracut -f -v
  6. Now that the drivers are injected, re-migrate or CLONE the machine from ESXi to Hyper-V virtual drives (StarWind or Acronis).

  7. I would highly suggest bringing up the machine on a PRIVATE Hyper-V network switch, so that ONLY another Windows VM on the Hyper-V server can access your server.

  8. The machine should boot now. If it does, and as long as it is on the private Hyper-V switch and can only talk to other machines on that private switch (LAN access and NO Internet access), change the IP back to the ZCM Appliance’s original IP; in our case it is 10.59.XX.XX. (See the quick check after this list.)

  9. Make sure you can ping other devices on the private virtual switch.

  10. Reboot the ZCM 2020.2 Appliance.

  11. From the Windows workstation on the same private Hyper-V switch, make sure you can ping the ZCM Appliance at 10.59.XX.XX. If you can, test EVERYTHING now: the web interface, that all data is there, that it is making an LDAP connection, that you can remote-control machines, that you can install and roll out bundles, and MDM functions.

  12. If it all looks good, your migration should be complete!!!

  13. Shut down your migrated Hyper-V ZCM 2020.2 Appliance, and change the virtual NIC to be on your PUBLIC/REAL LAN virtual Hyper-V switch. Bring the server back up. You are live now. TEST. Make sure PXE is working live on the network, as well as imaging :-)
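
Once the appliance is up on Hyper-V, a quick sanity check that the modules and daemons from the steps above are actually active:

lsmod | grep hv_
systemctl status hv_kvp_daemon.service hv_vss_daemon.service hv_fcopy_daemon.service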
