Move SLES DomU from SLES10 SP4 host to SLES11 SP2 host - no go

Here’s my setup:
SLES 10 SP4 64-bit XEN Host
I have TWO physical disk-backed DomUs:
One is paravirtualized SLES 10 SP3 64-bit
One is Fully Virtualized SLES 11 SP1 32-bit

Rather than upgrade the host via an offline/in-place upgrade to SLES11 SP2, I powered off the server.
Disconnected the LUNs via our SAN (we boot from SAN)
Created a new LUN0
Attached it to the server
Booted the SLES11 SP2 media and installed SLES11 SP2 as a physical machine
After the install, I patched the server
Then I went into YaST and added the Xen Hypervisor and Tools
All is good.

Now, I powered the server off
Re-attached my two other LUNs that hold my DomUs
Booted the server back up
The server can see the LUNs just fine

I manually went into Virt-Manager and created a paravirtualized VM with the same settings as what was on the SLES10 host (I have the xm list -l output I exported, so I know what the config was). I pointed it at the same physical disk (/dev/disk/by-id/scsi-bighairyGUID).
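For reference, the relevant bits of that exported config (translated back into /etc/xen/vm syntax from the SXP dump, so treat it as a rough sketch - name and GUID are obviously sanitized placeholders) look something like:

name = "mydomu"
disk = [ 'phy:/dev/disk/by-id/scsi-bighairyGUID,xvda,w' ]
bootloader = '/usr/lib/xen/boot/domUloader.py'
bootloader_args = '--entry=xvda1:/boot/vmlinuz-xen,/boot/initrd-xen'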

However, when the DomU tries to boot all I get is:
Boot failed
Boot loader didn’t return any data

???

The FULLY virtualized machine actually boots, but for some reason cannot find the / partition (though it obviously finds the /boot partition).

Now, if I power off the server, remove the SLES11 SP2 Dom0 boot lun, re-attach the SLES10 SP4 Dom0 boot lun, boot it up, things load just fine.

Is there some conversion that needs to be done between a DomU that was originally created in SLES10 SP4 64-bit when going to SLES11 SP2 64-bit?
I couldn’t find anything in the SLES 11 SP2 docs EXCEPT if you created a VM in SLES10 (no SP), but that is not the case here.

Hi kjhurni,

SLES10 is quite some time ago for me, but I recall having had to change our DomU configurations to migrate Dom0 from SLES10 to SLES11.

I’d recommend setting up a new DomU on the SLES11 Dom0 and cloning that config (pointing to the old DomU disk image, adjusting the NIC MAC address and the like). I don’t remember having had to change anything inside the DomU, just its configuration. But again - it’s been many moons since I’ve done this, so ymmv…
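Just to illustrate what I mean (a sketch from memory - name, MAC and bridge are placeholders you'd adjust), the resulting config file on the SLES11 Dom0 would be along the lines of:

name = "mydomu"
memory = 2048
vcpus = 2
bootloader = "/usr/bin/pygrub"
disk = [ 'phy:/dev/disk/by-id/scsi-bighairyGUID,xvda,w' ]
vif = [ 'mac=00:16:3e:xx:xx:xx,bridge=br0' ]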

Regards,
Jens

[QUOTE=jmozdzen;9617]Hi kjhurni,

SLES10 is quite some time ago for me, but I recall having had to change our DomU configurations to migrate Dom0 from SLES10 to SLES11.

I’d recommend setting up a new DomU on the SLES11 Dom0 and cloning that config (pointing to the old DomU disk image, adjusting the NIC MAC address and the like). I don’t remember having had to change anything inside the DomU, just its configuration. But again - it’s been many moons since I’ve done this, so ymmv…

Regards,
Jens[/QUOTE]

Thanks, but I think that’s basically what I did on the SLES11 SP2 Dom0:

Created a new VM in Virt-Manager, said I had an EXISTING disk with an OS on it, pointed it to the same disk (the old DomU disk), etc.


Ugh, apparently it’s this:

http://www.novell.com/support/kb/doc.php?id=7002815

Nasty way to “convert” things: either configure it to keep using the old stuff (probably not ideal), or go through the pain of mounting the DomU’s disk inside the Dom0 (ack!)
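If I do end up having to mount the DomU's disk inside the Dom0, I'm guessing it would be something along these lines (sketch only; the exact /dev/mapper names will differ on my box):

kpartx -a /dev/disk/by-id/scsi-bighairyGUID          # map the partitions of the DomU disk
mount /dev/mapper/scsi-bighairyGUID-part1 /mnt       # the DomU's /boot partition
vi /mnt/grub/menu.lst                                # add/fix the entry pygrub needs
umount /mnt
kpartx -d /dev/disk/by-id/scsi-bighairyGUID          # remove the partition mappings again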

Hi,

sorry then - I thought you had some way to re-import those exported settings. We’re mostly working with config files, so I’m not up to date on what magic virt-manager can do nowadays :wink:

Any hints in the DomU’s log? Whenever I came across that error message, it was because the DomU loader was unable to cope with the disk or menu.lst layout. Anything unusual with, e.g., the file system used for the DomU’s /boot (maybe no longer supported by the new Dom0)?
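(If you haven't checked yet: the boot loader messages for PV DomUs usually end up in the Dom0's xend logs, i.e. something like

tail -n 50 /var/log/xen/xend.log
tail -n 50 /var/log/xen/xend-debug.log

- that's where I'd look first.)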

Regards,
Jens

Thanks for reporting back.

Having to modify DomUs to run them under an upgraded Dom0 is a PITA, for sure.

How’s your HVM going, have you got that running, too?

Regards,
Jens

[QUOTE=jmozdzen;9621]Thanks for reporting back.

Having to modify DomUs to run them under an upgraded Dom0 is a PITA, for sure.

How’s your HVM going, have you got that running, too?

Regards,
Jens[/QUOTE]

Well I was hoping to use the “easy” part of the TID, but it’s extremely sparse (no details, etc):

[QUOTE]running domU’s on SLES 10 and SLES 11

If a DomU will be run on both SLES 10 and SLES 11 Dom0 hosts, install the DomU on SLES 10. After the DomU is created, use the /etc/xen/vm file to make changes to hardware and to start and stop the DomU.[/QUOTE]

But they don’t state which file (the .xml or the other one). Further, those files are only ever there ONCE (when you initially create the VM in Xen). Any changes made later (i.e., RAM, etc.) are not reflected in them, and Novell doesn’t list a way to get them exported there. The xm commands export to a completely different file/format.

I COULD use the domUloader method in the TID (I have the config files), but then I’m not sure what’ll happen with the networking, because in SLES 10 the network cards are defined with an entirely different syntax than in SLES11 (brX vs. vifXX).
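From what I can tell it boils down to something like this (sketch; MAC and bridge names are just placeholders):

# SLES 10 style - bridge created by Xen's own network scripts:
vif = [ 'mac=00:16:3e:xx:xx:xx,bridge=xenbr0' ]

# SLES 11 style - bridge created beforehand (e.g. via YaST) and referenced by its name:
vif = [ 'mac=00:16:3e:xx:xx:xx,bridge=br0' ]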

[QUOTE]domUloader: use or import legacy /etc/xen/vm files

domUloader still ships with SLES 11, but is not the default boot loader for “vm-install” which is used by Virt-Manager and YaST. If you have not modified the domU using “xm” commands, YaST, or “Virt-Manager”, then you can re-import the DomU configuration. To do so simply type “xm new -f <config file>”.[/QUOTE]
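If I read that right, the re-import would just be something like this (the path is my guess at where the old config file would live):

xm new -f /etc/xen/vm/mydomu    # import the legacy config into the xend managed-domain store
xm start mydomu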

I’m just wanting to make sure I don’t shoot myself in the foot with using the “legacy” stuff (that always implies stuff is going away and shouldn’t be used).

I suppose I could be daring and try this:

[QUOTE]pygrub: add a menu.lst file to existing hosts[/QUOTE] but these are physical disks, so I can’t use the TID they list, although I can easily fire up the SLES10 XEN host, boot into the XEN DomU and manually do what the rest of the section says to do, I guess.

kjhurni wrote:
[color=blue]

Ugh, apparently it’s this:

http://www.novell.com/support/kb/doc.php?id=7002815[/color]

I was thinking about your issue, then saw this post. As I read through
the TID, I found it a bit confusing.

As I understand it, DomUs running under a SLES 11 Dom0 need grub to
boot. Since you have both SLES 10 and SLES 11 Dom0s available, can't
you just:

  1. Start DomU from your SLES 10 Dom0

  2. Install grub boot loader.

  3. Shutdown

  4. Start DomU from your SLES 11 Dom0

Just curious. It might save you having to access the DomU's file system from the Dom0.


Kevin Boyle - Knowledge Partner
If you find this post helpful and are using the web interface,
show your appreciation and click on the star below…

[QUOTE=KBOYLE;9624]As I understand it, DomUs running under a SLES 11 Dom0 need grub to boot. Since you have both SLES 10 and SLES 11 Dom0s available, can't you just:

  1. Start DomU from your SLES 10 Dom0
  2. Install grub boot loader.
  3. Shutdown
  4. Start DomU from your SLES 11 Dom0[/QUOTE]

I can certainly try that, but the TID seems to indicate you can only do that during the initial installation time AND only on SLES11. That’s the first item in the TID:

[QUOTE]installation change: install a boot loader
During the installation of any Linux DomU under SLES 11, make sure to install Grub. Installing a boot loader will ensure that a /boot/grub/menu.lst file is populated, and will allow DomU’s to boot normally. [/QUOTE]

What’s puzzling is that two items in the TID indicate it’s because you’re missing /boot/grub/menu.lst, but if I mount the paravirtualized DomU’s physical /boot partition, I do see a /grub/menu.lst there.

But maybe it’s one of those odd/weird Xen host things where the file is there on disk, yet the boot loader can’t see it?
Either way, I think the first item in the TID kind of conflicts with the third item:

The first item indicates all you need is the menu.lst file.
The third item indicates that not only do you need that, it needs to contain a VERY specific entry (which differs from the existing one I have):

[QUOTE]title Default Kernel
root (hd0,0)
kernel /vmlinuz-xen root=/dev/xvda2 splash=silent showopts vga=0x31a
initrd /initrd-xen[/QUOTE]

I believe the one that SLES10 creates has a different kernel line and initrd line there.
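From memory, the SLES10-created entry looks more like this (kernel version is a placeholder, so treat it as a sketch):

title SUSE Linux Enterprise Server 10
root (hd0,0)
kernel /vmlinuz-<version>-xen root=/dev/xvda2 splash=silent showopts
initrd /initrd-<version>-xen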

[QUOTE=kjhurni;9623]Well I was hoping to use the “easy” part of the TID, but it’s extremely sparse (no details, etc):

But they don’t state which file (the .xml or the other one). Further, those files are only ever there ONCE (when you initially create the VM in Xen). Any changes made later (i.e., RAM, etc.) are not reflected in them, and Novell doesn’t list a way to get them exported there. The xm commands export to a completely different file/format.[/QUOTE]
I think they’re talking about the “other” one - at least that’s what we’re using… and while the files are there until you delete them, you are right that they are not updated when you alter the definitions stored in the Xen config DB. Once you’re into clustering, you’ll want to work only with the files, as modifications to the Xen DB will only persist on a single Xen server, while you can share the files across many Xen Dom0s. But it should be easy to create up-to-date files, either manually or using virt-manager (just create a new VM with the proper settings and e.g. use the result as a template). Just keep in mind that you’ll have to actually DELETE the DomU definitions from the Xen store, else “xm create <domU>” won’t have the desired effect and “xm start <domU>” will use the Xen store definitions, rather than the config files.
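Roughly, the sequence I have in mind would be (the DomU name is a placeholder):

xm list                        # note the DomU's name as known to the Xen store
xm delete mydomu               # remove the stored definition (the config file stays untouched)
xm create /etc/xen/vm/mydomu   # from now on, start the DomU from the config file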

Actually, they’re not that different. SLES11 has simply dropped the Xen scripts that set up all the bridging environment - you do that in advance (outside the Xen configuration, e.g. via YaST) and then reference, in the DomU config file, the bridge name you want the VIF to attach to. We did the same in SLES10 already, as we had other ideas about bridge names etc. than the way Xen handled it :smiley:
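On the Dom0 that just means a regular bridge definition, e.g. something like this (device names and address are only examples):

# /etc/sysconfig/network/ifcfg-br0
STARTMODE='auto'
BOOTPROTO='static'
IPADDR='192.168.1.10/24'
BRIDGE='yes'
BRIDGE_PORTS='eth0'
BRIDGE_STP='off'
BRIDGE_FORWARDDELAY='0'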

I believe the DomU config file approach will remain active for quite some time - running a Xen cluster via a shared Xen store would be more difficult - AFAIK that isn’t available yet, and IMO there’s no “business case” to implement it just to replace config files. But as boot loaders change and new features become available, the syntax and/or available commands within the config files will change over time… like when going from SLES10 to SLES11.

[QUOTE=kjhurni;9623]I suppose I could be daring and try this:

but these are physical disks, so I can’t use the TID they list, although I can easily fire up the SLES10 XEN host, boot into the XEN DomU and manually do what the rest of the section says to do, I guess.[/QUOTE]

Indeed a valid approach - just make sure those changes are compatible with the SLES10 environment, else you’re stuck running a SLES11 Dom0 :wink:

With regards,
Jens

I also found the TID confusing. It implied that there was no menu.lst
and no grub boot loader. As I read it, resolving that would allow it to
boot; however, you say your menu.lst is already present.

When boot fails, do you still have access to the console? Maybe there
are some error messages that would help? Perhaps there is something in
the kernel ring buffer, if it is not too early in the boot process? It
can be displayed using “dmesg”. I think there is a key combination too
() but I’m not sure.


Kevin Boyle - Knowledge Partner
If you find this post helpful and are using the web interface,
show your appreciation and click on the star below…

Using multipath on SAN?
Try adding features “no_partitions” (in multipath.conf) for your SAN LUNs.
This prevents multipath from creating maps for partitions;
those maps prevent direct access to the disks, and that’s why the VMs are
not booting from the LUN.
This issue affects only paravirtualized DomUs, or
DomUs with paravirtualized drivers installed (like a Windows server).

My two cents,

Petri


That should be features “1 no_partitions” in the SAN device section of /etc/multipath.conf.
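In context, the device section would look something like this (vendor/product strings are placeholders for your array):

devices {
        device {
                vendor   "VENDOR"
                product  "PRODUCT"
                features "1 no_partitions"
        }
}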


[QUOTE=paca;9631]That should be features “1 no_partitions” in the SAN device section of /etc/multipath.conf.[/QUOTE]

Interesting - yes, we are using multipathing. Odd that the devmapper partition maps would break things in SLES11 when they work fine in SLES10. I’m also puzzled why the fully virtualized DomU’s /boot partition works fine in SLES11, if the mapping were the problem.

But I can give it a whirl and see what happens.

[QUOTE=KBOYLE;9628]I also found the TID confusing. It implied that there was no menu.lst
and no grub boot loader. As I read it, resolving that would allow it to
boot; however, you say your menu.lst is already present.

When boot fails, do you still have access to the console? Maybe there
are some error messages that would help? Perhaps there is something in
the kernel ring buffer, if it is not too early in the boot process? It
can be displayed using “dmesg”. I think there is a key combination too
() but I’m not sure.[/QUOTE]

The boot fails in that you cannot even start the virtual machine via Virt-Manager.
The FULLY virtualized one you can start; it boots up to the GRUB menu, tries to boot, and then complains it cannot find the / partition.

I may try manually editing the file and make the changes or something.

kjhurni wrote:
[color=blue]

Here’s my setup:
SLES 10 SP4 64-bit XEN Host
I have TWO physical disk-backed DomU
One is paravirtualized SLES 10 SP3 64-bit
One is Fully Virtualized SLES 11 SP1 32-bit[/color]

I have one customer whose Dom0 is also SLES 10 SP4 64-bit. There were
issues, which have now been resolved, that prevented an upgrade. Upon
checking XenStore DomU configurations, some show:
[color=blue]

(bootloader /usr/bin/pygrub)[/color]

while others show:
[color=blue]

(bootloader /usr/lib/xen/boot/domUloader.py)
(bootloader_args ‘–entry=xvda1:/boot/vmlinuz-xen,/boot/initrd-xen’)[/color]

…depending on the version of SLES on Dom0 when the DomU was created.

http://www.novell.com/support/kb/doc.php?id=7002815

…shows a sample configuration with[color=blue]

bootloader=“/usr/lib/xen/domUloader.sys”[/color]
but as you can see, mine is not the same.

Obviously, I’m very interested in any progress you make. I have a
production server here and need to understand just how an upgrade on
Dom0 to SLES11-SP2 will impact my DomU’s.

Thank you for sharing.


Kevin Boyle - Knowledge Partner
If you find this post helpful and are using the web interface,
show your appreciation and click on the star below…

KBOYLE wrote:
[color=blue]

I have one customer whose Dom0 is also SLES 10 SP4 64-bit.[/color]

Correction:

The DomU is SLES11. That explains why one DomU was created with pygrub.

All DomU’s are working properly. I wasn’t even aware of the different
bootloaders until I looked. Still, I’ll wait until I see how you make
out before upgrading to SLES11-SP2. :slight_smile:


Kevin Boyle - Knowledge Partner
If you find this post helpful and are using the web interface,
show your appreciation and click on the star below…

And remember, after changing/adding /etc/multipath.conf, to run mkinitrd
and reboot the system.


[QUOTE=KBOYLE;9644]Obviously, I’m very interested in any progress you make. I have a
production server here and need to understand just how an upgrade on
Dom0 to SLES11-SP2 will impact my DomU’s.[/QUOTE]

Okay, I’ve come to the conclusion that the TID is full of all sorts of problems/inaccuracies.

You cannot just do:
xm new -f <config file>

Because it complains about the DomID (and how do you determine the domid of the Xen host?).

Also, in my case I am NOT upgrading the Xen host. At least not in the sense that I am going to throw in the SLES11 SP2 .ISO, boot from it and “upgrade”. The last time I tried that, it totally wrecked my networking setup from SLES10 SP3 (granted, it was SLES11 at the time).

kjhurni wrote:
[color=blue]

Okay I’ve come to the conclusion that the TID is full of all sorts of
problems/inaccuracies[/color]

Perhaps it’s time you provided some TID feedback. A revised TID may not
resolve your issue but it may prevent others from experiencing the same
frustrations as you are!

Despite the “inaccuracies”, it also appears that the TID is incomplete.
It would be much more helpful if a single document addressed all the
undocumented issues one might encounter when attempting such an upgrade.


Kevin Boyle - Knowledge Partner
If you find this post helpful and are using the web interface,
show your appreciation and click on the star below…