Boot problem after Restore

Hi Stefan,

when you answer “n” to “want me to fail back to /dev/system/root_lv? (Y/n)”, it should take you to a minimalistic shell (within the initial RAM disk environment). There you’d have (pretty limited) means to get your root LV available. If you’re able to activate the system VG from there manually, exiting that shell would get you running (once) again. You’d still need to find out why the system VG could not be found, and fix that:

The situation is that the initrd knows your root file system is on /dev/system/root_lv, but that cannot be accessed because the complete volume group is unavailable (“no volume groups found”). Typically, this is because some hardware driver is missing, so that the physical volumes (“partitions”, in your case) are unavailable.

[…rescue system, then…] the file /etc/fstab is empty, and when I fill it with the correct partitions, the system does not seem to remember it.

When you boot the rescue system via DVD, you’re not running from your disk - you have a completely separate environment set up, loaded from DVD, even with its own root fs. So the first steps to take are:

  • “vgscan” to let the system find the disks/physical volumes/volume groups containing your “real” system files
  • activate the root vg (“vgchange -ay system”, since your VG is called “system”)
  • mount the “real root” (i.e. “mount /dev/system/root_lv /mnt”)
  • mount any other required file system (var to /mnt/var, usr to /mnt/usr, …)
  • mount your boot file system to /mnt/boot
  • mount /sys and /proc (“mount --bind /sys /mnt/sys; mount --bind /proc /mnt/proc”)
  • “chroot /mnt” to “switch” to your installed system - sort of. This is not your installed system (kernel etc), but only the file systems.

That environment then is pretty complete to do any maintenance/repair work, i.e. to invoke “mkinitrd” to see and/or influence how the initrd is created.
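The DVD rescue steps above, collected into one sketch. Note the assumptions: the LV names usr_lv/var_lv and the boot device /dev/sda1 are placeholders I made up for illustration - only VG “system” and root_lv are confirmed in this thread, so adjust everything else to your actual layout.

```shell
# Sketch of the rescue-DVD sequence as one function; run the steps only
# from a rescue environment, as root, after adjusting names/devices.
rescue_chroot() {
    vgscan                              # find disks/PVs/VGs of the "real" system
    vgchange -ay system                 # activate all LVs in VG "system"
    mount /dev/system/root_lv /mnt      # mount the "real root"
    mount /dev/system/usr_lv /mnt/usr   # other required file systems
    mount /dev/system/var_lv /mnt/var   # (usr_lv/var_lv are assumed names)
    mount /dev/sda1 /mnt/boot           # boot fs (assumed device, adjust!)
    mount --bind /sys /mnt/sys          # mkinitrd needs /sys and /proc,
    mount --bind /proc /mnt/proc        # hence the --bind mounts
    chroot /mnt                         # "switch" to the installed system
}
```

From the chroot you can then invoke “mkinitrd” or do other repair work.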

is there any way to avoid problems like this when I restore SLES 11 on different hardware?

By staying as close as possible to the original hardware, and by setting up the system in a way that you know where it is bound to characteristics of your original hardware (e.g. hardware IDs, MAC addresses, port names/numbers etc.).

Regards,
Jens

Are these the steps I have to execute via the rescue system from DVD?

from the normal system, when I run vgscan, it says

[QUOTE]Reading all physical volumes. This may take a while…
No volume groups found[/QUOTE]

thank you so far.

I did it via DVD,

vgscan found volume group “system”

vgchange -ay system said

2 logical volumes in volume group “system” now active

but then when I try to mount, it says that the folder “system” does not exist.

I looked into /dev and /etc and there was now folder.

[QUOTE=stefan_1304;18562]thank you so far.

I did it via DVD,

vgscan found volume group “system”

vgchange -ay system said

2 logical volumes in volume group “system” now active

but then when I try to mount, it says that the folder “system” does not exist.

I looked into /dev and /etc and there was now folder.[/QUOTE]

what command exactly are you trying to execute to mount the logical volume?

When you activate a volume group, usually a folder is created in /dev with the name of the volume group (so it’s /dev/system in your case, which I read does exist) and for each logical volume, a corresponding file pointing to the device mapper node is created in that folder. So when such a logical volume in VG “system” (let’s call it “lv_test”) contains a file system, you’d mount it at “/mnt” via the command “mount /dev/system/lv_test /mnt”.

What LVs are linked in the /dev/system folder after activating the VG?

from the normal system, when I run vgscan, it says …

what’s that “normal system” you’re writing about? I thought you couldn’t start the clone… is that “normal system” the “master system”, where the backup was taken? Why would your backup reference a root volume on LVM (which we confirmed a few days ago), but not know about the VG?

Regards,
Jens

Hi Stefan,

seems I got that wrong - you probably meant “I looked … and there was no folder.” If that’s the case, try de-activating the volume group and then reactivating it.

If the VG is already active, but for some reason the files weren’t created, “activating” does report the LVs as active, but doesn’t create the files…

Regards,
Jens

Hi Jens,

sorry for the late response.
How can I deactivate the VG?

[QUOTE=jmozdzen;18567]Hi Stefan,

seems I got that wrong - you probably meant “I looked … and there was no folder.” If that’s the case, try de-activating the volume group and then reactivating it.

If the VG is already active, but for some reason the files weren’t created, “activating” does report the LVs as active, but doesn’t create the files…

Regards,
Jens[/QUOTE]

Hi Stefan,

[QUOTE=stefan_1304;18585]Hi Jens,

sorry for the late response.
How can I deactivate the VG?[/QUOTE]

it’s as easy as activating it… “vgchange -an system” ("-a" → “should the vg be active?” with possible answers "y"es and "n"o)

Regards,
Jens

OK, I did vgchange -an system and then vgchange -ay system.
after that there is still no folder system in /dev

Hi Stefan,

[QUOTE=stefan_1304;18588]OK, I did vgchange -an system and then vgchange -ay system.
after that there is still no folder system in /dev[/QUOTE]

what’s the environment you’re testing in, currently? initrd? Recovery system? If recovery, booted from which DVD? Did vgscan find the VG? What does “vgdisplay system” report? …

Regards,
Jens

Stefan,

have you tried running it with “–debug” to get more verbose output?

Regards,
Jens

Hi Jens,

I try with the boot DVD in rescue mode.

vgscan finds volume group “system” and

vgdisplay system
displays output

and with --verbose, this is the output:

http://www.directupload.net/file/d/3503/qboenfzh_jpg.htm

Kind regards, Stefan

[QUOTE=jmozdzen;18590]Stefan,

have you tried running it with “–debug” to get more verbose output?

Regards,
Jens[/QUOTE]

Hi Stefan,

from the messages I assume that only the DM files are created - I suspect you will find them as /dev/mapper/system-root_lv and /dev/mapper/system-swap_lv. Creation of the /dev/system/ links may be dependent on udevd, which is probably not running in the SLES recovery environment. (I have yet to use that… somehow, grabbing an openSUSE USB stick was more convenient at my place of work :smiley: )
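If udevd really isn’t creating the /dev/system/ links, the LVs can be mounted via their /dev/mapper nodes directly. Device-mapper composes that node name as VG-LV, with any hyphen inside a VG or LV name doubled; the helper below is a hypothetical illustration of that naming rule (dm_node is not a real LVM command):

```shell
# Hypothetical helper: print the /dev/mapper node name device-mapper uses
# for a VG/LV pair - every "-" inside either name is doubled in the node.
dm_node() {
    local vg=${1//-/--} lv=${2//-/--}   # double any hyphens in the names
    printf '/dev/mapper/%s-%s\n' "$vg" "$lv"
}

dm_node system root_lv    # -> /dev/mapper/system-root_lv
dm_node my-vg data-lv     # -> /dev/mapper/my--vg-data--lv
```

So in your case, “mount /dev/mapper/system-root_lv /mnt” should work even without the /dev/system/ links.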

While you’re testing all this - one of my suggestions was to answer “n” to the “fallback question” during initrd - that should drop you to a minimalistic shell. If you’d be able to activate the VG in that environment, exiting that shell (via “exit”) ought to get you booting into the production system, or at least get you across the “root file system not found” hurdle :wink: Have you had a chance to try that?

A more general question: How different from the original server’s hardware is this new system? Is it basically the same, with just different IDs of the various hardware parts, or is it made of completely different components, i.e. different storage controllers, other disk types (4k instead of 512b blocks) or something along that line? The trouble you’re experiencing seems a bit unusual to me for “moving” a system from one server to another of the same build.

Regards,
Jens

Hi Jens,

I found the folders under /dev/mapper, like you supposed.

The source system is installed on a single HDD (it is only a test system, too), and the CPU and board are different too.
Also, I tried to restore the server with an extra RAID controller, on onboard RAID, and on a single HDD.

I tried with “n” and then “exit” and then I read:

[QUOTE=stefan_1304;18598]Hi Jens,

I found the folders under /dev/mapper, like you supposed.

The source system is installed on a single HDD (it is only a test system, too), and the CPU and board are different too.
Also, I tried to restore the server with an extra RAID controller, on onboard RAID, and on a single HDD.

I tried with “n” and then “exit” and then I read:[/QUOTE]

yes, now you’re in the “initrd” mini shell (“mini” in terms of accessible programs - close to none :[). Try activating the volume group… I have no comparable system at hand, so I don’t know if the “helper symlinks” are available. If they are, the command sequence would be

  • vgscan
  • if “system” could be found: “vgchange -an system; vgchange -ay system”
  • exit

If even “vgscan” is not found, you’ll have to use the “lvm shell”:

  • “lvm” (starts the program “lvm”, which has its own command line)
  • “vgscan”
  • “vgchange -ay system”
  • “exit” (to exit “lvm”)
  • “exit” (to exit the initrd shell and to continue booting)

If you’re able to activate the VG there, then the boot sequence should continue normally.
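The lvm-shell fallback above, sketched as one function. Two assumptions here: that the initrd’s mini shell supports functions and here-documents, and that the “lvm” binary accepts its subcommands on stdin the same way it does interactively - if not, just type the five commands from the list above by hand.

```shell
# Feed the lvm shell non-interactively (VG name "system" as in this thread);
# equivalent to typing vgscan / vgchange at the "lvm>" prompt and exiting.
activate_vg_via_lvm_shell() {
    lvm <<'EOF'
vgscan
vgchange -an system
vgchange -ay system
EOF
}
```

After calling it, “exit” from the initrd shell should let booting continue.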

Also, I tried to restore the server with an extra RAID controller, on onboard RAID, and on a single HDD.

It might be too much hassle to adapt a cloned image from a source machine that is too different - you’d be better off with a fresh install, unless you know what you’re doing and know where to adjust the image prior to booting.

Regards,
Jens

In the “initrd” mini shell, vgscan says “no volume groups found”.

I typed lvm
lvm> vgscan

Kind regards, Stefan

Hi Stefan,

[QUOTE=stefan_1304;18602]In the “initrd” mini shell, vgscan says “no volume groups found”.

I typed lvm
lvm> vgscan[/QUOTE]

then I assume that the (disk) hardware needs drivers (“modules”) not included in the current initrd image. Your best bet is to boot the rescue system, mount & chroot to the installed system and then rerun “mkinitrd”. You might want to have a look at http://technik.blogs.nde.ag/2014/01/05/linux-initrd-command-line which gives some details and describes some possible steps to take. Especially important (from the “mkinitrd” point of view, if run from the chroot environment) is properly mounting /sys and /proc via the “--bind” option, else mkinitrd will not have access to the required information.
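On SLES 11, the modules mkinitrd bakes into the initrd come from the INITRD_MODULES line in /etc/sysconfig/kernel, so from inside the chroot you’d add the new controller’s driver there before rerunning mkinitrd. The helper below is a hypothetical sketch, and “ahci” is just an example module name - use whatever driver your new storage controller actually needs:

```shell
# Hypothetical helper: prepend a driver module to the INITRD_MODULES= line
# of a sysconfig file, so the next mkinitrd run includes it in the initrd.
add_initrd_module() {
    local mod=$1 cfg=${2:-/etc/sysconfig/kernel}
    sed -i "s/^INITRD_MODULES=\"/INITRD_MODULES=\"$mod /" "$cfg"
}

# inside the chroot you would then run, e.g.:
#   add_initrd_module ahci
#   mkinitrd
```

Editing the line by hand in an editor works just as well, of course - the point is only that the driver must be listed before mkinitrd is rerun.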

Regards,
Jens