Boot problem after Restore

Hello,

I made a backup of my SLES 2011 via DriveSnapshot and wanted to restore it.
The restore worked quite fine, but when I want to boot, I get the following errors:

Then I type in the root password and it says

I have to say that I am a beginner working with SLES 2011. The restore was onto different hardware.
Do you have any ideas to help me? Thank you very much.

On 23/12/2013 10:44, stefan 1304 wrote:
[QUOTE]
I made a backup of my SLES 2011 via DriveSnapshot and wanted to restore it.
The restore worked quite fine, but when I want to boot, I get the following errors:

[CODE]
fsck failed for at least one filesystem (not /).
Please repair manually and reboot.
The root file system is already mounted read-write.
Attention: only Control-D will reboot the system in this
maintenance mode. shutdown or reboot will not work.
Give root password for maintenance
[/CODE]

Then I type in the root password and it says

[CODE]
(repair filesystem) #
[/CODE]

I have to say that I am a beginner working with SLES 2011. The restore was onto different hardware.
Do you have any ideas to help me? Thank you very much.
[/QUOTE]

Since there is no SLES version 2011, what do you mean by “SLES 2011”? Perhaps you mean SLES 11? Please post the output from “cat /etc/*release” so we can see which version and Service Pack (if any) you’re using.

HTH.

Simon
SUSE Knowledge Partner



Hello,

at the moment I am doing the restore again. I think I made something wrong when I tried to use the system repair option with the install DVD.
Yes, it is Version 11, sorry.

I will give you the output on Friday. Thank you so far.

Hi Stefan(_1304),

If you still run into that message, check the lines above where the output from fsck is listed, to identify which file system(s) are affected. To manually repair those, run "fsck -f <device>" and, if I may recommend so, run it again until fsck reports it has nothing left to fix. Also, I’d run that on any other file system at least once.
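
For example, if the messages named /dev/sda6 (just a placeholder device here - use whatever fsck actually complained about), a minimal sketch would be:

[CODE]
# force a check even if the file system is flagged clean
fsck -f /dev/sda6
# run it once more; repeat until fsck reports nothing left to fix
fsck -f /dev/sda6
[/CODE]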

Was the image created on a running system? Then chances are high you won’t have too much trouble after repairing. If the image was taken from a shut-down system, it’d be interesting why there are such problems.

Regards,
Jens

Hi Jens,

It was an offline image via DriveSnapshot.
http://www.drivesnapshot.de/

When I ran fsck -f several times, I got errors like this:

I restored a correctly running system and do not know why there are errors. The only change is the different hardware.

When I try df -ah, I only get /dev/mapper/system-root_lv.

When I try fdisk -l, I can see all the partitions correctly…

Hi Stefan,

you seem to be running your system with logical volume management (LVM) - if you look at the /etc/fstab entries, are the partitions in question really mounted via plain partitions, or via LVM devices (/dev/mapper/system-*)?

If in doubt, please post the entries from /etc/fstab here. Unless these are specified via e.g. “LABEL=somelabel” in column one, you should run fsck against the device entry listed in that first column.
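
To illustrate (the device name and mount point below are made up, not taken from your system):

[CODE]
# hypothetical /etc/fstab entry - column one is the device carrying the file system
/dev/mapper/system-data_lv  /data  ext3  defaults  1 2
[/CODE]

and the matching manual check would then be run against that first column:

[CODE]
fsck -f /dev/mapper/system-data_lv
[/CODE]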

If you’re totally confused by all this, it’d be helpful to also paste (in [CODE] blocks) the output of

  • fdisk -l
  • vgscan -v
  • vgdisplay system (and for any other VG found via vgscan)
  • ls -l /dev/mapper
  • cat /etc/fstab

plus the info which file system you’re trying to repair.
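
If typing all of that is tedious, something like this (just a sketch - the output file name is arbitrary) collects everything into one file you can copy from:

[CODE]
# gather the requested diagnostics into a single file
{
  fdisk -l
  vgscan -v
  vgdisplay system
  ls -l /dev/mapper
  cat /etc/fstab
} > /tmp/diag.txt 2>&1
[/CODE]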

Regards,
Jens

Thanks. I will answer everything on Friday.
Merry Xmas :)

Hi Stefan,

[QUOTE=stefan_1304;18239]Thanks. I will answer everything on Friday.
Merry Xmas :)[/QUOTE]

happy holidays to you (and anyone else reading this thread, too)! I may be online only a few moments until early January, so please don’t feel forgotten if I don’t respond immediately :)

Regards,
Jens

So hello again - I hope this information helps you.

Because it would be a lot to type in, I hope photos are quite OK too. Thank you.

ls -l /dev/mapper
http://s14.directupload.net/file/d/3484/gdtql9t9_jpg.htm

cat /etc/fstab
http://s14.directupload.net/file/d/3484/6wtgpw36_jpg.htm

and when I try fsck:

http://s1.directupload.net/file/d/3484/yhtmvljd_jpg.htm

I tried every task you wrote on the server from which I made the backup, and the output is the same as on the server I restored.

The live system is running, the restored server is not. Hope you can help. Thank you very much.

Hi Stefan,

the trouble is that your partitions are referenced by disk ID in fstab - as the new server has a different physical disk, its disk ID is different, too.

Use an editor of your choice (e.g. “vi”) to change the “/dev/disk/by-id/scsi-3600…-partX” device entries in /etc/fstab of the restored system to “/dev/sdaX” (e.g. to “/dev/sda5 /hana/shared ext3 acl,user…”). Of course, you may as well change the entry to the ID of the current SCSI disk (see “ls -l /dev/disk/by-id” on the new server).
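
Roughly like this - the ID is left shortened and the options column is only an example, so keep whatever your fstab really contains in the remaining columns:

[CODE]
# before: device referenced via the old disk's ID
/dev/disk/by-id/scsi-3600...-part5  /hana/shared  ext3  acl,user_xattr  1 2

# after: same line, device referenced via the kernel name
/dev/sda5                           /hana/shared  ext3  acl,user_xattr  1 2
[/CODE]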

You may want to manually run “fsck -f /dev/sda1” (and likewise for partitions 5, 6, 7, 8 and 9), until no more errors are reported, before rebooting the server.
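
A minimal sketch, assuming it really is partitions 1, 5, 6, 7, 8 and 9 that carry the file systems listed in your fstab:

[CODE]
# check each file system; re-run a partition until fsck reports it clean
for part in 1 5 6 7 8 9; do
    fsck -f /dev/sda${part}
done
[/CODE]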

Regards,
Jens

Hi Jens,

thanks for your excellent answer. That sounds fine.
The disks of the new server are different ones.

If I understand it right, it is enough if the fstab says where every partition is mounted?
For example:
/dev/sda1
/dev/sda2
/dev/sda3
/dev/sda4
/dev/sda5 /hana/shared ext3
/dev/sda6 …
/dev/sda7 …
/dev/sda8 …
/dev/sda9

I can delete the information with the SCSI disk ID and so on from the fstab because I do not need it there? Is that correct? Thank you very much.

Kind regards,
Stefan

Or I could do ls -l /dev/disk/by-id, look there for the correct SCSI entries, and put them into fstab instead of the wrong ones? Would this way be faster?

Hi Stefan,

[QUOTE=stefan_1304;18284]Or I could do ls -l /dev/disk/by-id, look there for the correct SCSI entries, and put them into fstab instead of the wrong ones? Would this way be faster?[/QUOTE]
unlike in your other response, here you state correctly that you need to replace the entries with the correct values.

Your disk device(s) (and partitions, too) can be referenced by different names in Linux. For example, for decades the typical name would have been /dev/sda for your first SCSI disk (/dev/sda1 being the first partition on that disk), but what if you have more than one disk? They’d be sda, sdb and so on, but in which order? Therefore, additional names were added, like “use disk ID”, “use device location on system bus”, “use file system label” and so on.
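
As an illustration, one and the same partition could be referenced in /etc/fstab by any of these forms (the names below are made up):

[CODE]
# kernel device name:
/dev/sda1
# disk ID plus partition number:
/dev/disk/by-id/scsi-3600...-part1
# location on the system bus:
/dev/disk/by-path/pci-0000:00:10.0-...-part1
# file system label:
LABEL=BOOT
[/CODE]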

In your case, the partitions to mount are referenced by device ID plus partition number. As the device ID has changed, you’ll either need to adjust that to the correct new value or use some other reference (hence my earlier suggestion to use /dev/sda* - as your server currently has only a single disk, no question of who’s who will come up… and it’s easier to type for you :D )

You must not delete the entries in /etc/fstab, but need to correct them. These entries tell the system which file systems on which devices (partitions) to mount.
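
For reference, a single (made-up) entry and what its columns mean:

[CODE]
# <device>    <mount point>   <type>  <options>   <dump>  <fsck pass>
/dev/sda5     /hana/shared    ext3    defaults    1       2
[/CODE]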

Regards,
Jens

Hi Jens,

I have just tested the changes in fstab and it seems to be good.
I changed the SCSI ID for every entry. When I run df -ah, I see everything correctly.

Another problem is that I do not have any GUI after booting. I tried startx and it does not work.

When I try it, it says:

Is it the wrong command, or are there other commands to get the GUI?

So far thank you so much for solving the first error.

Kind regards,
Stefan

GUI is now working too.

I tried with these commands:

Hi Stefan,

sounds like you got this up&running yourself - it most probably had to do with a change in video hardware. If any problems remain, please open a new thread for those, to keep threads focused on a single problem :)

If your machine is up&running now, then: Hey! You made it a problem of 2013 - 2014 may now come!

A happy new year to you all,

Jens

Sorry that I am writing again. I tried to restore the server once again on the same hardware, but this time it does not work.

I do not have the screen with

When I start normally, I get:

It does not matter whether I choose yes or no; the screen comes again on the next boot.

I tried to do an automatic repair with the installation DVD, with no success.
When I boot with the installation DVD and choose rescue mode, I can log on as user root.

rescue login:
root

When I execute fdisk -l, I can see every partition.

The file /etc/fstab is empty, and when I fill it with the correct partitions, the system does not seem to remember it.

And when I look in /dev, there is no folder called system. Is there any way to avoid problems like this when I restore SLES 11 on other hardware?

Can anyone help? Thank you.