Hi Toddhunter,
[QUOTE=Todddhunter;21545]I have made some progress but am still having a problem.
Using 2 cloned disks I ran the following command from safe mode
cat /proc/mdstat
It showed /dev/md1 was missing /dev/sda2. I added it with
mdadm /dev/md1 -a /dev/sda2
I can now boot with the two cloned hds but gnome still crashes when I log on.
I am searching for info on repairing the filesystem but have not come up with an answer yet. How do I run a file system check on a Linux software Raid1 volume? Or do I run it on the individual hds?
Any help is appreciated.[/QUOTE]
Recovering a RAID1 set after a single disk has failed is rather simple - you replace the disk and add the new one to the RAID set. In your specific case it looks like you have multiple RAID sets, created from identical partitions on the two disks - nothing unusual either.
The standard procedure would have been
- replace the failed disk by a new one
- create the partitions as required - I assume those were identically-sized disks, so you can “clone” the partition table from the remaining disk, either manually by looking at the partition sizes or with a tool such as sfdisk (see the sketch after this list)
- add the newly created partitions to the RAID sets via mdadm - this will trigger the RAID rebuild
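As a minimal sketch of that procedure - assuming the surviving disk is /dev/sda, the replacement is /dev/sdb and your RAID sets are built from the matching partitions on each disk; adjust the device names to your actual setup before running anything:
[CODE]# copy the partition table from the surviving disk to the replacement disk
# (only safe when both disks are the same size)
sfdisk -d /dev/sda > partition-table.txt
sfdisk /dev/sdb < partition-table.txt

# then add each new partition to its RAID set - this triggers the rebuild
mdadm /dev/md0 --add /dev/sdb1
mdadm /dev/md1 --add /dev/sdb2[/CODE]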
In general, mdadm is your friend when handling your RAID sets. “mdadm --detail <device>” will show you, amongst other things, the devices that make up your RAID set (or whether devices are missing) and whether the RAID set is consistent or rebuilding:
[CODE]# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.0
  Creation Time : Sun Apr 4 15:51:07 2010
     Raid Level : raid1
     Array Size : 96376 (94.13 MiB 98.69 MB)
  Used Dev Size : 96376 (94.13 MiB 98.69 MB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Wed May 21 09:05:00 2014
          State : active
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : linux:0
           UUID : xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
         Events : 52

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1[/CODE]
“mdadm /dev/md0 --add <device>” will add a new device to a RAID set. If that device has never been part of the RAID set, it is added as a hot spare… and if the RAID set is degraded, md will immediately use this new hot spare to replace the failed device.
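A quick way to check whether the rebuild has finished - a rough sketch, assuming /dev/md1 as in your post:
[CODE]# shows resync progress; a healthy RAID1 ends up as [UU]
cat /proc/mdstat

# or ask mdadm directly for the state of the set
mdadm --detail /dev/md1[/CODE]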
I’m not so fond of adding a cloned disk: the RAID member partitions carry metadata (the md superblock), and if you simply clone them, the RAID software may have a hard time telling that this is supposed to be a newly added volume - you might get into trouble. It’s better to have it “fresh” if it’s to be treated as “fresh”…
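If you do want to reuse a cloned partition, a common way to make it look “fresh” again is to wipe the stale md metadata before adding it. A sketch, assuming /dev/sdb2 is the cloned partition and is NOT currently part of any active RAID set:
[CODE]# wipe the old md superblock so the partition is treated as a new device
mdadm --zero-superblock /dev/sdb2

# then add it to the RAID set as usual
mdadm /dev/md1 --add /dev/sdb2[/CODE]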
[QUOTE]How do I run a file system check on a Linux software Raid1 volume? Or do I run it on the individual hds?[/QUOTE]
As a rule of thumb, you’ll have to run the FS check on the same device from which you mount the FS. In your case, this is the RAID device (/dev/md1), not the underlying partitions. Never ever, no, no, no, work on the individual disk partitions… unless you know exactly what you’re doing and are able to handle the wrath of file system hell all by yourself.
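A rough sketch of what that could look like - assuming the file system on /dev/md1 is ext3/ext4 and that you run this from a rescue/safe-mode environment while it is NOT mounted; adjust the device name and fsck variant to your setup:
[CODE]# make sure the file system is not mounted
umount /dev/md1

# force a full check on the RAID device, not on /dev/sda2 or /dev/sdb2
fsck -f /dev/md1[/CODE]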
Regards,
Jens