On 03/29/2014 02:04 AM, susercius wrote:[color=blue]
Hello to everybody from Italy!
The title I've used could be read as a stupid one. Ok, maybe it is,
but…
I've spent a lot of time searching the web for "what to do if software-RAIDed
disks (RAID1) have problems?" Meaning that I'm not sure about the
actions to perform for data recovery in case of a disk failure.
In detail, the configuration I would use is:
/dev/sda = 1 TB (or more). /dev/sdb same as sda.
/dev/sda1=12GB with all the stuff needed to work (kernel, boot and so
on). This will be the boot disk.
/dev/sda2 (1TB-12GB) as RAID1.
The same RAID configuration will be applied on sdb (1TB-12GB). The first
12GB on sdb1 could be left empty, or could be a mirrored image of sda1 made
via the "dd" command…
Then what do I do if sda1 (or sdb1) goes to hell?
Is all my data (/dev/md0) still available, or should I kill myself?
A friend suggested avoiding RAID and switching to LVM along with
a solid NAS to keep data safe.
ANY answer is really welcome.
Thanks to everybody, and forgive my poor English.[/color]
Usually software RAID1 is used where no reliable storage alternative is available.
It also usually means the drives are not hot-pluggable. The difficulty is
that replacing a failed drive in a software RAID then requires a power-down,
a swap, and a power-up.
A cheap NAS is a possible answer for "reliability"… but it's very slow. Most
supposedly-gigabit NAS units do maybe 200Mbit on a good day (fine if you don't
want more than 10-20MB/sec). There are good NAS alternatives out there, but they
will be pricey. Many of those cheap NAS solutions do offer some kind of RAID
(which is often software-style RAID behind the scenes) and the ability to hot
swap/replace failed drives.
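To see where those throughput numbers come from, here is a back-of-the-envelope check in Python: a gigabit link tops out at 125 MB/s in theory, a "200Mbit on a good day" NAS at 25 MB/s raw, and protocol overhead drags the real figure down into the 10-20MB/sec range mentioned above. The conversion itself is just dividing by 8.

```python
# Convert link speed in megabits/sec to megabytes/sec (8 bits per byte).
def mbit_to_mbyte(mbit_per_s):
    return mbit_per_s / 8

print(mbit_to_mbyte(1000))  # gigabit theoretical ceiling: 125.0 MB/s
print(mbit_to_mbyte(200))   # cheap NAS on a good day: 25.0 MB/s raw,
                            # before protocol overhead eats into it
```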
While Linux software RAID will accept pretty much any block device, including
individual partitions, you can probably see the difficulty. Even if you do
mirror root and boot, the host won't simply "work" if the primary drive fails;
you'll have to do a bit of work to make sure things boot OK off the other drive.
It actually helps to build a test box and see the work required for yourself.
So… for full system reliability, it helps if all the storage is RAID through and
through. But most people can live with just their crucial data on a RAID
subsystem or NAS; if the system fails, they just have to rebuild the base
system somehow and then remount their protected crucial data from some kind of
reliable storage.
So… software RAID is useful. And yes, your md0 on a RAID1 will continue to
work if you lose a drive (and you don’t have the root/boot problem mentioned
earlier).
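Why the md0 survives is easy to see if you model what RAID1 actually does: every write lands on both members, so after one member dies, reads are still served from the survivor. This is a toy Python sketch of that semantics, not mdadm itself; the class and method names are made up for illustration.

```python
# Toy model of RAID1 semantics: writes go to every live member,
# so data remains readable as long as one member survives.
class Raid1:
    def __init__(self):
        self.members = [dict(), dict()]  # two "drives": block number -> data
        self.alive = [True, True]

    def write(self, block, data):
        for i, member in enumerate(self.members):
            if self.alive[i]:
                member[block] = data

    def fail(self, i):
        self.alive[i] = False
        self.members[i].clear()          # whatever was on the dead drive is gone

    def read(self, block):
        for i, member in enumerate(self.members):
            if self.alive[i]:
                return member[block]
        raise IOError("array lost: both members failed")

md0 = Raid1()
md0.write(0, b"crucial data")
md0.fail(0)                              # lose the sda member
print(md0.read(0))                       # still served from the sdb member
```

The real-world equivalent is marking the dead disk failed/removed with mdadm and adding a replacement, after which the kernel resyncs the mirror; the root/boot caveat above is exactly the part this toy model does not cover.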
There are cheap RAID1 subsystems, even internal ones, that you can buy which
mirror at the device level rather than at the OS level (that is, they are
OS-agnostic). They usually come with a price tag, though, and may also have
other requirements that make them more difficult to install and support. But
I've used them (e.g. https://www.accordancesystems.com/)
Some options:
-
Internal RAID1 Subsystem
Pros: Mirrors whole drive at the device level. Doesn’t care about OS used.
Easy to use and easy to replace bad drives. Low overhead, fairly fast.
Cons: Pricey at $300 - $450USD
-
Software RAID1
Pros: Very cheap. Very fast (somewhat depends on RAID level and CPU used).
Cons: Somewhat hard to use. Can be very hard when partitions, boot and root are
involved.
-
HW RAID
Pros: Pretty fast
Cons: Pricey at $300 - $1000+. Firmware support is always limited, so it might
not work with Linux forever.
-
NAS
Pros: Pretty easy to use, generally doesn’t care too much about OS used.
Cons: Very pricey for good ones, moderately pricey for cheap ones. Most
beneficial for data areas, not the OS drive. Affordable units (<$500) are
usually very slow (10-20MB/sec).
-
SAN
Pros: Very fast. Very flexible and reliable.
Cons: Extremely expensive for good performance.
My favorite option for people on a budget who want very reliable RAID1 for all
their data is a RAID1 internal storage subsystem (noting that most OEM
desktops will have trouble housing some of these solutions).
I currently run HW RAID at home off a high-end Adaptec RAID controller. At
work we mostly use SAN (though it's just gigabit iSCSI). I have used (that is,
designed and deployed) everything from 8Gbit FC SAN and 10GbE NAS to the
mentioned Accordance internal subsystems to pure Linux software RAID.