Matthew Rosewarne on 27 May 2007 22:02:07 -0000



Re: [PLUG] Linux (Debian) and raid


On Sunday 27 May 2007 16:38, Aaron Mulder wrote:
> I think there's an interesting point here.  If you have 2 drives
> /dev/hda and /dev/hdc in a software RAID 1 array, is there a way to
> install Grub such that if /dev/hda dies then the machine can boot off
> only /dev/hdc?  I would assume that "normally" Grub would only be
> installed to /dev/hda and would look for an initrd in /dev/hda
> somewhere, such that if /dev/hda failed then you wouldn't be able to
> boot off /dev/hdc -- at least until you connect it to the other drive
> cable, and/or until you reinstalled Grub.

Well, all that would need to be done is run "grub-install" on the other drive.  
At boot time, the line "root (hd0,0)" would need to be altered to point at the 
other drive, typically "root (hd1,0)" with two disks (although if the first 
drive is missing entirely, the BIOS will usually present the survivor as the 
first disk, hd0).  I don't know of a way to make this automatic, which I 
suppose is a limitation of GRUB.  GRUB 2 will probably fix it, but there might 
be a workaround for now (see below).
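For example (device names are assumed here; adjust them to match your setup), 
installing GRUB onto the second drive's MBR would look something like:

```shell
# Simplest: install GRUB to the MBR of the second drive.
# device.map typically maps hd0 -> /dev/hda, hd1 -> /dev/hdc.
grub-install /dev/hdc

# Or, from the grub shell, tell GRUB to treat /dev/hdc as the
# first BIOS disk (which is what it becomes if /dev/hda is gone)
# and set up its stage files from the mirrored /boot:
grub
grub> device (hd0) /dev/hdc
grub> root (hd0,0)
grub> setup (hd0)
grub> quit
```

The "device (hd0)" trick is the important part: it makes the boot sector on 
/dev/hdc reference itself as the first disk, so the machine can boot from it 
alone.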

> Also, in the configurations that use separate (non-mirrored) swap
> partitions on both drives, I would assume that the boot sequence would
> bail if one of the swap partitions listed in /etc/fstab was not
> present, and drop you into single-user mode.

If a swap device fails, I doubt it would drop into single user mode.  I 
believe that would only happen if the boot sequence failed to find a mount 
that was required to boot, like "/".  It would probably just continue with a 
warning.
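For reference, an /etc/fstab with separate (non-mirrored) swap partitions on 
each drive might look like this (device names are illustrative):

```
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/md0        /             ext3   defaults  0      1
/dev/hda2       none          swap   sw        0      0
/dev/hdc2       none          swap   sw        0      0
```

The swap lines have a fsck pass number of 0, so the boot sequence never blocks 
on checking them; a failed "swapon" on one of them should just print a warning.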

> Does someone have a procedure that will make it so that if you have
> mirrored drives like in this example, and the first one dies or is
> removed, you can just power up with the second drive only and *no*
> changes and the OS will boot fully (though granted, probably griping
> that the RAID set is broken)?  Or maybe that would never be possible
> because maybe if the RAID set it broken it won't let you boot into a
> regular read/write mode?

Well, when a device in a RAID array doesn't look right, the RAID driver will 
try to reconstruct it using the known good data from another drive.  If it 
can't do that, it will disable the device and issue a warning.  The problem 
with booting comes from the fact that the boot device has to be specified 
explicitly in "/boot/grub/menu.lst", but the mirrored "/boot" partition 
prohibits having a different boot device specified on each drive.  One could 
get around this by not using RAID 1 for the "/boot" partitions, and instead 
try to maintain the boot files on each drive manually, but it's so much 
easier just to use GRUB to change the disk at the boot menu.  
Perhaps "altoptions" in menu.lst could be used to add an alternate entry that 
tells GRUB to boot off of the other drive; that would solve the problem in the 
cleanest possible fashion.  I can give advice on that if needed.
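As a sketch (untested, and the partition numbers are assumed), GRUB legacy's 
"fallback" directive combined with a duplicated entry in menu.lst could get 
close to this:

```
default  0
fallback 1
timeout  5

title Debian GNU/Linux (first disk)
root   (hd0,0)
kernel /vmlinuz root=/dev/md0 ro
initrd /initrd.img

title Debian GNU/Linux (second disk)
root   (hd1,0)
kernel /vmlinuz root=/dev/md0 ro
initrd /initrd.img
```

Since "/boot" is mirrored, both entries appear identically on both drives; 
"fallback 1" tells GRUB to try the second entry automatically if booting the 
first one fails.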

> It does seem that using onboard RAID would avoid a lot of these issues
> -- just with the caveat that you're protecting against hard drive
> failure, not against motherboard/CPU/power supply/etc. failure.  (You
> know, when 5 years later the interfaces for all of these have changed
> and the local stores only carry the latest and greatest...)

Well, like I said in my other message, RAID 1 just provides fault tolerance 
when one disk dies; it is not a backup.  If so desired, one should be able to 
use the onboard fake RAID with a driver, but setting up Linux software RAID is 
not nearly as difficult as some have made it out to be, and it is almost 
certainly worth it.
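For what it's worth, creating a RAID 1 array with mdadm is only a couple of 
commands (device names are assumed; these would destroy data on the named 
partitions, so don't run them blindly):

```shell
# Create a two-disk RAID 1 array from matching partitions:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hda1 /dev/hdc1

# Watch the initial mirror sync:
cat /proc/mdstat

# Put an ext3 filesystem on the array, then record the array
# in mdadm.conf so it is assembled at boot:
mke2fs -j /dev/md0
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
```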


___________________________________________________________________________
Philadelphia Linux Users Group         --        http://www.phillylinux.org
Announcements - http://lists.phillylinux.org/mailman/listinfo/plug-announce
General Discussion  --   http://lists.phillylinux.org/mailman/listinfo/plug