sean finney on 25 May 2007 06:39:35 -0000
hi doug, first, to follow up on mark's comment:

On Friday 25 May 2007 06:40, Doug Crompton wrote:
> On Fri, 25 May 2007, Mark M. Hoffman wrote:
> > 1) Most onboard RAID controllers do most of the work in software anyway.
> >
> > 2) If your board dies, you'll have to replace it with one that has that
> > same controller. With software RAID, any board that can connect to the
> > disks can be used in a pinch.

i have to say i agree here. unless you're running a really high-performance
server (and maybe not even then, istr benchmarks putting md not too far from
some of the "hardware" implementations), i'd say software raid is a better
choice.

one more reason that mark didn't bring up is the tools and reliability of
software raid vs hardware raid. typically with hw raid you don't see the
physical devices, which means you can't do things like running smartd to
query their status... so you often don't know a disk is going bad until it
has already failed.

> Well, not sure what happens in linux, but in windows I have experience
> with this. My board did die and I needed to get info to a new system. All
> I did was attach one of the raid 1 mirrored drives to the new system. It
> saw it as a standalone drive and I was able to read from it fine.

i wouldn't rely on that feature working across all raid systems...

> > I've done software RAID w/ CentOS. I set it up like this:
> >
> > /dev/hda1 and /dev/hdc1 => /dev/md0 (raid1) => /boot
> >
> > /dev/hda2 => swap
> > /dev/hdc2 => swap
> >
> > /dev/hda3 and /dev/hdc3 => /dev/md1 (raid1) => /dev/VolGroup00 (LVM)
> >
> > The remaining partitions are allocated on the LVM volume.
>
> I was trying to avoid the complication of LVM, etc. I just want ext3 or
> equiv partitions and mirrored raid.

i'd suggest you give it a second shot. i was really late jumping onto the
LVM bandwagon, but it's quite nice. plus, the debian installer has built-in
support for it. otherwise, you'll need to create a raid device for each
partition (istr being able to sub-partition a raid device, but i never
actually did it).

the only gotcha is you can't have /boot (or /, if it's the same thing) on
lvm (i hear there's a google SoC project to add support for this to grub,
but...), so like mark suggested, you can set up one RAID device for /boot
(or /), and a second raid device to act as a physical volume for lvm.

	sean
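
a rough sketch of the visibility argument above, assuming a software raid1
array at /dev/md0 built from /dev/hda1 and /dev/hdc1 (device names are only
examples, not doug's actual layout):

    # state of the md array and its members
    cat /proc/mdstat
    mdadm --detail /dev/md0

    # the physical disks stay visible to the OS, so SMART health can be
    # checked directly -- most onboard "hardware" raid hides this
    smartctl -H /dev/hda
    smartctl -H /dev/hdc

    # or have smartd watch them; example /etc/smartd.conf entries:
    #   /dev/hda -a -m root
    #   /dev/hdc -a -m root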
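
and a minimal sketch of the /boot-on-raid, everything-else-on-lvm layout
quoted above, done by hand with mdadm and the lvm tools (the volume group
and logical volume names, and the sizes, are just examples):

    # raid1 for /boot, and a second raid1 to act as the lvm physical volume
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hda1 /dev/hdc1
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/hda3 /dev/hdc3

    mkfs.ext3 /dev/md0        # /boot goes straight onto the raid device

    # layer lvm on top of the second array and carve out / and /home
    pvcreate /dev/md1
    vgcreate VolGroup00 /dev/md1
    lvcreate -L 10G -n root VolGroup00
    lvcreate -L 50G -n home VolGroup00
    mkfs.ext3 /dev/VolGroup00/root
    mkfs.ext3 /dev/VolGroup00/home

the debian installer's partitioner will set all of this up for you, but the
underlying commands are handy to know when you later need to repair or grow
something.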