Doug Crompton on 25 May 2007 13:00:24 -0000



Re: [PLUG] Linux (Debian) and raid


Now that I read all this I remember why I never implemented RAID before. I
go with simplicity, and this just adds another level of complication. I
like Linux and I like its capabilities, BUT I am not fanatical about it
like many are. I use Windows just as much, maybe more. Linux is in
the background here, although it has a big place. I just don't interact
with it on a day-to-day basis as much as I do with Windows.

For all that makes Linux the powerful OS it is, it also makes it an OS
that will never be a day-to-day staple for the masses. The file system is
a big part of both the benefit and the problem of Linux. I mean, who wants
to deal with multiple partitions, different filesystems, and on top of
that a layer of RAID? For some it is a challenge; at times I find it very
intimidating.

I have had just about every version of Windows over the years and have
never used it with more than ONE partition. That has always worked fine
for me. It is easy to back up and easy to recognize. You know what it is:
the whole drive is one OS. Yes, the argument could be made that it is
easier, and more fatal, to screw up one partition than many, but most
failure modes I have experienced are catastrophic across the whole drive
anyway.

I guess I have to examine my reasons for RAID. In Windows, RAID 1 (at least
with the Intel motherboards) works the way I would like it to. It keeps two
identical copies on drives that can function as standalone drives. I could
take either drive from my RAID array, throw it in a virgin, non-RAID
system, and it would boot Windows. I want that same capability in Linux:
just simple, write-the-same-thing-to-both-drives redundancy. I don't think
that is possible, and the complexity of RAID under Linux, especially SW
RAID, appears to me to add another failure mode, or at least complexity
that would have to be dealt with in a failure.
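
(If Linux can approximate this at all, I gather it is with md's old-style
0.90 metadata, which keeps its superblock at the END of each partition, so
either half of the mirror still looks like a plain filesystem on its own.
A rough sketch of what I understand the setup would be; untested here, and
/dev/sda1 and /dev/sdb1 are assumed device names:)

  # create a two-disk mirror with the md superblock at the end of each
  # member (0.90 metadata), so either half reads as a plain filesystem
  mdadm --create /dev/md0 --level=1 --raid-devices=2 --metadata=0.90 \
      /dev/sda1 /dev/sdb1
  mkfs.ext3 /dev/md0

  # in a pinch, a single member should mount by itself (read-only to be safe)
  mount -o ro /dev/sda1 /mnt/rescue

(Booting a lone member standalone would presumably still require the
bootloader to have been installed on both disks.)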

Maybe automated backups would be a better choice! Certainly RAID does not,
or should not, preclude backups anyhow.
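
(Something like a nightly rsync to a second drive would probably cover me.
A sketch of the sort of crontab entry I have in mind; /mnt/backup is an
assumed mount point for the backup disk:)

  # 3 AM mirror of the root filesystem onto a second disk: archive mode,
  # preserve hard links, stay on one filesystem, prune deleted files
  0 3 * * * rsync -aHx --delete / /mnt/backup/ >> /var/log/backup.log 2>&1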

Doug

On Fri, 25 May 2007, sean finney wrote:

> hi doug,
>
> first, to follow up on mark's comment:
>
> On Friday 25 May 2007 06:40, Doug Crompton wrote:
> > On Fri, 25 May 2007, Mark M. Hoffman wrote:
> > > 1) Most onboard RAID controllers do most of the work in software anyway.
> > >
> > > 2) If your board dies, you'll have to replace it with one that has that
> > > same controller.  With software RAID, any board that can connect to the
> > > disks can be used in a pinch.
>
> i have to say i agree here.  unless you're running a really high-performance
> server (and maybe not even then, istr benchmarks putting md not too far from
> some of the "hardware" implementations), i'd say software raid is a better
> choice.  one more reason that mark didn't bring up is the tools and
> reliability of software raid vs hardware raid.  typically with hw raid you
> don't see the physical devices, which means you can't do things like running
> smartd to query their status... so you only know about a failed disk after it
> has failed.
>
> > Well not sure what happens in linux but in windows I have experience with
> this. My board did die and I needed to get info to a new system. All I did
> > was attach one of the raid 1 mirrored drives to the new system. It saw it
> > as a standalone drive and I was able to read from it fine.
>
> i wouldn't rely on that feature to work across all raid systems...
>
> > > I've done software RAID w/ CentOS.  I set it up like this:
> > >
> > > /dev/hda1 and /dev/hdc1 => /dev/md0 (raid1) => /boot
> > >
> > > /dev/hda2 => swap
> > > /dev/hdc2 => swap
> > >
> > > /dev/hda3 and /dev/hdc3 => /dev/md1 (raid1) => /dev/VolGroup00 (LVM)
> > >
> > > The remaining partitions are allocated on the LVM volume.
> >
> > I was trying to avoid the complication of LVM, etc. I just want ext3 or
> > equiv partitions and mirrored raid.
>
> i'd suggest you give it a second shot.  i was really late jumping onto the LVM
> bandwagon, but it's quite nice.  plus, the debian installer has built-in
> support for it.   otherwise, you'll need to create a raid device for each
> partition (istr being able to sub-partition a raid device, but never actually
> did it).
>
> the only gotcha is you can't have /boot (or /, if it's the same thing) on lvm
> (i hear there's a google SoC to add support for this into grub, but...) so
> like mark suggested, you can set up one RAID device for /, and a second raid
> device to act as a physical volume for lvm.
>
>
> 	sean
>
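
(Spelling out Mark's layout above for my own reference, this is roughly
the command sequence I understand it implies. Untested here; the logical
volume name and size are my own placeholders:)

  # small mirrored partition for /boot
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hda1 /dev/hdc1
  mkfs.ext3 /dev/md0

  # swap on each disk, deliberately not mirrored
  mkswap /dev/hda2
  mkswap /dev/hdc2

  # large mirror handed to LVM as a physical volume
  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/hda3 /dev/hdc3
  pvcreate /dev/md1
  vgcreate VolGroup00 /dev/md1

  # carve filesystems out of the mirror; name and size are examples only
  lvcreate -L 10G -n root VolGroup00
  mkfs.ext3 /dev/VolGroup00/root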


"Those that sacrifice essential liberty to obtain a little temporary safety
 deserve neither liberty nor safety."  -- Ben Franklin (1759)

****************************
*  Doug Crompton	   *
*  Richboro, PA 18954	   *
*  215-431-6307		   *
*		  	   *
* doug@crompton.com        *
* http://www.crompton.com  *
****************************


___________________________________________________________________________
Philadelphia Linux Users Group         --        http://www.phillylinux.org
Announcements - http://lists.phillylinux.org/mailman/listinfo/plug-announce
General Discussion  --   http://lists.phillylinux.org/mailman/listinfo/plug