What is NFS? Looks like I may have the same issue too. How did you resolve it? I am planning to use the L4 hash, which will give me a 50/50 split.
I can't afford the overhead, so I am not going with RAID 10, and I can't afford to mirror either, since I need to conserve as much space as possible. However, I like your RAID 6 idea. I may take a write penalty due to the dual parity, but since I have 12 drives I think this may be ideal.
I am hoping to set up an SNMP alarm for when a drive fails... that's a very good idea.
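For reference, a layer-3+4 transmit hash is usually set through the bonding driver options. A minimal sketch for a RHEL/CentOS-era setup follows; the interface name, file location, and two-NIC assumption are mine, so check the kernel bonding documentation for your distribution:

```
# /etc/modprobe.conf (sketch; assumes bond0 with two slave NICs
# and an 802.3ad-capable switch)
alias bond0 bonding
options bond0 mode=802.3ad miimon=100 xmit_hash_policy=layer3+4
```

With two links and a layer-3+4 hash, flows are distributed by IP/port, which is what gives the roughly 50/50 split across the pair.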
On Thu, May 22, 2008 at 2:13 PM, Erek Dyskant <firstname.lastname@example.org> wrote:
> Actually, I have considered using GFS (the Red Hat product). What do
> you think of it? Is it easy to manage? I actually bought an HP product
> which is pretty fast. For the RAID, I am a little concerned.

In comparison to NFS, GFS is not amazingly easy to manage, but the
documentation on it is excellent, and if you follow the steps it will
work. The advantage is that you don't run into the NFS quirks, and you
don't have the NFS protocol overhead. Plus, if you're running a
hardware network RAID array, the NFS head server(s) aren't a failure
point.

The gotcha is that the cluster needs to be able to fence out a
misbehaving node, so all the nodes need to be plugged into an IP-based
power strip (Red Hat has a list of supported ones somewhere on their
site; it applies to CentOS as well).
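A fence device in Red Hat's cluster.conf looks roughly like the sketch below. The node name, strip address, port, and credentials are placeholders, and `fence_apc` assumes an APC-style power strip; consult the Cluster Suite docs for your exact agent:

```xml
<!-- sketch: fencing stanzas from /etc/cluster/cluster.conf -->
<clusternode name="node1" nodeid="1">
  <fence>
    <method name="power">
      <device name="apc1" port="1"/>
    </method>
  </fence>
</clusternode>

<fencedevices>
  <fencedevice agent="fence_apc" name="apc1"
               ipaddr="192.168.1.50" login="apc" passwd="secret"/>
</fencedevices>
```

When a node stops responding, the cluster manager calls the fence agent to power-cycle that node's outlet before any other node touches its locks.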
> I am planning to implement RAID 5 for 12 drives of 750GB. The problem
> is, if there is a bad drive it would take a very long time to get
> the RAID group back to "stable". In other words, it would be in degraded
> mode even though we are in the process of replacing the drive and the
> array is rebuilding itself. Any thoughts on an optimal solution? I
> was thinking of creating 3 separate RAID 5 volumes at the HW level (if
> that's possible with my HP product).

Modern HP controllers can do RAID 6, which survives any two simultaneous
drive failures. A standard setup for HP hardware is 11 drives in RAID 6
plus one hot spare; once a rebuild onto the spare completes, it takes a
third failure before data loss occurs, and you get a total of n-3 drives
usable. You'll have to talk to HP about how substantial the performance
degradation is during a rebuild from a single drive failure.
Also, if you can afford the overhead, consider RAID 10, which should
generally give you better performance (but the usable space is n/2).
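To put numbers on the space trade-offs for 12 x 750GB drives, a quick back-of-the-envelope calculation (raw capacities only, ignoring filesystem and controller overhead):

```python
drives, size_gb = 12, 750

raid5_single = (drives - 1) * size_gb        # one 12-drive RAID 5: n-1 usable
raid6_spare = (drives - 3) * size_gb         # 11-drive RAID 6 + hot spare: n-3 usable
raid10 = (drives // 2) * size_gb             # RAID 10: n/2 usable
raid5_x3 = 3 * (drives // 3 - 1) * size_gb   # three 4-drive RAID 5s: n-3 usable

print(raid5_single, raid6_spare, raid10, raid5_x3)
# 8250 6750 4500 6750
```

Note that RAID 6 plus a hot spare and three separate RAID 5 volumes give the same usable space here, but very different failure characteristics.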
The approach you mentioned, three RAID 5 volumes, is also perfectly
viable. Check your array's datasheet/manual, but any decent controller
should be able to handle three separate arrays. Also be sure to have
either a hot spare or a cold spare available, and set up some sort of
alerting system to let you know when there's a failure.
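A minimal cron-able sketch of such an alert, assuming HP's hpacucli CLI; the command path and the "fail" wording in its output are assumptions, so substitute whatever status tool your controller ships with:

```shell
#!/bin/sh
# Sketch: poll the array controller and mail root on a reported failure.

check_status() {
    # Return success (0) if the status text mentions a failure.
    printf '%s\n' "$1" | grep -qi 'fail'
}

# hpacucli and its output format are assumptions -- see your controller docs.
status="$(hpacucli ctrl all show status 2>/dev/null || true)"

if check_status "$status"; then
    printf '%s\n' "$status" | mail -s "RAID failure on $(hostname)" root
fi
```

Dropping this into /etc/cron.hourly (or wiring the same check into an SNMP trap handler) covers the "alarm on drive failure" case mentioned above.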
Philadelphia Linux Users Group -- http://www.phillylinux.org
Announcements - http://lists.phillylinux.org/mailman/listinfo/plug-announce
General Discussion -- http://lists.phillylinux.org/mailman/listinfo/plug