Erek Dyskant on 22 May 2008 11:13:42 -0700
> Actually, I have considered using GFS (the Red Hat product). What do
> you think of it? Is it easy to manage? I actually bought an HP product
> which is pretty fast. For the RAID, I am a little concerned.

Compared to NFS, GFS is not amazingly easy to manage, but the documentation on it is excellent, and if you follow the steps it will work. The advantage is that you don't run into the NFS quirks, and you don't have the NFS protocol overhead. Plus, if you're running a hardware network RAID array, the NFS head server(s) aren't a failure point. The gotcha is that the cluster needs to be able to fence out a misbehaving node, so all the nodes need to be plugged into an IP-based power strip (Red Hat has a list of supported ones somewhere on their site; it applies to CentOS as well).

> I am planning to implement RAID 5 for 12 drives of 750 GB. The problem
> is, if there is a bad drive it would take a very long time to bring
> the RAID group back to "stable". In other words, it would be in degraded
> mode even though we are in the process of replacing the drive and the
> array is rebuilding itself. Any thoughts on an optimal solution? I
> was thinking of creating 3 separate RAID 5 volumes at the HW level (if
> that's possible with my HP product).

Modern HP hardware can do RAID 6, which survives two simultaneous drive failures; data loss only occurs if a third drive fails before a rebuild completes. A standard setup on HP hardware is 11 drives in RAID 6 plus one hot spare, giving you a total of n-3 drives of usable storage. You'll have to talk to HP about how substantial the performance degradation is during rebuilding from a single drive failure.

Also, if you can afford the overhead, consider RAID 10, which should generally give you better performance (but the usable space is n/2).

The approach you mentioned, with three RAID 5 volumes, is also a perfectly viable one. Check your array's datasheet/manual, but any decent controller should be able to handle three separate arrays.
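To make the fencing requirement concrete, here is a rough sketch of the relevant pieces of /etc/cluster/cluster.conf for an APC power switch. All names, addresses, and credentials are made-up placeholders, and the exact schema should be checked against Red Hat's cluster documentation for your release:

```xml
<?xml version="1.0"?>
<!-- Illustrative fragment only; hostnames, IPs, and credentials are made up. -->
<cluster name="gfs-cluster" config_version="1">
  <clusternodes>
    <clusternode name="node1.example.com" nodeid="1">
      <fence>
        <method name="1">
          <!-- port = the outlet number on the power switch feeding this node -->
          <device name="apc-switch" port="1"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <fencedevices>
    <fencedevice agent="fence_apc" name="apc-switch"
                 ipaddr="192.168.1.50" login="apc" passwd="apc"/>
  </fencedevices>
</cluster>
```

Each node gets a fence method pointing at the outlet that powers it, so the cluster can power-cycle a misbehaving node via the switch.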
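As a quick sanity check on the capacity trade-offs, here is a short Python sketch of the usable space for 12 x 750 GB drives under each layout discussed (the three-way RAID 5 split assumes three 4-drive sets):

```python
DRIVES, SIZE_GB = 12, 750

# RAID 6 (2 parity drives) plus one hot spare: n - 3 drives usable.
raid6_spare = (DRIVES - 3) * SIZE_GB

# RAID 10: mirrored pairs, so half the raw capacity is usable.
raid10 = (DRIVES // 2) * SIZE_GB

# Three separate 4-drive RAID 5 sets: each set loses one drive to parity.
raid5_x3 = 3 * (4 - 1) * SIZE_GB

print(raid6_spare, raid10, raid5_x3)  # 6750 4500 6750
```

Note that three RAID 5 sets and RAID 6 plus a hot spare come out to the same usable capacity here, so the choice comes down to rebuild behavior and how many failures each layout can absorb.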
Also be sure to have either a hot spare or a cold spare available, and set up some sort of alarming system to let you know if there's a failure.

___________________________________________________________________________
Philadelphia Linux Users Group -- http://www.phillylinux.org
Announcements - http://lists.phillylinux.org/mailman/listinfo/plug-announce
General Discussion -- http://lists.phillylinux.org/mailman/listinfo/plug