|Randy Schmidt on 22 Sep 2006 16:04:31 -0000|
I'm using the HighPoint 1640. The individual drives do show up when I first boot, before loading the module. I have gone back and forth on whether to use software RAID or hardware RAID. I started setting up software RAID first but ultimately settled on doing it in hardware. I like that the card will tell me when there is a problem with one of the drives.
When I tried software RAID, I used mdadm and went looking for tutorials/guides. I followed the steps and everything worked until I rebooted. I also couldn't find good information on the full lifecycle of a software RAID: what happens when a drive fails? How do I make the array come back up when the system is rebooted?
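For what it's worth, the lifecycle questions above map onto a handful of mdadm commands. This is a hedged sketch, not a tested recipe: the device names (/dev/md0, /dev/sd[b-e]1) are illustrative, and the config path is the Debian/Ubuntu one.

```shell
# Create a 4-disk RAID5 (device names are assumptions, adjust to taste)
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[b-e]1

# Persist the array so it assembles at boot; on Debian/Ubuntu the
# config lives in /etc/mdadm/mdadm.conf and the initramfs must be
# rebuilt so the array comes up before the root filesystem mounts.
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u

# When a drive fails: mark it failed, remove it, swap the hardware,
# then add the replacement and let the array rebuild.
mdadm /dev/md0 --fail /dev/sdc1
mdadm /dev/md0 --remove /dev/sdc1
# ...physically replace the disk...
mdadm /dev/md0 --add /dev/sdc1

# Watch rebuild progress
cat /proc/mdstat
```

mdadm can also mail you on failure (`mdadm --monitor`), which covers the "the hardware will tell me" point for software RAID too.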
Does anybody have words of wisdom on the pros/cons of software vs. hardware RAID?
On 9/22/06, Will Dyson <firstname.lastname@example.org> wrote:
On 9/20/06, Randy Schmidt <email@example.com> wrote:
> Hi all:
>
> I am currently putting together a RAID5 with 4 300 gig disks. I
> installed everything, created the RAID in the RAID card BIOS, zeroed
> it out, compiled the drivers in Ubuntu 6.06, inserted the module, and
> created a partition that was the whole drive (900 gigs). Here are some
> issues I am having:
>
> 1. I go to create the filesystem with "sudo mkfs.ext2 -j /dev/sdb1"
> and it starts to write the inode tables like normal, but it takes
> forever! It got about 1/6 of the way through in ~24 hours. I didn't
> think this was normal since it takes approximately 5 minutes for a 300
> gig drive. The machine it is on is a P4 2.4 GHz processor with 1 GB
> of RAM.
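One thing worth checking on the slow mkfs in the quoted message: ext2/ext3 can be told about the RAID geometry so metadata writes don't all land on one disk. This is only a tuning sketch, not a diagnosis; the 64 KB chunk size is an assumption (check what the card's BIOS reports), and /dev/sdb1 is the array partition from the quoted message.

```shell
# Hedged sketch: compute mke2fs alignment values for a 4-disk RAID5.
CHUNK_KB=64          # RAID chunk (stripe unit) size -- an assumption
BLOCK_KB=4           # ext2/ext3 filesystem block size (4096 bytes)
DATA_DISKS=3         # 4-disk RAID5 = 3 data disks + 1 parity

STRIDE=$((CHUNK_KB / BLOCK_KB))         # filesystem blocks per chunk
STRIPE_WIDTH=$((STRIDE * DATA_DISKS))   # blocks per full data stripe

# -E stride spreads block/inode bitmaps across the member disks;
# newer e2fsprogs versions also accept a stripe-width value.
echo "mkfs.ext3 -b 4096 -E stride=$STRIDE /dev/sdb1"
```

If the array is still doing its background initialization, mkfs competing with that rebuild would also explain the crawl, so it may be worth letting the init finish first.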
--
Randy Schmidt
firstname.lastname@example.org
267.334.6833
___________________________________________________________________________
Philadelphia Linux Users Group -- http://www.phillylinux.org
Announcements - http://lists.phillylinux.org/mailman/listinfo/plug-announce
General Discussion -- http://lists.phillylinux.org/mailman/listinfo/plug