Matthew Rosewarne on 26 May 2007 20:56:37 -0000



Re: [PLUG] Linux (Debian) and raid


On Saturday 26 May 2007 13:03, Doug Crompton wrote:
> Using RAID 1, let's assume SW raid, and having the following partitions -
>
> /boot, /root, /swap, /usr, /home, /lib, and maybe /opt (does Debian use
> /opt?) how would one setup raid?
>
> Does/should swap get included? Do all other partitions get included?
> Doesn't the system have to boot outside of raid before it recognizes a
> raid partition?

To put this thread to rest, let's walk through the process.  I use RAID 1 with 
LVM on a number of servers, and it's quite easy to set up.  It's really not as 
complex as you make it out to be.

*I'll assume here that your drives are identical, not that it matters much.

So, imagine that you're in the Debian installer; when asked, choose to 
partition manually.  You're now looking at the partitioner...

		1. Simple setup
If you only plan on having one big partition for "/" (similar to what you were 
doing on Windows), there's no need for LVM and the rest.  That said, LVM is 
wonderful; I never set up machines without it anymore.

On each drive, make one swap partition.  Later on, these should be set to the 
same priority in "/etc/fstab" so the kernel will interleave them (somewhat 
similar to a RAID 0 setup); a sketch of those fstab lines follows below.  The 
rest of each disk should be another partition to hold the RAID 1; make sure 
these partitions are set to be bootable.
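
Here's a rough sketch of what those swap lines in "/etc/fstab" might look 
like (the device names /dev/sda1 and /dev/sdb1 are just examples; use 
whatever your swap partitions actually are):

	/dev/sda1  none  swap  sw,pri=1  0  0
	/dev/sdb1  none  swap  sw,pri=1  0  0

Giving both entries the same "pri=" value tells the kernel to stripe swap 
across the two partitions.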

Now choose to configure software RAID, and make a new RAID 1 device with the 
two RAID partitions.  The software RAID device will now show up in the 
partitioner; format it with your filesystem of choice and use it as "/".
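
The installer handles all of this from its menus, but for the curious, what 
it does behind the scenes is roughly the following (device names are again 
just examples):

	mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
	cat /proc/mdstat        # md0 should show up as an active raid1
	mkfs.ext3 /dev/md0      # or your filesystem of choice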

		2. LVM setup
If you want to have multiple partitions and swap all mirrored on RAID, you 
need to use LVM.

	A. Boot

On each drive, create a primary partition.  These only need to be big enough 
to hold the files needed to boot (bootloader, kernel, initramfs), so 25-35 MB 
should be fine.  Use these partitions for software RAID and set them to be 
bootable.

Now choose to configure software RAID, and make a new RAID 1 device from these 
two small partitions.  A software RAID device will now show up in the 
partitioner.  Format this RAID device with ext3 and use it as "/boot".

Why make this a RAID?  Well, it lets the system write the boot files to both 
disks, while the bootloader (which doesn't understand RAID) just reads one of 
the member partitions directly, which is fine since it never writes anything.
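
One habit of mine (the installer won't necessarily do this for you) is to put 
the bootloader on both drives, so the machine can still boot if the first disk 
dies.  With GRUB that's roughly:

	grub-install /dev/sda
	grub-install /dev/sdb

(Once more, /dev/sda and /dev/sdb are just example device names.)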

	B. Swap

You have two options for your swap, you can either lump them in with the rest 
of your partitions in the LVM, or you can keep them separate.  The only 
reason to keep them separate is if you want them *not* to be mirrored, so you 
have twice the amount of swap space.  If you do keep them separate, you 
should have the kernel interleave them (somewhat similar to a RAID 0 setup) 
by setting them to the same priority in "/etc/fstab" later on.

If you're keeping them separate, add a swap partition now on each drive.

*LVM doesn't make any real difference in the speed of swap; the general rule 
is that if your machine regularly needs to swap heavily for its usual tasks, 
you should buy more memory.

	C. LVM

The rest of each disk should be one big partition, used for software RAID.  
Now choose to configure software RAID and make a new RAID 1 device with these 
partitions.  The software RAID device will now show up in the partitioner; 
use it for LVM.

Now choose to configure LVM.  Make a new volume group with that software RAID 
device; I usually call mine vg0, vg1, and so on, but you can call it whatever 
you like.  Now add logical volumes for "/", "/home", "/var", and whatever 
else (and swap if you decided not to keep it separate).  On a desktop 
machine, I usually just make a logical volume for "/" that is somewhere 
around 10GB (depending on how much I want to install), and a logical volume 
for "/home" (so if I overstuff my home directory, the rest of the system 
won't run out of space).  Often I will leave 1 or 2 GB of free space on the 
volume group, in case I find I need to increase the size of one of the 
logical volumes later on.
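
For the curious, the installer's LVM menus boil down to roughly the following 
commands (the device name /dev/md1, the volume group name vg0, and the sizes 
are all just examples):

	pvcreate /dev/md1              # turn the big RAID device into an LVM physical volume
	vgcreate vg0 /dev/md1          # make the volume group
	lvcreate -L 10G -n root vg0    # logical volume for "/"
	lvcreate -L 2G -n swap vg0     # swap, if you kept it inside the LVM
	lvcreate -L 60G -n home vg0    # "/home", leaving a GB or two of the group unallocated
	mkfs.ext3 /dev/vg0/root
	mkfs.ext3 /dev/vg0/home
	mkswap /dev/vg0/swap

That leftover space in the volume group is what lets you grow a logical volume 
later with lvextend followed by resize2fs.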

> What is confusing to me is that in my Intel MB driven raid the whole drive
> is part of the array. As far as the BIOS is concerned it is looking at one
> drive. In Linux, using SW raid, would not one drive have to boot linux and
> then the raid array is established? If this were the case then it is not
> truly raid because if that one drive failed it would not boot.

Well, the "RAID" on your motherboard is fake, as others have pointed out.  All 
the work is done in software by the Windows driver, just like the nefarious 
winmodems of old.  You should disable the "RAID" feature in the BIOS (it's 
worthless) and use Linux's software RAID instead, which is probably faster 
and certainly more disaster-proof.
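
If you do go with Linux's software RAID, two commands worth knowing for 
keeping an eye on the array (assuming it ended up as /dev/md0) are:

	cat /proc/mdstat
	mdadm --detail /dev/md0

Either one will tell you whether a disk has dropped out of the mirror.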

%!PS: RAID is *NOT* a substitute for backup!!!  RAID is simply for 
fault-tolerance, so that your machine can stay up in the event of hard disk 
failure.  If you're relying on RAID to save your data, you're going to get a 
very unpleasant surprise.

%!PPS: Please do not go on long ranting tirades on how Linux is not ready 
for "the masses" if your reasoning is something like RAID, which "the masses" 
would never conceivably have to deal with.  Advanced matters such as RAID are 
only ever handled by expert users or OEMs, never by your grandma.

