JP Vossen on 14 Nov 2008 21:56:32 -0800



Re: [PLUG] Your recommendations


> Date: Fri, 14 Nov 2008 17:04:12 -0500
> From: Matthew Rosewarne <mrosewarne@inoutbox.com>
> Subject: Re: [PLUG] Your recommendations
> 
> On Friday 14 November 2008, Ugarit Ebla wrote:
>> I have a server with four 750GB hard drives and 4 GB of RAM and it's
>> capable of RAID 0,1,10,5.  I would like to use LVM with RAID.
>>
>> What is your recommendation for partitioning?
> 
> The great thing about LVM is that you can resize the LVs at will.  Just 
> allocate the minimum you need for your various tasks and leave the rest 
> unallocated.  If you need more space in one of your LVs, just add more.

+1 for LVM coolness.

Before LVM, I used to put everything but swap into one big partition. 
That was arguably a Bad Idea: it complicated upgrades, since you 
couldn't unmount /home and work on the rest separately, and there was 
always a chance of running out of disk space and bringing the system 
down.  Personally, I found those risks preferable to the certainty that 
it was a giant pain to redo things (or symlink all of creation) if I 
guessed wrong on sizes.  These days, upgrades on Debian or Ubuntu are 
pretty seamless, and disks are so darn big you'll never run out of 
space.  (Yeah, yeah, famous last words...)

So I agree with Matthew: fake it for now, keep the allocations small, 
and adjust LVM as needed later.
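
For example, growing a volume and its filesystem later is only a couple 
of commands (a sketch, using the vg_hostname/lv_home names from my 
naming scheme below; reasonably recent kernels can grow ext3 while it 
is mounted):

	vgdisplay vg_hostname                      # how much free space is left in the VG?
	lvextend -L +10G /dev/vg_hostname/lv_home  # grow the LV by 10G
	resize2fs /dev/vg_hostname/lv_home         # grow the ext3 filesystem to match

Shrinking is possible too, but you have to shrink the filesystem first 
(unmounted), so guessing small is the safer direction.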

One related point: make sure you *do* leave some space unallocated.  And 
that means you have to do it manually, because if you use the Ubuntu 
alternate CD and pick the LVM option, it will allocate all the space 
into one giant PV and LV.  The problems with that are that it's all one 
big partition, and that without unallocated space in the VG you can't 
take snapshots, which are great for backups.  Here is the bug I filed on 
the snapshot issue: https://bugs.launchpad.net/ubuntu/+bug/240813
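
To make that concrete, a snapshot-based backup goes roughly like this 
(a sketch only; the vg_hostname/lv_root names follow my naming scheme 
below, and the paths and sizes are arbitrary):

	# Carve a snapshot out of *unallocated* VG space
	lvcreate -s -L 2G -n lv_root_snap /dev/vg_hostname/lv_root

	# Mount it read-only somewhere and back it up at leisure
	mkdir -p /mnt/snap
	mount -o ro /dev/vg_hostname/lv_root_snap /mnt/snap
	tar -czf /backup/root-$(date +%F).tar.gz -C /mnt/snap .

	# Clean up; snapshots left lying around slowly eat the reserved space
	umount /mnt/snap
	lvremove -f /dev/vg_hostname/lv_root_snap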

One other related point: give some thought to your LVM naming; you will 
thank yourself (and hopefully me) later.  Here is what I do (I don't 
always use all of these, this is just a naming example):
	Volume group    = vg_{hostname}
	Logical volumes = lv_swap_1
			lv_root
			lv_home
			lv_var
			...

LVM's layers of abstraction can be confusing, so the idea with this is 
that it's pretty clear what's what and where, even when the disk is 
moved into some other machine for whatever reason.  Maybe other folks on 
the list have better ideas.


So your partitioning question now becomes: which directories should I 
break out into separate volumes?  For your "heavily loaded with mail 
(clamscan, spamassassin, mimedefang), dns, apache, mysql and storage" 
use-case, my example above might be OK.
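
Concretely, once the volume group exists (its creation is covered in 
the RAID bit and the walkthrough below), breaking those directories out 
is just a handful of lvcreate calls.  A sketch only, using my example 
names:

	# Sizes are arbitrary and deliberately small -- grow them later
	# with lvextend as needed.
	lvcreate -L 4G   -n lv_swap_1 vg_hostname
	lvcreate -L 10G  -n lv_root   vg_hostname
	lvcreate -L 20G  -n lv_var    vg_hostname
	lvcreate -L 100G -n lv_home   vg_hostname
	# ...and leave the rest of the VG unallocated for snapshots/growth

	mkswap    /dev/vg_hostname/lv_swap_1
	mkfs.ext3 /dev/vg_hostname/lv_root
	mkfs.ext3 /dev/vg_hostname/lv_var
	mkfs.ext3 /dev/vg_hostname/lv_home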


RAID is another issue.  I used to like hardware RAID5, but disk is so 
cheap these days that I now prefer software mirroring.  It's easier to 
recover, since each half of the mirror is an unobfuscated copy (unlike 
RAID5, where data is striped across disks), and I believe it's faster. 
I am 99% sure Linux software RAID + LVM will let you create:
	MD0	MD1
	-----	-----
	750G	750G
	750G	750G
	-----	-----
	1500	1500	<-- software mirrored drives
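
Reading that as two RAID1 pairs that LVM then joins into one big volume 
group (each pair is 750G usable, so roughly 1.5T total), the commands 
would look something like this -- the device and partition names are 
assumptions, not gospel:

	# Assumes one big "Linux raid autodetect" partition per disk (sdX1
	# here just for illustration); the small /boot mirror is covered in
	# the walkthrough below.
	mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
	mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1

	# LVM then glues the two pairs together into one volume group
	pvcreate /dev/md1 /dev/md2
	vgcreate vg_hostname /dev/md1 /dev/md2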

Now, the catch is /boot, which can't be inside LVM, but which can be 
inside a software RAID1 mirror (the boot loader still sees each half as 
a normal partition).  Here are the terse but step-by-step details of a 
server I set up in VMware.  It has only 2 physical disks, and it uses 
one large lv_root instead of being broken out further as above.  But if 
the mailers don't mangle it too much and you can read it, it's a start 
(replace 'hostname' with your machine's hostname if you use my naming):

Partition Method: manual
	Partition 256M as "physical volume for RAID" on both disks, and
		flag bootable
	Use the rest of each disk for "physical volume for RAID"
	"Configure Software Raid"
		Yes, to write changes
		"Create MD devices"
		RAID1 #0, then follow prompts: 2 active devices, 0 spares, then the "boot" partitions
		"Create MD devices"
		RAID1 #1, then follow prompts: 2 active devices, 0 spares, then the other (large) partitions
	RAID1 #0, set as "ext3", /boot, label "boot"
	RAID1 #1, set as "physical volume for LVM"
	Configure the LVM
		Create volume group, name = vg_hostname
		Create logical volume
			lv_swap_1	512M
			lv_root		rest of the space minus ~2G (leave some space free for snapshots!)
	LVM VG vg_hostname, LV lv_root
		Mount as ext3 /, label = root
	LVM VG vg_hostname, LV lv_swap_1
		Use as swap area

The final partition setup should look something like this:
	LVM VG vg_hostname, LV lv_root - 7.1 GB Linux device-mapper
	      #1   7.1 GB   f ext3       /
	LVM VG vg_hostname, LV lv_swap_1 - 536.8 MB Linux device-mapper
	      #1 536.9 MB   f swap       swap
	RAID1 device #0 - 254.9 MB Software RAID Device
	      #1 254.9 MB   F ext3       /boot
	RAID1 device #1 - 8.3 GB Software RAID Device
	      #1   8.3 GB   K lvm
	SCSI3 (0,0,0) (sda) - 8.6 GB VMware, VMware Virtual...
	      #1 primary  255.0 MB B K raid
	      #2 primary    8.3 GB   K raid
	SCSI3 (0,1,0) (sdb) - 8.6 GB VMware, VMware Virtual...
	      #1 primary  255.0 MB B K raid
	      #2 primary    8.3 GB   K raid
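
Once the new system is up, a quick sanity check of all that looks 
something like:

	cat /proc/mdstat                   # both RAID1 devices should show [UU]
	pvs ; vgs ; lvs                    # the PV on md1, vg_hostname, and the LVs
	vgdisplay vg_hostname | grep Free  # confirm some space is still unallocated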

I find it a pain to compute the values (i.e., extents) when creating the 
logical volumes in the installer.  I generally resort to tedious trial 
and error, flipping between virtual consoles as needed to see error 
messages about space.  If this isn't clear, try the above steps in a VM 
or on a test box and you'll see what I mean.  I mentioned that in the 
bug too.
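
From a shell (as opposed to inside the installer), LVM will do the 
extent math for you, which is how I avoid the guessing.  A sketch, 
again using my example names:

	pvdisplay /dev/md1             # PE size, plus total and free PEs on the PV
	vgdisplay vg_hostname          # "Free  PE / Size" for the whole VG

	# Newer LVM2 versions also take percentages, so you can skip the
	# arithmetic entirely, e.g. give lv_root 80% of the free space:
	lvcreate -l 80%FREE -n lv_root vg_hostname

	# Or pass an exact extent count (free PEs minus what you want to
	# keep back for snapshots) straight to -l:
	lvcreate -l 1500 -n lv_root vg_hostname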

Good luck,
JP
----------------------------|:::======|-------------------------------
JP Vossen, CISSP            |:::======|        jp{at}jpsdomain{dot}org
My Account, My Opinions     |=========|      http://www.jpsdomain.org/
----------------------------|=========|-------------------------------
"Microsoft Tax" = the additional hardware & yearly fees for the add-on
software required to protect Windows from its own poorly designed and
implemented self, while the overhead incidentally flattens Moore's Law.
___________________________________________________________________________
Philadelphia Linux Users Group         --        http://www.phillylinux.org
Announcements - http://lists.phillylinux.org/mailman/listinfo/plug-announce
General Discussion  --   http://lists.phillylinux.org/mailman/listinfo/plug