Re: [PLUG] Your recommendations
> Date: Sat, 15 Nov 2008 13:05:00 -0500
> From: Ugarit Ebla <ugaritebla@gmail.com>
>
> Thank you all for your assistance.
>
> Please bear with me. I'm a complete novice when it comes to RAID and
> LVM.
It's tricky. I learned the little I know from messing with it in
VMware, and documenting what I did.
> Do I first make each partition a RAID then LVM or the other way
> around?
That's why I sent that step-by-step guide in my first reply. I
understand that it looks like so much gibberish, but if you boot up the
alternate installer CD, get into the partitioner, and start doing it, it
will hopefully make sense.
Failing that, could you bring the unit to the PLUG W meeting on Monday
(http://www.phillylinux.org/locations/fnis.html)? We could collectively
take a shot at it during the 7-8 misc. time before the preso. Bringing
printouts of these emails might help too.
> Do I do this on each drive?
Not quite. At a high level you do something like this:
I'm trying to *mirror* so that disks 0+1 are one side of the mirror and
2+3 are the other side. So the setup on 0 == 2 and 1 == 3.
What                             dsk0   dsk1   dsk2   dsk3
-------------------------------  -----  -----  -----  -----
physical volume for RAID         256M          256M
physical volume for RAID         (rest  rest)  (rest  rest)
"Configure Software RAID"
"Create MD devices"
Create /boot RAID outside LVM    256M          256M
"physical volume for LVM"        (rest  rest)  (rest  rest)
(The steps above are where you pair the sides of the
mirrors, 0,1 and 2,3.)
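If it helps to see what the installer is doing under the hood, the
"Create MD devices" step corresponds roughly to these mdadm commands.
The device names (sda-sdd) and partition numbers are my guesses based
on the layout above -- don't run these on a live box, the installer
does all of this for you:

```shell
# /boot mirror: the 256M partition on disk 0 paired with disk 2
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdc1

# LVM mirrors: the "rest" partitions, again pairing 0 with 2 and 1 with 3
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdc2
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdd1
```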
Configure the LVM
  Create volume group, name = vg_hostname
  Create logical volumes
    lv_root    # /       30 GB
    lv_var     # /var   100 GB
    lv_home    # /home  100 GB
    lv_tmp     # /tmp     5 GB
    lv_swap_1  # swap    10 GB
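The rough command-line equivalent of those LVM steps looks like this
(again, just for orientation -- the installer menus do all of it; md1
and md2 here are the RAID1 devices built from the "rest" partitions):

```shell
# Turn the RAID devices into LVM physical volumes, then group them
pvcreate /dev/md1 /dev/md2
vgcreate vg_hostname /dev/md1 /dev/md2

# Carve out the logical volumes per the sizes above
lvcreate -L 30G  -n lv_root   vg_hostname   # /
lvcreate -L 100G -n lv_var    vg_hostname   # /var
lvcreate -L 100G -n lv_home   vg_hostname   # /home
lvcreate -L 5G   -n lv_tmp    vg_hostname   # /tmp
lvcreate -L 10G  -n lv_swap_1 vg_hostname   # swap
```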
You had 1G for /boot, I used 256M above, and kept /boot outside of LVM.
I think 1G is way overkill, but given disk sizes these days it hardly
matters. I also think 5G might be a bit small for /tmp; 10G or 20G
might be better, just in case, since you have the space.
When you finish, your partitioner screen should look something like this
(I'm winging this):
LVM VG vg_hostname, LV lv_root - 30 GB Linux device-mapper
#1 30 GB f ext3 /
LVM VG vg_hostname, LV lv_var - 100 GB Linux device-mapper
#1 100 GB f ext3 /var
LVM VG vg_hostname, LV lv_home - 100 GB Linux device-mapper
#1 100 GB f ext3 /home
LVM VG vg_hostname, LV lv_tmp - 5 GB Linux device-mapper
#1 5 GB f ext3 /tmp
LVM VG vg_hostname, LV lv_swap_1 - 536.8 MB Linux device-mapper
#1 nnnn MB f swap swap
RAID1 device #0 - 254.9 MB Software RAID Device
#1 254.9 MB F ext3 /boot
RAID1 device #1 - nnnn GB Software RAID Device
#1 8.3 GB K lvm
SCSI3 (0,0,0) (sda) - 8.6 GB VMware, VMware Virtual...
#1 primary 255.0 MB B K raid
#2 primary nnnn GB K raid
SCSI3 (0,1,0) (sdb) - 8.6 GB VMware, VMware Virtual...
#1 primary nnn GB B K raid
SCSI3 (0,2,0) (sdc) - 8.6 GB VMware, VMware Virtual...
#1 primary 255.0 MB B K raid
#2 primary nnnn GB K raid
SCSI3 (0,3,0) (sdd) - 8.6 GB VMware, VMware Virtual...
#1 primary nnnn GB B K raid
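Once the install finishes and you reboot, a few read-only commands
will confirm it all came out as planned:

```shell
cat /proc/mdstat   # each RAID1 array should show both members, [UU]
pvs                # the md devices should appear as LVM physical volumes
vgs                # vg_hostname, with its total size
lvs                # lv_root, lv_var, lv_home, lv_tmp, lv_swap_1
df -h              # /, /var, /home, /tmp, and /boot mounted as planned
```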
It looks like Ubuntu might use GRUB2 after all, so keeping /boot out
of LVM may just be force of habit on my part. But conservative, proven
methods aren't a bad thing. :-)
Good luck,
JP
----------------------------|:::======|-------------------------------
JP Vossen, CISSP |:::======| jp{at}jpsdomain{dot}org
My Account, My Opinions |=========| http://www.jpsdomain.org/
----------------------------|=========|-------------------------------
"Microsoft Tax" = the additional hardware & yearly fees for the add-on
software required to protect Windows from its own poorly designed and
implemented self, while the overhead incidentally flattens Moore's Law.
___________________________________________________________________________
Philadelphia Linux Users Group -- http://www.phillylinux.org
Announcements - http://lists.phillylinux.org/mailman/listinfo/plug-announce
General Discussion -- http://lists.phillylinux.org/mailman/listinfo/plug