Keith C. Perry on 22 Aug 2016 09:13:52 -0700



Re: [PLUG] RAID6 or RAID5+HS?


Just to chime in a bit.  I'm sort of agnostic on the RAID 5 or 6 thing.  If you can do RAID 6, do it so a single failure doesn't leave you exposed through a heavy rebuild- if you have a disk spinning as a hot spare anyway, you might as well put it to use.
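
For concreteness, here's what the two layouts in the subject line look
like with mdadm (just a sketch- the device names and six-disk count are
made up, adjust for your chassis):

    # RAID 6 across six disks- survives any two drive failures:
    mdadm --create /dev/md0 --level=6 --raid-devices=6 /dev/sd[b-g]

    # RAID 5 across five disks plus a hot spare- survives one failure,
    # then immediately starts rebuilding onto the spare:
    mdadm --create /dev/md0 --level=5 --raid-devices=5 --spare-devices=1 /dev/sd[b-g]

Same six disks either way; RAID 6 just keeps the second disk's worth of
redundancy live instead of sitting idle.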

I personally prefer degraded RAID 0, RAID 10, or RAID 0 with DRBD, but obviously those require more resources (space, bays, bare metal, etc.).  I do RAID 5 on my theater LVs for the same reason you do- at least until I upgrade to larger disks.

ZFS has been mentioned, but keep in mind that it really only shines if you have ECC memory- sounds like you probably do if it's a Dell PowerEdge.  Might be worth a look if you want to try something new(er).

Regardless of disk organization, remember to back up anything critical- not a requirement for your use case, but I think it's always worth saying  :D

~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ 
Keith C. Perry, MS E.E. 
Owner, DAO Technologies LLC 
(O) +1.215.525.4165 x2033 
(M) +1.215.432.5167 
www.daotechnologies.com

----- Original Message -----
From: "JP Vossen" <jp@jpsdomain.org>
To: "Philadelphia Linux User's Group Discussion List" <plug@lists.phillylinux.org>
Sent: Monday, August 22, 2016 11:40:59 AM
Subject: Re: [PLUG] RAID6 or RAID5+HS?

Thanks for all the thoughts so far!

On 08/21/2016 08:26 PM, Rich Mingin (PLUG) wrote:
 > It's mainly just the two different XOR stripes being computed
 > independently. It's double the overhead of RAID5, since the XORs are
 > completely separated, and should not make use of caches, but the
 > overhead of the XOR operation is a very, very small part of the
 > operational cost anymore.

That makes sense.  I was getting the "write overhead" idea from various 
places on the web that I didn't record, including:
https://en.wikipedia.org/wiki/RAID
	"Double-protection parity-based schemes, such as RAID 6, attempt to 
address this issue by providing redundancy that allows double-drive 
failures; as a downside, such schemes suffer from elevated write 
penalty—the number of times the storage medium must be accessed during a 
single write operation."
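
Working through the usual read-modify-write accounting for one small 
random write (rule-of-thumb numbers, not measurements):

	RAID 5: read old data + read old parity + write data + write parity = 4 I/Os
	RAID 6: the above, plus a read and a write of the second parity     = 6 I/Os

So the "elevated write penalty" is 6 disk ops instead of 4, which 
squares with Rich's point that the XOR math itself is cheap.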


On 08/21/2016 08:48 PM, Lee H. Marzke wrote:
> When writing large sequential I/O from Myth,  that is likely to interfere with the VM I/O
> as every IO gets 'blended together' into one high bandwidth random I/O stream, which
> most storage has difficulty with.   So try out performance of Myth + VM's together
> before committing to an architecture.

Yeah, that's what I'm wondering about.
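
One thing I may do is fake that mixed workload with fio before 
committing- something like this (an untested sketch; file paths, sizes, 
and runtime are placeholders):

    # Myth-like big sequential writes plus VM-like small random I/O,
    # run at the same time against the array under test:
    fio --name=myth-seq --filename=/data/seq.dat --size=4G --rw=write \
        --bs=1M --runtime=60 --time_based \
        --name=vm-rand --filename=/data/rand.dat --size=1G --rw=randrw \
        --bs=4k --runtime=60 --time_based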


> The latest FOSScon stressed the importance of putting your important data on ZFS because
> of the reliability of real checksums.    ZFS also coalesces write requests in RAM before
> writing them to disks for more sequential writes (the SSD SLOG is just for
> power-failure protection).

It's XFS on spinning rust right now, no SSDs.  I like the idea of ZFS in 
theory, but I know nothing about how to set it up and manage it and I 
don't have time to learn it right now.
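
For future reference, the basic setup does look approachable- something 
like this, as far as I can tell (completely untested by me; the pool 
name and disk names are invented):

    # raidz2 is ZFS's double-parity analogue of RAID 6:
    zpool create tank raidz2 sdb sdc sdd sde sdf sdg
    zfs create tank/vms
    zpool status tank

But "looks approachable" and "managed well in production" are different 
things, hence not right now.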

BTRFS is a "no way in hell" for any number of reasons.  Hell, I still 
use ext3 in places, because it just works.


> If you can dedicate hardware to storage,  then FreeNAS is a good low-cost
> solution.  So your main Linux server would only need a boot disk.  Ubuntu with
> ZFS won't easily run from a USB drive, so you lose more storage.

I've used and liked FreeNAS before, in the lab at work, but in this 
case breaking out storage won't fly.  The big storage is in the same box 
as the big RAM and CPU...it's all in the R710.  I could maybe run 
multiple R710s, but that completely defeats the purpose of consolidating 
down to one box for electrical and heat reasons. :-)


> There are still Windows requirements with vSphere 6, as the browser needs
> Flash, and that is broken/old everywhere on Linux.  The next
> vSphere release, however, may have much less dependence on Windows.  ESXi by itself
> can't even do snapshots; the entry-level vSphere / vCenter license is
> only $500 (VMware Essentials).

The web GUI I am using is on the ESXi 6.0U2 host itself, at 
https://<ipa>/ui/, and I think it's HTML5.  It has the snapshot actions.  
And I swear I was doing snapshots from the web GUI about a year ago, but 
I was only a user of that system and have zero knowledge of the guts.  
So far in my very limited testing I have not been able to create local 
users from the GUI, but a combination of local SSH and BusyBox to create 
the user, then using Workstation to assign it a role, has worked.  I'm 
pretty sure that could all be done locally using the CLI tool, but I 
haven't tried yet.
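
For the record, I think the local CLI version would be something like 
this (from memory and docs- NOT tested by me yet, and the username is 
just an example; check "esxcli system account" help on the host):

    # Create a local user, then grant it a role, all on the ESXi host:
    esxcli system account add -i jdoe -p 'S3cret!pass' -c 'S3cret!pass'
    esxcli system permission set -i jdoe -r Admin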

(Update) I've been giving this some thought and I may have a fundamental 
gap.  I think that vSphere is the general management thing, and that it 
is a fat client on Windows.  I *know* there is now a web GUI component 
and they are moving to and pushing that.  But I was thinking that was 
https://<ipa>/ui/.  Am I missing that vSphere is actually client/server, 
and I only ever see the fat Windows client part, but there is also a 
Windows server part that, in part, hosts a better web GUI?  I seem to 
recall that some Windows was required for vMotion and other neat tricks 
(though that makes my head hurt).


 > 100+ VMs on Workstation, that has got to be a record.

Only a few run at a time, though I have probably been running 8-10 at 
once.  The server has 32 GB of RAM and at least 8 "cores" (I think 2 
physical CPUs, but multicore).  But I also have a lot of snapshots for some VMs, 
so arguably the number goes up.  I wouldn't be surprised if little Rich 
has me beat though.


Thanks again,
JP
-- 
-------------------------------------------------------------------
JP Vossen, CISSP | http://www.jpsdomain.org/ | http://bashcookbook.com/
___________________________________________________________________________
Philadelphia Linux Users Group         --        http://www.phillylinux.org
Announcements - http://lists.phillylinux.org/mailman/listinfo/plug-announce
General Discussion  --   http://lists.phillylinux.org/mailman/listinfo/plug