Lee H. Marzke on 22 Aug 2016 09:12:21 -0700



Re: [PLUG] RAID6 or RAID5+HS?


See below.


----- Original Message -----
> From: "Vossen JP" <jp@jpsdomain.org>
> To: "Philadelphia Linux User's Group Discussion List" <plug@lists.phillylinux.org>
> Sent: Monday, August 22, 2016 11:40:59 AM
> Subject: Re: [PLUG] RAID6 or RAID5+HS?

> Thanks for all the thoughts so far!
> 
> On 08/21/2016 08:26 PM, Rich Mingin (PLUG) wrote:
> > It's mainly just the two different XOR stripes being computed
> > independently.  It's double the parity overhead of RAID5, since the
> > two computations are completely separate and can't share caches, but
> > the overhead of the XOR operation is a very, very small part of the
> > operational cost these days.
> 
> That makes sense.  I was getting the "write overhead" claim from various
> places on the web that I didn't record, including:
> https://en.wikipedia.org/wiki/RAID
>	"Double-protection parity-based schemes, such as RAID 6, attempt to
> address this issue by providing redundancy that allows double-drive
> failures; as a downside, such schemes suffer from elevated write
> penalty—the number of times the storage medium must be accessed during a
> single write operation."
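
To make the write-penalty arithmetic concrete, here's a toy Python
sketch (not real RAID code, and note RAID6's second stripe, Q, is
actually Galois-field math, not a plain second XOR):

    from functools import reduce

    def xor_blocks(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    # Three "disk" blocks and their P (XOR) parity stripe.
    data = [b"\x01\x02", b"\x0f\x00", b"\xa5\x5a"]
    parity = reduce(xor_blocks, data)

    # Lose "disk" 1 and rebuild it from the survivors plus parity.
    assert reduce(xor_blocks, [data[0], data[2], parity]) == data[1]

    # The classic small-write accounting behind that Wikipedia quote:
    # a read-modify-write touches the data block plus every parity
    # block, once for the read and once for the write.
    def small_write_ios(parity_stripes):
        return 2 + 2 * parity_stripes

    print("RAID5:", small_write_ios(1), "I/Os per small write")  # 4
    print("RAID6:", small_write_ios(2), "I/Os per small write")  # 6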
> 
> 
> On 08/21/2016 08:48 PM, Lee H. Marzke wrote:
>> When writing large sequential I/O from Myth, that is likely to interfere
>> with the VM I/O, as every I/O gets 'blended together' into one
>> high-bandwidth random I/O stream, which most storage has difficulty with.
>> So try out the performance of Myth + VMs together before committing to
>> an architecture.
> 
> Yeah, that's what I'm wondering about.
> 
> 
>> The latest FOSScon stressed the importance of putting your important data
>> on ZFS because of the reliability of real checksums.  ZFS also coalesces
>> write requests in RAM before writing them to disk for more sequential
>> writes (the SSD SLOG is just for power-failure protection).
> 
> It's XFS on spinning rust right now, no SSDs.  I like the idea of ZFS in
> theory, but I know nothing about how to set it up and manage it and I
> don't have time to learn it right now.
> 
> BTRFS is a "no way in hell" for any number of reasons.  Hell, I still
> use ext3 in places, because it just works.
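
On the checksum point, the idea is easy to show in miniature.  This toy
Python sketch is not ZFS code, just the principle: store a checksum with
every block and verify it on every read, so bit rot is detected instead
of being silently returned:

    import hashlib

    store = {}  # block_id -> (data, checksum)

    def write_block(block_id, data):
        store[block_id] = (data, hashlib.sha256(data).hexdigest())

    def read_block(block_id):
        data, cksum = store[block_id]
        if hashlib.sha256(data).hexdigest() != cksum:
            raise IOError("checksum mismatch on block %d (bit rot?)" % block_id)
        return data

    write_block(0, b"important data")
    # Simulate silent on-disk corruption of the stored block.
    store[0] = (b"important dat\x00", store[0][1])
    read_block(0)  # raises IOError instead of returning bad data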
> 
> 
>> If you can dedicate hardware to storage, then FreeNAS is a good low-cost
>> solution, so your main Linux server would only need a boot disk.  Ubuntu
>> with ZFS won't easily run from a USB drive, so you lose more storage.
> 
> I've used and liked FreeNAS before, in the lab, at work, but in this
> case breaking out storage won't fly.  The big storage is in the same box
> as the big RAM and CPU...it's all in the R710.  I could maybe run
> multiple R710s but that completely defeats the purpose of consolidating
> down to 1 box for electrical and heat reasons. :-)
> 

I wish there were an easy way to do water cooling.  I now need a hot/cold
aisle to vent my half-rack exhaust into my storage space to make it work
with the present basement cooling.

> 
>> There are still Windows requirements with vSphere 6, as the browser needs
>> Flash, and that is broken/old everywhere on Linux.  The next vSphere
>> release, however, may have much less dependence on Windows.  ESXi by
>> itself can't even do snapshots; the entry-level vSphere/vCenter license
>> is only $500 (VMware Essentials).
> 
> The web GUI I am using is on the ESXi 6.0U2 host itself, at https://<ipa>/ui/,
> and I think it's HTML5.  It has the snapshot actions.  And I swear I was
> doing snapshots from the web GUI about a year ago, but I was only a user
> of that system and I have zero knowledge of the guts.  So far, in my very
> limited testing, I have not been able to create local users, but a
> combination of local SSH and BusyBox, then using Workstation to assign
> the user to a role, has worked.  I'm pretty sure that could be done
> locally using the CLI tool, but I haven't tried yet.
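
For what it's worth, the CLI route should work: ESXi 6.x has "esxcli
system account add" for local users.  A rough sketch, driving it over
SSH with the paramiko library (the host name and credentials here are
made up, and SSH must be enabled on the host):

    import paramiko

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect("esxi.example.com", username="root", password="secret")

    # Create a local user; assigning a role is a separate step
    # (vim-cmd or the API).
    cmd = ("esxcli system account add --id jp "
           "--description 'local test user' "
           "--password 'S3cret!' --password-confirmation 'S3cret!'")
    stdin, stdout, stderr = client.exec_command(cmd)
    print(stdout.read().decode(), stderr.read().decode())
    client.close()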

I've heard there is a fling being added (not for production use)
that allows some control via HTML5; I haven't used it yet.

> 
> (Update) I've been giving this some thought and I may have a fundamental
> gap.  I think that vSphere is the general management thing, and that it
> is a fat client on Windows.  I *know* there is now a web GUI component
> and they are moving to and pushing that.  But I was thinking that was
> https://<ipa>/ui/.  Am I missing that vSphere is actually client/server,
> and I only ever see the fat Windows client part, but there is also a
> Windows server part that, in part, hosts a better web GUI?  I seem to
> recall that some Windows was required for vMotion and other neat tricks
> (though that makes my head hurt).
> 

For licensing and other reasons, the ESXi unit by itself is very limited,
and even the APIs are read-only without at least an 'Essentials' license.
I think deployment from templates is also not an option without vCenter.
Anything useful like HA/vMotion unfortunately requires Essentials Plus.
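
You can see the read-only access from the API side.  A minimal pyVmomi
sketch (hypothetical host and credentials): enumerating VMs like this
works against a free ESXi host, but write operations through the same
API fail without a paid license:

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()  # lab host, self-signed cert
    si = SmartConnect(host="esxi.example.com", user="root",
                      pwd="secret", sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.VirtualMachine], True)
        for vm in view.view:
            print(vm.name, vm.runtime.powerState)
        view.Destroy()
    finally:
        Disconnect(si)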

The preferred vCenter these days is a SuSE Linux appliance, with both vCenter
and Platform Services (certificate services, single sign-on, inventory) inside
one VM, or split into two or more VMs for larger sites.  You NEED Windows to
install the SuSE appliance, mainly to check input parameters before the OVF
is uploaded.  After that, the web client has all features but still requires
a browser on Windows with Flash.  The all-HTML5 client is still expected soon,
and I can't wait.


HA and vMotion require vCenter and Essentials Plus (although HA can continue
to work if vCenter goes down later).  The vMotion requirement for vCenter is
more a licensing thing.

Essentials is about $500 for 6 CPUs and vCenter (limited to 6 CPUs, e.g.
3 two-socket hosts).  Essentials Plus is about $5000 for 6 CPUs and vCenter
(limited to 6 CPUs); it also gives you the VDP backup solution.

Lee

> 
>> 100+ Vm's on Workstation,  that has got to be a record.
> 
> Only a few run at a time, though I have probably been running 8-10 at
> once.  The server has 32G RAM and at least 8 "cores" (I think 2 physical
> CPUs, but multicore.)  But I also have a lot of snapshots for some VMs,
> so arguably the number goes up.  I wouldn't be surprised if little Rich
> has me beat though.
> 
> 
> Thanks again,
> JP
> --  -------------------------------------------------------------------
> JP Vossen, CISSP | http://www.jpsdomain.org/ | http://bashcookbook.com/

-- 
"Between subtle shading and the absence of light lies the nuance of iqlusion..." - Kryptos 

Lee Marzke, lee@marzke.net http://marzke.net/lee/ 
IT Consultant, VMware, VCenter, SAN storage, infrastructure, SW CM 
+1 800-393-5217 office +1 484-348-2230 fax 
+1 610-564-4932 cell sip://8003935217@4aero.com VOIP
___________________________________________________________________________
Philadelphia Linux Users Group         --        http://www.phillylinux.org
Announcements - http://lists.phillylinux.org/mailman/listinfo/plug-announce
General Discussion  --   http://lists.phillylinux.org/mailman/listinfo/plug