|Rich Mingin (PLUG) on 21 Aug 2016 21:19:40 -0700|
[Date Prev] [Date Next] [Thread Prev] [Thread Next] [Date Index] [Thread Index]
|Re: [PLUG] RAID6 or RAID5+HS?|
When Myth writes large sequential streams, they are likely to interfere with the VM I/O:
at the array, all the streams get 'blended together' into one high-bandwidth random
I/O workload, which most storage handles poorly. So test the performance of Myth and
the VMs running together before committing to an architecture.
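As a rough illustration of why blended streams hurt, here is a toy throughput model. Every number in it is an assumption (150 MB/s sequential rate, 10 ms average seek, 1 MB written per stream switch), not a measurement of any real array:

```python
# Toy model: several sequential writers interleaved at a spinning-disk
# array behave like random I/O, because the heads must seek between the
# streams' write positions. All constants below are assumed, not measured.
def effective_mb_s(streams, seq_mb_s=150.0, seek_s=0.010, chunk_mb=1.0):
    # Fraction of chunks that land after a seek to a different stream.
    seek_fraction = (streams - 1) / streams
    t_per_chunk = chunk_mb / seq_mb_s + seek_s * seek_fraction
    return chunk_mb / t_per_chunk

for n in (1, 2, 4, 8):
    print(f"{n} streams: ~{effective_mb_s(n):.0f} MB/s aggregate")
```

With these made-up numbers, one stream gets the full 150 MB/s while eight interleaved streams drop to roughly 65 MB/s aggregate, which is why measuring the combined workload matters.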
The latest FOSScon stressed the importance of putting your important data on ZFS because
of the reliability of real end-to-end checksums. ZFS also coalesces write requests in RAM
before writing them to disk, which makes the writes more sequential (the SSD SLOG exists
only to protect in-flight synchronous writes against power failure).
If you can dedicate hardware to storage, then FreeNAS is a good low-cost
solution, so your main Linux server would only need a boot disk. Ubuntu with
ZFS won't easily run from a USB drive, so you lose more storage to booting.
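For reference, a double-parity pool with a separate SSD log device looks something like this. This is only a sketch: the pool name and device names are hypothetical placeholders, and on FreeNAS you would build the pool through the GUI rather than the command line.

```shell
# Six-disk raidz2 pool (double parity, analogous to RAID6) with an SSD
# log device for synchronous writes. Device names are placeholders.
zpool create tank raidz2 da0 da1 da2 da3 da4 da5 log ada0

# Confirm the layout and that checksumming is on (it is by default).
zpool status tank
zfs get checksum tank
```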
There are still Windows requirements with vSphere 6, as the web client needs
Flash, and Flash is broken/old everywhere on Linux. The next
vSphere release, however, may have much less dependence on Windows. ESXi by itself
can't even do snapshots, but the entry-level vSphere/vCenter license
(VMware Essentials) is only about $500.
100+ VMs on Workstation? That has got to be a record.
----- Original Message -----
> From: "Vossen JP" <email@example.com>
> To: "Philadelphia Linux User's Group Discussion List" <firstname.lastname@example.org>
> Sent: Sunday, August 21, 2016 4:01:37 PM
> Subject: [PLUG] RAID6 or RAID5+HS?
> Semi OT but my argument is that the OS going on the hardware is
> Debian... :-) Note, Debian Jessie requires
> "firmware-bnx2_0.43_all.deb" for the NICs on the R710. I haven't tested
> the R720 yet.
> I want to rebuild my VM server, and the hardware I'll probably use is a
> Dell PE R710 with 6x3T drives and a PERC H700 (2.02-0025). I can do
> RAID6 or RAID5+hot-spare, and either way I should get about 11T out of
> it. All are 7200RPM SATA disks.
> As I understand it, either RAID6 or RAID5+HS can handle losing 2 disks, but:
> * RAID6 has a write penalty for the extra parity block
> * Hot-spare has a gap while the array rebuilds onto the hot-spare
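The capacity math comes out the same either way; the trade-off is entirely in the write penalty and the rebuild window. A quick sanity check (the per-write I/O penalty factors of 4 for RAID5 and 6 for RAID6 are the textbook figures for small random writes, not measurements of the H700):

```python
# Usable space for 6 x 3 TB drives under each layout.
def usable_tb(drives, size_tb, parity_disks, hot_spares=0):
    return (drives - hot_spares - parity_disks) * size_tb

raid6    = usable_tb(6, 3, parity_disks=2)                # all 6 in the array
raid5_hs = usable_tb(6, 3, parity_disks=1, hot_spares=1)  # 5 in array + 1 spare
print(raid6, raid5_hs)  # both 12 TB raw, i.e. ~10.9 TiB -- the "about 11T"

# Textbook small-write penalty: back-end I/Os per logical random write.
write_penalty = {"RAID5": 4, "RAID6": 6}
```

So RAID6 costs 50% more back-end I/Os per small write than RAID5, but never has the unprotected window that a hot-spare rebuild does.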
> The use-case is my main VMware server (Debian Jessie + LXDE + VMware
> Workstation 10.x). The critical VM is my main "services" server
> (DNS, DHCP, file&print, etc., also Jessie) and the next most important one
> will be my MythTV backend (Mythbuntu 14.04), once I virtualize it.
> Everything else is just test VMs and whatever. I do not back up the
> contents of the MythTV server because it's too big, but everything else
> is backed up in a few different ways (BackupPC, BoxBackup, rsync). I've
> got 1.5T in VMs and 3T in MythTV, so 11T is lots of room.
> I don't do a lot of disk-intensive things. Probably MythTV recording 2
> shows at once while playing back a third is the worst case, but I really don't
> have a good sense of the relative load and capacity of all the parts.
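For scale, that worst case is quite modest. Assuming each stream is an ATSC HD transport stream at the 19.4 Mbit/s broadcast maximum (real OTA recordings are often lower, and cable/IP sources differ):

```python
# Two recordings plus one playback, each an ATSC HD transport stream.
# 19.4 Mbit/s is the ATSC broadcast maximum; real streams are often lower.
atsc_mbit_s = 19.4
streams = 3                              # 2 recordings + 1 playback
total_mb_s = streams * atsc_mbit_s / 8   # megabits -> megabytes
print(f"~{total_mb_s:.1f} MB/s")         # ~7.3 MB/s total
```

A few MB/s of mostly-sequential traffic is easy for any 6-disk array; the concern is how it mixes with the VM random I/O, per the point above about blended streams.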
> This will be most of my eggs in one basket, so I'd really like tolerance
> for 2 disks to fail close together. So...RAID6 or RAID5+HS? Any other
> random thoughts?
>  I might also be able to use an R720 with 8x4T drives, but that has 1
> bad drive already and I may end up using that for a $WORK thing. It was
> drawing 150-180 watts during a RAID init, but I'd like to re-test with
> an OS with CPU scaling installed and running.
>  I know that in theory I could use ESXi, but:
> 1) I have 100+ VMs with many snapshots, and I don't know of any way to
> move those VMs from Workstation into ESXi without:
> 1.1) doing it manually and
> 1.2) losing snapshots (show stopper)
> 2) The last time I tried using ESXi (5.x IIRC) I found it impossible to
> do anything useful in it
> 3) I do not have, nor will I have, any Windows involved in any VM
> control or management, or anything important
> Those last two may have changed since newer ESXi has the web GUI and not
> the fat Windows client. But the show-stopping snapshot issue remains.
>  MythTV is currently on a PE2950 with 6x1T drives in RAID5, but the
> unit is drawing about 300 watts. My current R710 with 6x1T RAID5 is
> drawing about 140 watts, so I really wish I'd bothered to put the
> kill-a-watt on them sooner. All have dual power supplies so the test is
> easy (and yes, I unplugged the non-kill-a-watt supply for the test).
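That 160-watt difference adds up over a year of 24/7 operation. A quick estimate (the $0.13/kWh rate is an assumption; substitute your own tariff):

```python
# Annual cost of the extra draw: PE2950 (~300 W) vs R710 (~140 W).
watts_saved = 300 - 140
hours_per_year = 24 * 365
kwh_per_year = watts_saved * hours_per_year / 1000
rate_usd_per_kwh = 0.13   # assumed rate; use your local electricity price
print(f"{kwh_per_year:.0f} kWh/yr, about "
      f"${kwh_per_year * rate_usd_per_kwh:.0f}/yr")
```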
> -- ------------------------------
> JP Vossen, CISSP | http://www.jpsdomain.org/ | http://bashcookbook.com/
> Philadelphia Linux Users Group -- http://www.phillylinux.org
> Announcements - http://lists.phillylinux.org/
> General Discussion -- http://lists.phillylinux.org/
"Between subtle shading and the absence of light lies the nuance of iqlusion..." - Kryptos
Lee Marzke, email@example.com http://marzke.net/lee/
IT Consultant, VMware, VCenter, SAN storage, infrastructure, SW CM
+1 800-393-5217 office +1 484-348-2230 fax
+1 610-564-4932 cell sip://firstname.lastname@example.org VOIP