Keith via plug on 4 Jan 2022 13:50:54 -0800



Re: [PLUG] Moving mdadm RAID-5 to a new PC


On 1/4/22 12:36 PM, Rich Freeman wrote:
> On Tue, Jan 4, 2022 at 12:13 PM Keith C. Perry via plug
> <plug@lists.phillylinux.org> wrote:
>> When you consider that, for a NAS, Rich M's idea about an old PC works well because you can get a 4x1Gb NIC and a decent multi-port SATA or proper RAID card for not a lot of money.
> No argument on the importance of network performance, especially when
> you go to more scalable solutions like Ceph.  The Ceph docs would
> basically recommend having two 10Gbps networks in parallel (one for
> between-node traffic and one for external traffic).
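
(For anyone following along, that split is just two settings in ceph.conf; the subnets below are made-up examples:)

    [global]
    # client-facing traffic
    public_network = 192.168.1.0/24
    # replication/recovery traffic between OSD nodes
    cluster_network = 10.10.10.0/24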

> Note that a NIC that supports multiple ports at 1Gb+ is probably going
> to need multiple PCIe lanes depending on the PCIe revision, and of
> course HBAs have the same issue.  The big difference between
> server-grade hardware and consumer-grade tends to revolve around IO
> (and RAM, which I guess is another form of IO in practice).
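
(Some rough per-direction numbers for anyone sanity-checking their slots: a PCIe 2.0 lane carries about 4 Gb/s after encoding overhead and a PCIe 3.0 lane about 7.9 Gb/s, so

    4 x 1 Gb/s NIC    needs ~4 Gb/s
    PCIe 2.0 x1       ~4 Gb/s   (just barely enough)
    PCIe 2.0 x4       ~16 Gb/s
    PCIe 3.0 x4       ~31 Gb/s

which is why the common quad-gigabit cards are x4 parts.)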

> For example, the latest Zen3 consumer CPUs have 24 PCIe v4 lanes,
> while the EPYC versions of Zen3 have 128 lanes.  You can obviously
> cram a lot more HBAs/NICs in a board that can give 4-16 lanes to every
> slot and they're PCIe v4 besides.

If you are specifically building or rebuilding an older box for a NAS, you definitely don't need 128 lanes or even 24 lanes. Realistically, you don't have to use an HBA either.  The whole point is to give old hardware a new life.  A 4x1Gb NIC is going to be an x4 card, so that isn't a high requirement.  As an example, let's look at this refurb from Microcenter:

https://www.microcenter.com/product/606352/dell-optiplex-3020-sff-desktop-computer-(refurbished)

You can put a quad NIC in the x16 slot.  You only have an x1 left in this case, but you also have USB 3.1, which could be used with any number of 4-bay enclosures.  USB 3.1 (Gen 1) theoretically tops out at 5 Gb/s, but let's take 20% off of that because we're talking about consumer hardware.  That would put us at 4 Gb/s of storage throughput and 4 Gb/s of transfer throughput.  That's a decent, balanced hardware build even when the real-world numbers are off.

> Likewise if you're stacking 4 ports per node you're going to need a
> big switch, and those aren't cheap either if you want it to be
> managed.  If all your gear is next to each other then that is as far
> as the problem goes, but if your stuff is scattered around now you
> need to be running more than gigabit networking around.
Actually, you can forgo the fancy switch and just use bond mode 5 (balance-tlb), or mode 6 (balance-alb) but use 5 since 6 is more dependent on your card, instead of mode 4 (802.3ad, the actual LACP protocol).  I can attest that mode 5 works just fine.  Even though I could have, I actually didn't move my LFS cluster to mode 4 until I put in the 10Gb switch.  I also tried temporarily going back to 1 Gb/s from a 4x1 Gb/s mode 5 bond and it was a nightmare.  I would recommend mode 5 as a step on the way to getting a better switch, but it's a step you do not have to rush at all.
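
(For anyone who wants to try it, a minimal mode 5 bond with plain iproute2 looks roughly like this; the interface names are just examples and you'd want to make it permanent in whatever your distro's network config tool is:)

    # create the bond in balance-tlb (mode 5) with link monitoring
    ip link add bond0 type bond mode balance-tlb miimon 100
    # enslave the four gigabit ports (they have to be down first)
    for i in eth0 eth1 eth2 eth3; do
        ip link set "$i" down
        ip link set "$i" master bond0
    done
    # bring it up and give it an address
    ip link set bond0 up
    ip addr add 192.168.1.10/24 dev bond0
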
> Ultimately what matters is how much IO you're actually doing.  If
> you're just talking about static storage with only a few clients at
> most accessing it, then you don't need much hardware at all.  As
> Martin pointed out your biggest concern might be rebuild time if you
> stick too many drives on one host.  If you're using distributed
> storage then rebuild time can be less of an issue since the rebuild is
> happening across multiple hosts, so if they're all on the same switch
> they can all pass data in and out at 1Gbps between them, and the
> drives within a host aren't actually going to get a lot of individual
> IO.  That is part of why I can get away with Pis - I have half a dozen
> nodes so if a node fails the remaining nodes only have to rebuild
> 1/5th of a host each between them, and most have multiple drives to
> spread that IO across.
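
To put made-up but plausible numbers on that: if the dead node held 8TB, the five survivors each re-replicate about 1.6TB, and at the ~115 MB/s you actually get out of gigabit that is

    1.6 TB / 115 MB/s  ~  14,000 s  ~  4 hours

per node, all running in parallel, instead of one box grinding every one of its disks behind a single controller.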

For vertical storage systems this is always going to be the biggest issue no matter what you do.  This problem simply scales with your solution, which is why I moved away from very large RAIDs in favor of distributed file systems.  That said, I think a 4-disk RAID system is great for most home or personal NAS solutions.  A 4x4TB system is less than $200 in disks.  I don't know too many regular folks blowing through 16TB of storage yet.  If that were the case, Google Drive, Dropbox and the like would be dead or have much higher free tiers  :D
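
(And since the thread started with mdadm RAID-5: getting a 4-disk array like that going is basically a one-liner; the device names below are placeholders for whatever your disks show up as:)

    # hypothetical device names; double-check with lsblk before running
    mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
    mkfs.ext4 /dev/md0
    # record the array so it assembles at boot (conf path varies by distro)
    mdadm --detail --scan >> /etc/mdadm.conf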

--

~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~
Keith C. Perry, MS E.E.
Managing Member, DAO Technologies LLC
(O) +1.215.525.4165 x2033
(M) +1.215.432.5167
www.daotechnologies.com

___________________________________________________________________________
Philadelphia Linux Users Group         --        http://www.phillylinux.org
Announcements - http://lists.phillylinux.org/mailman/listinfo/plug-announce
General Discussion  --   http://lists.phillylinux.org/mailman/listinfo/plug