Rich Freeman via plug on 4 Jan 2022 16:00:58 -0800



Re: [PLUG] Moving mdadm RAID-5 to a new PC


On Tue, Jan 4, 2022 at 4:50 PM Keith <kperry@daotechnologies.com> wrote:
>
> If you are specifically building or rebuilding an older box for a NAS
> you definitely don't need 128 lanes or even 24 lanes. Realistically, you
> don't have to use an HBA either.

I think we're talking about two different things, which is where
most of our differences are coming from.

If all you want are four spinning disks, then IMO for most uses just
about any PC or a Pi4 with a single GbE will work.  If you're going to
get a lot of IO the network could be a bottleneck, but unless you're
doing nothing but sequential read/write it is going to be hard to get
much more than 1Gbps out of four spinning disks.
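The claim above can be sanity-checked with some back-of-the-envelope numbers. This is only a sketch with assumed per-disk figures (~150 MB/s sequential, ~2 MB/s random for a typical spinning disk); your drives will vary.

```python
# Rough check (assumed figures): when do four spinning disks
# outrun a single gigabit link?

GBE_MBPS = 1000 / 8        # 1 GbE is ~125 MB/s before protocol overhead
SEQ_DISK_MBPS = 150        # assumed sequential throughput per HDD
RANDOM_DISK_MBPS = 2       # assumed random-IO throughput per HDD

seq_total = 4 * SEQ_DISK_MBPS        # 600 MB/s aggregate sequential
random_total = 4 * RANDOM_DISK_MBPS  # 8 MB/s aggregate random

# Pure sequential IO would saturate the link many times over...
print(seq_total > GBE_MBPS)      # → True
# ...but any meaningful random-IO mix leaves the disks, not the
# network, as the bottleneck.
print(random_total < GBE_MBPS)   # → True
```

In other words, the network only becomes the limit in the best-case sequential workload; mixed workloads rarely get there.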

I was talking more about what you'd need if you wanted to scale things
up or do IOPS.  If you want to put half a dozen NVMes in a system,
you're not going to be able to do it with most consumer PC hardware.
I realize that isn't what Adam is talking about, and so he doesn't
need to read 90% of my emails in this thread.  :)

> The whole point is to give old hardware a new life.

Honestly, I'd think twice about giving most old hardware a new life
for something that is going to run 24x7.  First plug a kill-a-watt
into it and see how much power it uses.  A lot of old hardware is
pretty inefficient and replacing it with something ARM-based could
very well pay for itself in less than a year.
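To make the payback claim concrete, here is a hedged estimate. Every number below is an assumption (wattages, electricity price, replacement cost); substitute your own Kill A Watt reading and local rate.

```python
# Hedged payback estimate for replacing an old 24x7 box with an
# ARM board.  All inputs are assumptions -- measure your own.

OLD_WATTS = 120        # assumed wall draw of an old desktop
NEW_WATTS = 10         # assumed wall draw of a Pi-class board
PRICE_PER_KWH = 0.15   # assumed electricity price, USD
NEW_HW_COST = 120.0    # assumed cost of board + PSU + storage adapter

hours_per_year = 24 * 365
saved_kwh = (OLD_WATTS - NEW_WATTS) * hours_per_year / 1000
saved_per_year = saved_kwh * PRICE_PER_KWH   # ~$144/yr with these inputs
payback_years = NEW_HW_COST / saved_per_year

print(round(saved_per_year, 2))
print(round(payback_years, 2))   # under a year with these assumptions
```

With these particular inputs the hardware pays for itself in roughly ten months, which is where the "less than a year" figure comes from; a less power-hungry old box stretches that out.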

> > Likewise if you're stacking 4 ports per node you're going to need a
> > big switch, and those aren't cheap either if you want it to be
> > managed.
>
> Actually, you can forgo the fancy switch...

Sure, hence the reason I said, "if."  Really, though, large
gigabit switches still aren't that cheap, and if you want to use up
four ports per host and have half a dozen hosts in a distributed setup
then you're going to need a lot of switch ports.  You might still want
a nicer switch if you need to connect it to anything else, since most
cheap switches lack both a 10Gbps uplink and the ability to bond ports
for an uplink.

Again, if you just have one host with 4 disks in it, then you
obviously don't need that many ports.  We're talking about two
different things.

>
> For vertical storage systems this is always going to be biggest issue no
> matter what you do.  This problem simply scales with your solution which
> is why I moved away from very large RAIDs in favor of distributed file
> systems.

The main argument I see for distributed filesystems for
low-performance storage is that you can use more IO-bound or
low-power hosts that can only handle a couple of drives each without
jumping through hoops.  If you want to plug 24 hard drives into a
single consumer motherboard that is going to be a challenge.  Plugging
24 hard drives into half a dozen Pi4s (or PCs) is no big deal.  Plus
you get full redundancy with consumer-grade hardware - not just for
disk failures but the failure of anything in a single host.



-- 
Rich
___________________________________________________________________________
Philadelphia Linux Users Group         --        http://www.phillylinux.org
Announcements - http://lists.phillylinux.org/mailman/listinfo/plug-announce
General Discussion  --   http://lists.phillylinux.org/mailman/listinfo/plug