Rich Freeman via plug on 3 Jan 2022 11:33:00 -0800

Re: [PLUG] Moving mdadm RAID-5 to a new PC

On Mon, Jan 3, 2022 at 1:48 PM Rich Mingin (PLUG) <> wrote:
> Mostly correct. A few quibbles. OP is on an RPi2. It's not gigabit at all, and the 10/100 is on the USB controller's bus.

Sure, but I wasn't talking about his current setup, but his question
about "wondering if it'd make sense to migrate it to a faster Pi."
Obviously the things I said about the Pi4 only apply to the Pi4, and
as I said, "I would share your reservations with any previous Pi [...]"

> Other quibble: On RPi4, there's only one USB controller, a Via VL805. It handles the USB 2.0 and USB 3.0 ports.

Oh, interesting.  I thought that the two USB3 ports each had their own
host for some reason.  I can't find any benchmarks on USB3 performance
with both ports simultaneously in use, so I'd assume that it really
does only provide a single USB3 host's worth of bandwidth.  That would
be enough to handle two USB3 hard drives without much compromise, but
beyond that it could start to be an issue.
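To put rough numbers on that, here's a back-of-the-envelope sketch. The drive speed and overhead figures are assumptions for illustration, not measurements of the VL805:

```python
# Can one USB 3.0 host (5 Gbit/s) serve two hard drives at full speed?
# All figures below are rough assumptions, not benchmarks.

USB3_GBPS = 5.0                 # raw SuperSpeed signaling rate, Gbit/s
ENCODING_EFFICIENCY = 0.8       # 8b/10b line coding leaves ~80% usable

# usable bandwidth in MB/s: bits -> usable bits -> bytes -> megabytes
usable_mb_s = USB3_GBPS * 1e9 * ENCODING_EFFICIENCY / 8 / 1e6  # = 500.0

HDD_MB_S = 200                  # optimistic sequential rate for one HDD
drives_at_full_speed = usable_mb_s // HDD_MB_S                 # = 2.0

print(f"usable bus bandwidth ~ {usable_mb_s:.0f} MB/s")
print(f"HDDs served without throttling ~ {int(drives_at_full_speed)}")
```

So two spinning drives fit comfortably on one host; a third (or SSDs) is where contention would begin.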

> Re: ARM with a ton of DDR4 and I/O options: Not yet, but within a year or so, the M1 Mac Minis should be available at reasonable prices, as they get replaced with newer, shinier Apple products. They have a beefy ARMv8A, 8 or 16GB of very fast ram, and tons of wild I/O options. Should be booting Linux on the metal smoothly by then too. Just something to keep in mind. Too expensive to replace most of the low end SBCs, but as a more powerful central point/coordinator, maybe interesting.

That sounds nice.  How much power do those draw?  Not sure what the
cost will end up being like - Macs aren't exactly reputed for being
cheap.  Obviously something with 16GB of RAM on it will cost more than
something with 2GB of RAM on it, but I'd prefer not to pay a lot for
the stuff I end up tossing when I wipe the hard drive.

The main thing that led me down the SBC path was the power draw.  I
realized that even cheap used PCs that sit around pulling 100W+ 24x7
actually cost a substantial amount in electricity.  If I want half a
dozen just sitting idle most of the time to run a distributed
filesystem, that is a lot of power use.  Even if it is a bit
improvised, the ARM hardware saves a lot of power - those things tend
to have PSUs that can't even supply more than about 15W, and of course
at idle they use far less.
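The electricity math is easy to sanity-check. The rate below is an assumed $0.15/kWh; substitute your own:

```python
# Annual cost of an always-on load. RATE_PER_KWH is an assumption;
# plug in your local electricity rate.

RATE_PER_KWH = 0.15   # USD per kWh, assumed

def annual_cost(watts, rate=RATE_PER_KWH):
    """Cost of running a constant load of `watts` for one year."""
    kwh_per_year = watts * 24 * 365 / 1000
    return kwh_per_year * rate

pc = annual_cost(100)   # 100 W used desktop, idle 24x7
sbc = annual_cost(5)    # typical ARM SBC at idle

print(f"one 100 W PC:  ${pc:.2f}/yr")
print(f"one 5 W SBC:   ${sbc:.2f}/yr")
print(f"six of each:   ${6 * pc:.2f}/yr vs ${6 * sbc:.2f}/yr")
```

Six idle desktops run to roughly $800/yr at that rate, versus about $40/yr for six SBCs - which is the gap that justifies the improvisation.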

> If you'd like to do more involved stuff, I still say a refurb/discarded desktop PC would give massively more performance and options. Depends on what you're planning to do.

Yeah, no argument there.  My application is mostly media storage, so I
don't need a ton of throughput so much as capacity.  Even so, having
half a dozen of these on the network basically ensures that the
Gigabit LAN is always the bottleneck.
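A quick sketch of why the wire loses - the per-disk rate and overhead factor are assumptions:

```python
# Aggregate disk bandwidth of six SBC nodes vs one gigabit link.
# Figures are illustrative assumptions.

GIGE_MB_S = 1e9 / 8 / 1e6 * 0.94   # ~117 MB/s after Ethernet/TCP overhead
NODES = 6
DISK_MB_S = 150                    # modest USB HDD sequential rate

aggregate = NODES * DISK_MB_S      # 900 MB/s of raw disk bandwidth
print(f"wire: {GIGE_MB_S:.0f} MB/s, disks combined: {aggregate} MB/s")
print(f"oversubscription ~ {aggregate / GIGE_MB_S:.1f}x")
```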

If you want to do IOPS for block storage, then heaven help you.  That's
where the server grade hardware shines with dozens of PCIe lanes so
that you can stick a bunch of NVMes in one, or HBAs if you must use
SATA.  Ceph has a bunch of recommendations on their website for
standardized configurations, but they're going to lead to websites
that say "call for quote."
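The IOPS gap is the whole story there. These are order-of-magnitude figures from memory, not benchmarks of any particular device:

```python
# Rough 4 KiB random-read IOPS by device class (assumed, order-of-
# magnitude figures) - why block storage pushes you toward NVMe.

IOPS = {
    "7200rpm SATA HDD": 150,
    "SATA SSD":         80_000,
    "NVMe SSD":         500_000,
}

for dev, iops in IOPS.items():
    mb_s = iops * 4096 / 1e6   # throughput implied at 4 KiB per op
    print(f"{dev:18s} ~{iops:>7,} IOPS ~ {mb_s:,.0f} MB/s at 4 KiB")
```

Three orders of magnitude between a spinning disk and one NVMe is why those Ceph reference configs want a PCIe lane budget, not a USB hub.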

Philadelphia Linux Users Group         --