K.S. Bhaskar via plug on 4 Aug 2021 12:02:39 -0700



Re: [PLUG] Recommendation for NAS with Solid State Drives


The Backblaze data for Q1 2021 (https://www.backblaze.com/blog/backblaze-hard-drive-stats-q1-2021/) suggests that at least in some applications, SSDs have a lower failure rate than HDDs.

Regards
– Bhaskar

Windows: COVID for Computers


On Wed, Aug 4, 2021 at 1:10 PM Keith C. Perry via plug <plug@lists.phillylinux.org> wrote:
Well, I'm not going to say the slowness is in your head but consider this...

Any network attached storage is going to have several layers your data has to traverse, so there are a number of things to optimize for maximum throughput.  You can always benchmark performance with fio and iperf3 (if you can install and run them on the QNAP).
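To make that concrete, here's a rough sketch of the kind of benchmarks I mean. The hostname and mount point are placeholders for your setup, and this assumes you can get iperf3 running on (or next to) the NAS:

```shell
#!/bin/sh
# Network throughput first: run "iperf3 -s" on the NAS side,
# then from a client on your LAN:
iperf3 -c nas.local -t 30

# Then storage throughput: sequential read against the NAS mount.
# --direct=1 bypasses the client page cache so you measure the
# storage path, not local RAM.
fio --name=seqread --directory=/mnt/nas --rw=read --bs=1M \
    --size=1g --direct=1 --numjobs=1 --group_reporting
```

If the iperf3 number and the fio number are close, the network is your ceiling and faster disks won't show up in real use.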

At first glance, what I will say is that using SSDs across the board for storage, especially in a single device, is NOT something I would do.  They are more sensitive electrically than spinning rust, and when they fail, they fail hard and fast.  Sure, you can RAID them, but remember RAID doesn't mean you won't have catastrophic failures.  What it means is that if you have 1, or maybe 2 (if you use higher redundancy; with only 4 disks I would do RAID 10 long before RAID 6), disk-related failures, you'll be able to continue to operate until you replace the disks.  The problem with RAID is that 1) you're at risk until you do that, assuming you detect the failure in time (now re-read what I just said about SSDs), and 2) rebuilding degraded RAIDs doesn't always go according to plan.  You need backups, and you need a backup of the ***entire*** RAID if you are going this route (and you said data survivability is paramount).
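For reference, if you were rolling your own on Linux instead of letting the QNAP UI hide it, that 4-disk RAID 10 layout looks something like this with mdadm (device names are placeholders, and this wipes those disks):

```shell
#!/bin/sh
# Create a 4-disk RAID 10 array: two mirrored pairs, striped together.
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Watch resync/rebuild progress.  This window -- between a disk failing
# and the rebuild finishing -- is exactly the at-risk period described
# above, where a second failure in the wrong pair kills the array.
cat /proc/mdstat
```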

(Enterprise, we have a new problem: what do I back my NAS up to???)

Second, you said you don't want to roll your own.  Unless you tell me you are already running a network bond for the two ports on the QNAP, I would NOT move to SSDs yet.  Chances are, if you are seeing performance issues, you are running into network bandwidth limits long before you are running into actual storage limits.  If you're flooding both those ports, you probably need a device with 4 x 1Gb/s, 1 x 10Gb/s, or 2 x 20Gb/s of network capability.  You always max out the network first because that is the slowest part of the storage system.  A "slow" HDD is around 80MB/s, which is about 640Mb/s of raw throughput, so a single disk can use most of a 1Gb/s link on its own, and four of them can easily saturate a bonded pair of 1Gb/s ports.  A slow SSD at 200MB/s (about 1.6Gb/s) isn't going to help a network band-limited situation; it just hits the same wall faster.  This is raw math and the specific workload does matter, but that is why you optimize the slowest things first.  It is possible that SSDs might be faster for a single job running on a "quiet" network, but in my experience those days are over.  Consumer / home networks are pushing current deployments, and the 2020 COVID-19 new world reality exposed that better than anything ever.  I spent a lot of time talking to a lot of people, gently trying to tell them that their home networks were in need of "improvements"  :D
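The back-of-the-envelope conversion here is just a factor of 8 (1 byte = 8 bits), ignoring protocol overhead:

```shell
#!/bin/sh
# Disk throughput (MB/s) to line rate (Mb/s): multiply by 8.
hdd=$(( 80 * 8 ))    # a slow HDD at 80 MB/s
ssd=$(( 200 * 8 ))   # a slow SSD at 200 MB/s
echo "HDD: ${hdd} Mb/s"   # 640 Mb/s -- most of a 1Gb/s link already
echo "SSD: ${ssd} Mb/s"   # 1600 Mb/s -- past 1Gb/s entirely
```

So on gigabit, one spinning disk nearly fills the pipe, and an SSD only moves the bottleneck squarely onto the network.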

To give a personal experience... you've probably seen me and others clamor away about LizardFS (or other network storage systems).  I recently upgraded my 3-node cluster from 4 x 1Gb/s to 2 x 10Gb/s.  For a while, I was back down on a 1 x 1Gb/s network and my cluster could NOT keep up.  It was painfully slow... so slow that I had to reschedule the project until I could take everything down and properly get all the servers networked.  I should also mention that the reason I made the move was not that the 4 x 1Gb/s deployment was slow, but that I wanted to isolate the storage net on a 10Gb/s switch.  The main switch is still bonded to the storage switch at 4 x 1Gb/s.  I'm not having any problems with this at all (and the new switch with 2 x 10Gb/s I want is sold out anyway).

I say all that to say that there is nothing wrong with HDDs, and this whole SSD thing has been a bit of shiny-new-thing marketing.  People had to learn this the hard way.  SSDs are not universally better.  They have their place, sure, but a 100% replacement for HDDs in a durable storage system is not one of them.  The only places where I would use them are caching (which is where they tend to get used in the HPC world), temp space, or volatile data.  Anywhere I have SSD-type storage that I care about, I either give it priority for backup (i.e. laptops, servers with NVMe) or the SSD storage is secondary.

If you move forward with this, I would implore you to at least have copies of your most important QNAP data elsewhere.
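A minimal sketch of what I mean, assuming rsync over SSH to some second machine (the share path and hostname are made up; the QNAP's own backup apps can do the equivalent):

```shell
#!/bin/sh
# Mirror the shares you cannot lose to a machine that is NOT the NAS.
# -a preserves permissions and timestamps, -v is verbose; --delete
# mirrors removals too -- leave it off if you want an additive archive.
rsync -av --delete /share/important/ backupbox:/backups/qnap-important/
```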



~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~
Keith C. Perry, MS E.E.
Managing Member, DAO Technologies LLC
(O) +1.215.525.4165 x2033
(M) +1.215.432.5167
www.daotechnologies.com

----- Original Message -----
From: "Thomas Delrue via plug" <plug@lists.phillylinux.org>
To: "PLUG Mailing List" <plug@lists.phillylinux.org>
Sent: Monday, August 2, 2021 10:31:11 PM
Subject: [PLUG] Recommendation for NAS with Solid State Drives

I wanted to tap into the hive-mind and collect some thoughts, dos &
don'ts, and just general wisdom on NASes, and specifically NASes that
do not use spinning rust disks.

I currently have a QNAP NAS (TS-431) consisting of some spinning rust
disks. It's working well, but starting to get a bit on the old side and
so in the spirit of "replace it before it makes you replace it", I want
to look into a new one.

I'm happy with QNAP (this is not a device that is accessible from
anywhere else but my local network with no plans to change that) and I
like some of their easy features that make it super easy to - for
instance - back up in (client-side) encrypted form to a cloud storage
provider (off-site) on an automated schedule.

But... the spinning rust disks(*) are slow. Hence my question:

Does anyone have experience and/or recommendations for a good NAS with
Solid State drives? Has anyone done this or am I crazy for even wanting
to do this?

Are there folks running a QNAP device with a properly RAIDed solid state
drive-based array? What are the things to keep in mind, do or
specifically not do?

In terms of requirements, I have the following:

- I'm not interested in "building it myself" at this moment, I prefer a
device in which I stuff raw storage media.

- Data survivability is _paramount_, or in other words: I'm totally fine
"losing 'total usable space'" and having to stuff bigger or more disks
in the device if it means that more individual disks can fail before my
whole thing fails. (I think I set up my device with RAID6 back in the
day but I don't remember 100% - so feel free to tell me this was unwise
or the source of my problems as well)

- In terms of how to access the data: NFS is the only real requirement,
no samba, or whatever. But scp would be nice too...

- A nice UI is ... nice

- An easy mechanism to write things in encrypted form to a cloud service
provider such as AWS Glacier for the "we've had an absolutely major
disaster" kinda deal

- I currently have 4 drives, but I'm open to more.

- Being able to SSH into the device and maybe have it even run some
stuff on cronjobs or in containers would be cool but definitely a lower
priority.

Like I said: I'm fine sticking with QNAP. I just don't know if I'm crazy
for wanting to stuff it with solid state drives (would NVMe be doable?)
instead of spinning rust ones...or whether this is doable at all.
Are there other considerations that I should take into account?

Are there other vendors that I should look at that specifically offer a
device like this?

Thoughts, recommendations, warnings, horror & success stories are all
welcomed!

--
Thanks
Thomas

(*) The spinning rust disks seem slow to me but maybe it's because I set
up 4 disks with RAID6?


___________________________________________________________________________
Philadelphia Linux Users Group         --        http://www.phillylinux.org
Announcements - http://lists.phillylinux.org/mailman/listinfo/plug-announce
General Discussion  --   http://lists.phillylinux.org/mailman/listinfo/plug