Rich Freeman via plug on 14 Jul 2021 08:51:24 -0700


Re: [PLUG] New Hard Drive Testing Practices

On Wed, Jul 14, 2021 at 9:25 AM LeRoy Cressy via plug
<> wrote:
> In a commercial setting or even a home setting wouldn't it be prudent to have drives on hand if you need to replace one?

That isn't a bad idea.  Most of my drives now are on lizardfs, and
free space basically acts as a hot spare: if a drive fails, the
cluster just redistributes its data to the surviving drives.  So a
dedicated spare is a little less necessary there.  Much like a hot
spare, that rebuild will start before I've even read the alert about
the drive failure.
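As a sketch of how that works (not from the post itself): the re-replication behavior comes from the replication goal set on the mount.  The mount point /mnt/lizardfs and the goal value of 2 below are assumptions for illustration.

```shell
# Show the current replication goal for files under the mount
lizardfs getgoal /mnt/lizardfs

# Recursively set a goal of 2 copies; with enough free space, any
# single-disk failure can then be healed automatically by
# re-replicating the lost chunks onto the surviving disks
lizardfs setgoal -r 2 /mnt/lizardfs

# Inspect which chunkservers hold a particular file's chunks
lizardfs fileinfo /mnt/lizardfs/path/to/file
```

The effect is that unused capacity across the cluster plays the role a dedicated hot spare would play in a traditional RAID.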

This latest failure was on zfs, which has a pretty rigid model - more
so than mdadm.  So I would have needed a dedicated spare to cover
this, and I don't have many drives on zfs now, so failures are less
common there.  That leads to...
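For reference, a dedicated spare in zfs looks roughly like this - the pool name "tank" and the device names are hypothetical placeholders, not anything from the post:

```shell
# Attach a dedicated hot spare to an existing pool; with autoreplace
# and the zed daemon configured, zfs can pull it in on failure
zpool add tank spare /dev/sdd

# Without a spare, replacing a failed disk is a manual step:
zpool replace tank /dev/sdb /dev/sdd

# Watch the resilver progress
zpool status tank
```

Either way, zfs resilvers onto a specific replacement device, rather than spreading the data across free space the way lizardfs does.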

> I personally always do a bad block test of every new drive I buy.  After that test, the drive is put away in my safe until it is needed.  When it is needed you can be fairly sure that, since the drive has already been tested, it is ready to be installed.

This isn't a bad approach as long as you know you'll install the drive
fairly soon.  Otherwise you're basically wasting warranty period with
drives sitting in boxes.
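A typical bad block test of a new drive might look like the following - /dev/sdX is a placeholder, and you should triple-check the device name, since the -w option destroys all data on the drive:

```shell
# Destructive four-pass write/read pattern test of the entire drive
# (-w write-mode test, -s show progress, -v verbose)
badblocks -wsv /dev/sdX

# Follow up with a SMART extended self-test, then review the results
smartctl -t long /dev/sdX
smartctl -a /dev/sdX
```

On large modern drives the full badblocks pass can take a day or more, which is part of why it makes sense to do it once up front rather than when a replacement is urgently needed.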

My zfs RAID has only 4 drives, so they don't fail that often there.
My lizardfs cluster has more drives, but basically already has an
effective hot spare built in.

Philadelphia Linux Users Group