Rich Freeman on 26 Apr 2016 11:00:00 -0700


Re: [PLUG] [plug-announce] TOMORROW - Tue, Apr 19, 2016: PLUG North - "Linux Containers" by Jason Plum and Rich Freeman (*6:30 pm* at CoreDial in Blue Bell)

On Tue, Apr 26, 2016 at 12:07 PM, Keith C. Perry
<> wrote:
> "Both zfs and btrfs redundantly store ALL metadata (I know btrfs even
> does this by default on a single disk, but with multiple disks it
> ensures redundant copies are on different disks), and everything is
> checked on read."
> That is my understanding as well.  CRC's are currently only applied to the metadata.

Both zfs and btrfs checksum ALL data stored on disk.  Any mismatch
will trigger an attempt to recover using redundant data, or pass along
an error if recovery is not possible.  The only way silent corruption
won't at least be detected by zfs/btrfs is if it causes a hash
collision (I doubt the hash function is super-strong, but an error is
unlikely to sneak through unless somebody is deliberately tampering).
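A userspace sketch of the same record-then-verify idea (btrfs does
this per block in the kernel, using crc32c by default; sha256sum here
is only an analogy, not what the filesystem runs):

```shell
# Analogy for what btrfs/zfs do per block: store a checksum at write
# time, verify it at read time.
tmp=$(mktemp -d)
printf 'important data' > "$tmp/block"
sha256sum "$tmp/block" > "$tmp/block.sum"   # "write": record checksum

# Simulate silent corruption: change a byte behind the checksum's back
printf 'Important data' > "$tmp/block"

# "read": verify before trusting the data
if sha256sum -c "$tmp/block.sum" >/dev/null 2>&1; then
    echo "data ok"
else
    echo "corruption detected"   # btrfs would now try a redundant copy
fi
rm -rf "$tmp"
```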

By default on a single disk btrfs will store one copy of data and two
copies of metadata.  On multiple disks it will use whatever mode you
set, such as raid1 for both, or raid5, or whatever.  You could in
theory raid1 your metadata and not have redundant data, which would
obviously let you store a lot more, but with a lot less fault
tolerance.

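Those profiles are picked at mkfs time (or converted later).  A sketch
using flags from btrfs-progs; /dev/sdb, /dev/sdc, and /mnt are
placeholders:

```shell
# Single disk, stating the defaults explicitly:
# one copy of data, duplicated metadata
mkfs.btrfs -d single -m dup /dev/sdb

# Two disks, raid1 for both data and metadata
mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc

# The "more space, less fault tolerance" mix: redundant metadata only
mkfs.btrfs -d single -m raid1 /dev/sdb /dev/sdc

# Profiles can also be converted on a mounted filesystem via balance
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt
```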
In raid1 mode, though, btrfs won't ever lose anything if you have
either a silent or a hard failure on any single device.

Well, that's assuming it doesn't eat your data just because it has the
munchies.  That's the thing with experimental filesystems.  :)
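Note that the checksum verification happens on read, so blocks you
never read go unchecked; a periodic scrub reads everything and, in
raid1 mode, rewrites any bad copy from the good mirror.  A sketch
(needs root; /mnt is a placeholder mountpoint):

```shell
# Read and verify every block, repairing from redundant copies
btrfs scrub start -B /mnt    # -B: stay in foreground, print stats

# Check progress / results, including corrected vs uncorrectable errors
btrfs scrub status /mnt
```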

I keep a full rsnapshot backup of my btrfs filesystems on ext4 (not
even trusting btrfs send).  I haven't needed it in quite a while now,
but I did run into some kind of bug last week (first time in months)
which forced me to boot back to a 3.18 kernel, mount the filesystem,
and then just let it sit for a while and clean up the log.  I think
the trigger there was a really high level of write activity combined
with a bunch of snapshot deletes.  It tends to confirm my suspicion
that 3.18 is probably still the most stable experience on btrfs right
now, though 4.1 has been fine for the most part.  I wouldn't go around
telling people to run their server farms on it just yet.
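For reference, the rsnapshot side of that setup is only a few config
lines plus a cron entry.  A minimal sketch, with placeholder paths and
retention counts (older rsnapshot versions spell "retain" as
"interval"; config fields must be TAB-separated):

```shell
# /etc/rsnapshot.conf excerpt (fields separated by TABs, not spaces):
#   snapshot_root   /mnt/ext4-backup/
#   retain          daily   7
#   retain          weekly  4
#   backup          /home/  localhost/

# Then run the smallest interval from cron, e.g. nightly:
rsnapshot daily
```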

Philadelphia Linux Users Group