Rich Freeman on 31 Jan 2015 13:50:04 -0800


Re: [PLUG] zfs vs btrfs vs …

On Sat, Jan 31, 2015 at 4:16 PM, R. McIntosh <> wrote:
> I think the instability of BTRFS is being *vastly* overstated in this
> thread. I certainly wouldn't use it on a major production server, but for
> common usage it is completely stable in my experience, having been using it
> as a daily driver for quite some time now. I honestly would consider the
> notion that you need to be *very* careful when using BTRFS to be somewhat
> FUD. You should be as careful as anytime you're doing filesystem work, IMHO.

I've run into my share of btrfs glitches over the last 1.5 years, but
I've yet to actually lose data.  The sorts of issues I've hit involve
things like kernel panics and the inability to write to a disk
(typically when it runs out of chunks).  In every case the filesystem
was at least read-only mountable, with no damage to any actual data.
Still, if this were a production fileserver, just the
issues/downtime/etc would have been a real concern.  I run it mostly
to experiment with it, and also because the snapshotting is REALLY
handy.  Snapshots are writable (zfs gets the same effect with clones),
so for things like containers I can just create a snapshot before
messing with one, and the result of a snapshot looks just like the
result of a "cp -a" command except that it takes no time.  I have a
full daily rsync of my btrfs filesystems on ext4, so it doesn't keep
me up at night.
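For illustration, the snapshot-before-messing workflow plus the rsync
safety net might look like this (the subvolume and backup paths are
made up; adjust to your own layout):

```shell
# Create a writable snapshot of a container's subvolume before
# messing with it (paths here are hypothetical).
btrfs subvolume snapshot /srv/containers/web /srv/containers/web-pre-upgrade

# If the experiment goes wrong, the snapshot is a full writable copy:
# delete the broken subvolume and rename the snapshot back into place.
btrfs subvolume delete /srv/containers/web
mv /srv/containers/web-pre-upgrade /srv/containers/web

# The belt-and-suspenders part: a daily rsync of the btrfs data onto
# a plain ext4 volume mounted elsewhere.
rsync -aHAX --delete /srv/containers/ /mnt/ext4-backup/containers/
```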

I'd definitely test the performance in your case, though.  One thing
btrfs does struggle with is lots of internal in-place writes, as you
will often find in databases and VM images.  Btrfs tends to fragment
these very heavily, which does impact performance.  You can defrag
them, however, and that's worth looking into (I think bedup does it
automatically, but it has had bugs at various points, so I'm not sure
of its current status).  Since ZFS isn't reported to have these kinds
of fragmentation issues, I suspect the problem isn't fundamental to
the design and will be worked out eventually, but that certainly won't
happen anytime soon.  It probably wouldn't hurt to run a performance
test of your database after subjecting it to numerous writes,
especially with a few snapshots around.
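As a sketch of what checking and fixing that fragmentation looks like
(the database path is hypothetical):

```shell
# See how fragmented a database file has become; filefrag (from
# e2fsprogs) reports extent counts on btrfs too.
filefrag /var/lib/mysql/ibdata1

# Defragment it in place; -t 32M tells btrfs to leave extents
# larger than 32 MiB alone rather than rewriting everything.
btrfs filesystem defragment -t 32M /var/lib/mysql/ibdata1

# Alternatively, set the No_COW attribute on the directory; files
# created there afterwards skip copy-on-write and don't fragment
# this way (at the cost of losing btrfs checksumming for them).
chattr +C /var/lib/mysql
```

Note that +C only affects files created after the attribute is set,
so it's something to do before loading the database, not after.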

I think the important thing is to understand your use case.  If these
are just databases for playing around in, and losing one of them isn't
a big deal, then your downside is pretty small, and the snapshotting
could potentially help you out.  That is especially true if your use
case is to create and delete 50 accounts a day and no account lasts
more than a few days: keep a copy of /etc/passwd, and if the /home
partition gets clobbered you can wipe, restore, and recreate your 300
snapshots or whatever.
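The wipe/restore path in that scenario could be as simple as
something like this (device and backup paths are hypothetical, and
this assumes the daily rsync copy mentioned above):

```shell
# Recreate the clobbered /home filesystem from scratch...
mkfs.btrfs -f /dev/sdb1
mount /dev/sdb1 /home

# ...restore the data from the plain-ext4 rsync copy...
rsync -aHAX /mnt/ext4-backup/home/ /home/

# ...and put back the saved account database.
cp /mnt/ext4-backup/etc/passwd /etc/passwd
```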

Philadelphia Linux Users Group