Rich Freeman on 9 May 2014 04:07:48 -0700


Re: [PLUG] fcheck not nice/ionice?

On Fri, May 9, 2014 at 12:25 AM, JP Vossen <> wrote:
> '/etc/cron.d/fcheck' looks like so:
> #
> # Regular cron job for the fcheck package
> #
> 30 */2  * * *   root    test -x /usr/sbin/fcheck && if ! nice ionice -c3
> /usr/sbin/fcheck -asxrf /etc/fcheck/fcheck.cfg >/var/run/fcheck.out 2>&1;
> then mailx -s "ALERT: [fcheck] `hostname --fqdn`" root </var/run/fcheck.out
> ; /usr/sbin/fcheck -cadsxlf /etc/fcheck/fcheck.cfg ; fi ; rm -f
> /var/run/fcheck.out
> That seems like it should be nice'd, but it's not.  I have to admit I
> haven't done a lot of homework on this one, but I've Googled a bit.  Anyone
> have a clue?

I have no idea what fcheck does, but the command above should put it
in the idle IO priority class.
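
A quick way to confirm that is to run a throwaway command under `-c3` and ask ionice what class the child actually landed in (a sketch, assuming the util-linux ionice; idle class needs no special privileges, unlike realtime):

```shell
# Start a child in the idle I/O class, exactly as the cron job does,
# then have the child report its own scheduling class.
out=$(ionice -c3 sh -c 'ionice -p $$')
echo "$out"   # prints "idle"
```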

However, to be honest I've never found ionice to be all that effective
in general.  Back when I used to run lvm+mdadm I think all the
layers/buffering resulted in the prioritization getting lost by the
time data got written to disk.  With btrfs I mentioned in my talk a
tendency to "flush and wait", and this applies just as much to ionice'd
loads (a batch of writes will all return in near-zero time, filling up
the cache/log, and then at the next checkpoint the filesystem grinds
to a crawl while it tries to finish writing out all that data that was
buffered).

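You can see the two halves of that pattern from the shell (a sketch; the file path and sizes are just examples):

```shell
# A burst of buffered writes returns almost immediately -- the data only
# reaches the page cache, not the disk.
dd if=/dev/zero of=/tmp/flushtest bs=1M count=64 status=none
# The real cost is paid at flush time: sync blocks until the dirty
# pages are actually written out, which is where the stall appears.
time sync
rm -f /tmp/flushtest
```
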
I don't know the internals of how ionice actually works, but I suspect
that there are basically multiple queues involved, and that ionice is
governing access to a queue which isn't really in contention, which
then feeds into some other queue which is in contention but by the
time the request gets that far the kernel has already reported a
successful write and is under the gun to honor its commitment to get
the data on disk in a certain time.
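
That "certain time" is visible in the writeback sysctls, assuming a Linux procfs: dirty pages older than the expiry deadline get flushed by the writeback threads no matter which task originally submitted them, which is one plausible point where the submitter's I/O priority gets lost:

```shell
# Dirty pages older than this (in hundredths of a second) must be
# written back regardless of the submitting task's I/O priority:
cat /proc/sys/vm/dirty_expire_centisecs
# How often the writeback threads wake up to enforce that deadline:
cat /proc/sys/vm/dirty_writeback_centisecs
# How much of memory may be dirty before writers get throttled:
cat /proc/sys/vm/dirty_ratio
```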

It is a bit frustrating, because if I run a task under ionice it means
I don't care if it takes two weeks to finish - I don't want my other
tasks starved for disk.  And yet, I stare at my drives at 100% usage,
unresponsiveness, high loads, and sometimes mythtv telling me that it
has to drop a recording (even running at ionice -c1 - realtime).

So, I suspect that the real issue here is that ionice isn't really
robustly implemented in the kernel - as I understand it, the priority
classes are only honored by I/O schedulers that implement them (CFQ
does; deadline and noop ignore them entirely).  However, I'm certainly
open to illumination here if anybody has something to share - I'd love
to be able to use it more reliably.

Philadelphia Linux Users Group         --