Message-ID: <20090406060542.GA7376@mit.edu>
Date: Mon, 6 Apr 2009 02:05:42 -0400
From: Theodore Tso <tytso@....edu>
To: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Arjan van de Ven <arjan@...radead.org>,
Jens Axboe <jens.axboe@...cle.com>,
Linux Kernel Developers List <linux-kernel@...r.kernel.org>,
Ext4 Developers List <linux-ext4@...r.kernel.org>
Subject: Re: [GIT PULL] Ext3 latency fixes
On Sun, Apr 05, 2009 at 10:01:06AM -0700, Linus Torvalds wrote:
> Of course, different IO schedulers react differently to that whole "sync
> vs unplug" thing. I think CFQ is the only one that actually cares about
> the "sync" bit (using different queues for sync vs async).
It looks like AS and CFQ both care about the "sync" bit; they both use
rq_is_sync defined in include/linux/blkdev.h.  Deadline apparently
only distinguishes between read and write requests, and not whether
they are considered synchronous or not.  The noop scheduler obviously
doesn't care either.  :-)
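For what it's worth, the policy rq_is_sync() encodes can be sketched in a
few lines of stand-alone C.  This is illustrative only, not the kernel
macro; fake_request, fake_rq_is_sync and the field names below are made
up.  The idea is simply that a read is always treated as synchronous,
while a write counts as synchronous only when the submitter flagged it
that way:

#include <stdbool.h>
#include <stdio.h>

enum data_dir { DIR_READ, DIR_WRITE };

struct fake_request {
	enum data_dir dir;	/* read vs. write */
	bool sync_flag;		/* stands in for the request's sync bit */
};

/* Reads are implicitly sync; writes are sync only if flagged. */
static bool fake_rq_is_sync(const struct fake_request *rq)
{
	return rq->dir == DIR_READ || rq->sync_flag;
}

int main(void)
{
	struct fake_request read_rq  = { DIR_READ,  false };
	struct fake_request write_rq = { DIR_WRITE, false };
	struct fake_request sync_wr  = { DIR_WRITE, true  };

	printf("read:       %s\n", fake_rq_is_sync(&read_rq)  ? "sync" : "async");
	printf("write:      %s\n", fake_rq_is_sync(&write_rq) ? "sync" : "async");
	printf("sync write: %s\n", fake_rq_is_sync(&sync_wr)  ? "sync" : "async");
	return 0;
}

So for deadline and noop a flagged write and a plain write end up being
treated the same; the distinction only matters to the schedulers that
actually look at the sync bit.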
> The other schedulers only care about the plugging. So the patch
> below really doesn't make much sense as-is, because as things are
> right now, the scheduler behaviors are so different for the
> unplug-vs-sync thing that no sane user can ever know whether they
> should use WRITE_SYNC (== higher priority queueing for CFQ, no-op
> for others) or WRITE_UNPLUG (unplug on all, and additionally higher
> priority for CFQ).
Well, if the deadline scheduler ignores the SYNC bit, it would still
make sense for it to only unplug the queue after the commit block,
and not for any of the other writes to the journal. Unplugging after
every synchronous write is going to lead to a performance problem,
which I've demonstrated using the fopen/fprintf/fsync/fclose scenario
that Jens pointed me at.
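The test case is essentially a loop like the following (a minimal
sketch; the iteration count and filename are made up, and I've added an
fflush so the data is actually in the kernel before the fsync):

#include <stdio.h>
#include <unistd.h>

int main(void)
{
	for (int i = 0; i < 1000; i++) {
		FILE *f = fopen("testfile", "w");
		if (!f) {
			perror("fopen");
			return 1;
		}
		fprintf(f, "iteration %d\n", i);
		fflush(f);		/* push the stdio buffer to the kernel */
		fsync(fileno(f));	/* force the data and journal commit out */
		fclose(f);
	}
	return 0;
}

If the queue gets unplugged on every one of those writes instead of
only when the commit block goes out, the disk sees a stream of small
requests dispatched one at a time, which is where the performance
problem shows up.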
- Ted