[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <20090803075933.GI12579@kernel.dk>
Date: Mon, 3 Aug 2009 09:59:33 +0200
From: Jens Axboe <jens.axboe@...cle.com>
To: Wu Fengguang <fengguang.wu@...el.com>
Cc: Chris Mason <chris.mason@...cle.com>,
Lars Ellenberg <lars.ellenberg@...bit.com>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
linux-fsdevel@...r.kernel.org, dm-devel@...hat.com,
Neil Brown <neilb@...e.de>
Subject: Re: Why does __do_page_cache_readahead submit READ, not READA?
On Mon, Aug 03 2009, Wu Fengguang wrote:
> On Thu, Jul 30, 2009 at 08:06:49AM +0200, Jens Axboe wrote:
> > On Wed, Jul 29 2009, Chris Mason wrote:
> > > On Wed, Jul 29, 2009 at 11:18:45PM +0200, Jens Axboe wrote:
> > > > On Wed, Jul 29 2009, Lars Ellenberg wrote:
> > > > > I naively assumed, from the "readahead" in the name, that readahead
> > > > > would be submitting READA bios. It does not.
> > > > >
> > > > > I recently did some statistics on how many READ and READA requests
> > > > > we actually see on the block device level.
> > > > > I was suprised that READA is basically only used for file system
> > > > > internal meta data (and not even for all file systems),
> > > > > but _never_ for file data.
> > > > >
> > > > > A simple
> > > > > dd if=bigfile of=/dev/null bs=4k count=1
> > > > > will absolutely cause readahead of the configured amount, no problem.
> > > > > But on the block device level, these are READ requests, where I'd
> > > > > expected them to be READA requests, based on the name.
> > > > >
> > > > > This is because __do_page_cache_readahead() calls read_pages(),
> > > > > which in turn is mapping->a_ops->readpages(), or, as fallback,
> > > > > mapping->a_ops->readpage().
> > > > >
> > > > > On that level, all variants end up submitting as READ.
> > > > >
> > > > > This may even be intentional.
> > > > > But if so, I'd like to understand that.
> > > >
> > > > I don't think it's intentional, and if memory serves, we used to use
> > > > READA when submitting read-ahead. Not sure how best to improve the
> > > > situation, since (as you describe), we lose the read-ahead vs normal
> > > > read at that level. I did some experimentation some time ago for
> > > > flagging this, see:
> > > >
> > > > http://git.kernel.dk/?p=linux-2.6-block.git;a=commitdiff;h=16cfe64e3568cda412b3cf6b7b891331946b595e
> > > >
> > > > which should pass down READA properly.
> > >
> > > One of the problems in the past was that reada would fail if there
> > > wasn't a free request when we actually wanted it to go ahead and wait.
> > > Or something. We've switched it around a few times I think.
> >
> > Yes, we did used to do that, whether it was 2.2 or 2.4 I
> > don't recall :-)
> >
> > It should be safe to enable know, whether there's a prettier way
> > than the above, I don't know. It works by detecting the read-ahead
> > marker, but it's a bit of a fragile design.
>
> Another consideration is io-priority reversion and the overheads
> required to avoid it:
>
> readahead(pages A-Z) => READA IO for pages A-Z
> <short time later>
> read(page A) => blocked => find the request that contains page A
> and requeue/kick it as READ IO
>
> The page-to-request lookups are not always required but nevertheless
> the complexity and overheads won't be trivial.
>
> The page-to-request lookup feature would be also useful for "advanced"
> features like io-canceling (if implemented, hwpoison could be its
> first user ;)
I added that 3-4 years ago or so, to experiment with in-kernel
cancellation for things like truncate(). Tracking pages is not cheap,
and since the write cancelling wasn't really very sucessful, I didn't go
ahead with it.
So I'm not sure it's a viable alternative, even if we restricted it to
just tracking READA's, for instance.
But I don't think we have any priority inversion to worry about, at
least not from the CFQ perspective.
--
Jens Axboe
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
Powered by blists - more mailing lists