Message-ID: <20111130010604.GD11147@localhost>
Date: Wed, 30 Nov 2011 09:06:04 +0800
From: Wu Fengguang <fengguang.wu@...el.com>
To: Jan Kara <jack@...e.cz>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Andi Kleen <andi@...stfloor.org>,
Linux Memory Management List <linux-mm@...ck.org>,
"linux-fsdevel@...r.kernel.org" <linux-fsdevel@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 2/9] readahead: snap readahead request to EOF
> Hmm, wouldn't it be cleaner to do this already in ondemand_readahead()?
> All other updates of readahead window seem to be there.
Yeah, it's not that clean; however, the intention is to also cover the
other call site -- mmap read-around.
> Also shouldn't we
> take maximum readahead size into account? Reading 3/2 of max readahead
> window seems like a relatively big deal for large files...
Good point. The max readahead size is actually a must, in order to
prevent the readahead size from expanding forever in the backwards
reading case.
This limits the size expansion to 1/4 of the max readahead. That means,
if the next expected readahead size would be less than 1/4 of the max
size, it is merged into the current readahead window to avoid issuing
one small IO.

Backwards reading is not special-cased here because it is not frequent
anyway.

unsigned long ra_submit(struct file_ra_state *ra,
		       struct address_space *mapping, struct file *filp)
{
+	pgoff_t eof = ((i_size_read(mapping->host)-1) >> PAGE_CACHE_SHIFT) + 1;
+	pgoff_t start = ra->start;
+	unsigned long size = ra->size;
	int actual;

+	/* snap to EOF */
+	size += min(size, ra->ra_pages / 4);
+	if (start + size > eof) {
+		ra->size = eof - start;
+		ra->async_size = 0;
+	}
+
	actual = __do_page_cache_readahead(mapping, filp,
					ra->start, ra->size, ra->async_size);
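
To make the arithmetic concrete, here is a minimal user-space sketch of
the snapping logic above (the struct and the numbers are made up for
illustration; this is not the real file_ra_state or the kernel code):

	/* toy model of the snap-to-EOF logic, for illustration only */
	#include <stdio.h>

	struct toy_ra_state {		/* loosely modeled after file_ra_state */
		unsigned long start;	/* first page of the readahead window */
		unsigned long size;	/* window size in pages */
		unsigned long async_size;
		unsigned long ra_pages;	/* max readahead window in pages */
	};

	static void snap_to_eof(struct toy_ra_state *ra, unsigned long eof)
	{
		unsigned long size = ra->size;

		/* probe window: grown by at most 1/4 of the max readahead */
		size += (size < ra->ra_pages / 4) ? size : ra->ra_pages / 4;
		if (ra->start + size > eof) {
			ra->size = eof - ra->start; /* absorb the small tail IO */
			ra->async_size = 0;
		}
	}

	int main(void)
	{
		/* 1040-page file; window at page 1000, 32 pages, max 128 */
		struct toy_ra_state ra = {
			.start = 1000, .size = 32,
			.async_size = 16, .ra_pages = 128,
		};

		snap_to_eof(&ra, 1040);
		printf("size after snap: %lu\n", ra.size); /* prints 40 */
		return 0;
	}

With the same window but eof = 1100, the probed size (64 pages) still
ends before EOF, so ra->size stays at 32 and nothing changes; the
expansion only takes effect when it lets the tail of the file be merged
into the current IO.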
Thanks,
Fengguang