Message-ID: <20200218211228.GF24185@bombadil.infradead.org>
Date: Tue, 18 Feb 2020 13:12:28 -0800
From: Matthew Wilcox <willy@...radead.org>
To: Dave Chinner <david@...morbit.com>
Cc: linux-fsdevel@...r.kernel.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, linux-btrfs@...r.kernel.org,
linux-erofs@...ts.ozlabs.org, linux-ext4@...r.kernel.org,
linux-f2fs-devel@...ts.sourceforge.net, cluster-devel@...hat.com,
ocfs2-devel@....oracle.com, linux-xfs@...r.kernel.org
Subject: Re: [PATCH v6 11/19] btrfs: Convert from readpages to readahead
On Tue, Feb 18, 2020 at 05:57:58PM +1100, Dave Chinner wrote:
> On Mon, Feb 17, 2020 at 10:45:59AM -0800, Matthew Wilcox wrote:
> > From: "Matthew Wilcox (Oracle)" <willy@...radead.org>
> >
> > Use the new readahead operation in btrfs. Add a
> > readahead_for_each_batch() iterator to optimise the loop in the XArray.
> >
> > Signed-off-by: Matthew Wilcox (Oracle) <willy@...radead.org>
> > ---
> > fs/btrfs/extent_io.c | 46 +++++++++++++----------------------------
> > fs/btrfs/extent_io.h | 3 +--
> > fs/btrfs/inode.c | 16 +++++++-------
> > include/linux/pagemap.h | 27 ++++++++++++++++++++++++
> > 4 files changed, 49 insertions(+), 43 deletions(-)
> >
> > diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
> > index c0f202741e09..e97a6acd6f5d 100644
> > --- a/fs/btrfs/extent_io.c
> > +++ b/fs/btrfs/extent_io.c
> > @@ -4278,52 +4278,34 @@ int extent_writepages(struct address_space *mapping,
> > return ret;
> > }
> >
> > -int extent_readpages(struct address_space *mapping, struct list_head *pages,
> > - unsigned nr_pages)
> > +void extent_readahead(struct readahead_control *rac)
> > {
> > struct bio *bio = NULL;
> > unsigned long bio_flags = 0;
> > struct page *pagepool[16];
> > struct extent_map *em_cached = NULL;
> > - struct extent_io_tree *tree = &BTRFS_I(mapping->host)->io_tree;
> > - int nr = 0;
> > + struct extent_io_tree *tree = &BTRFS_I(rac->mapping->host)->io_tree;
> > u64 prev_em_start = (u64)-1;
> > + int nr;
> >
> > - while (!list_empty(pages)) {
> > - u64 contig_end = 0;
> > -
> > - for (nr = 0; nr < ARRAY_SIZE(pagepool) && !list_empty(pages);) {
> > - struct page *page = lru_to_page(pages);
> > -
> > - prefetchw(&page->flags);
> > - list_del(&page->lru);
> > - if (add_to_page_cache_lru(page, mapping, page->index,
> > - readahead_gfp_mask(mapping))) {
> > - put_page(page);
> > - break;
> > - }
> > -
> > - pagepool[nr++] = page;
> > - contig_end = page_offset(page) + PAGE_SIZE - 1;
> > - }
> > + readahead_for_each_batch(rac, pagepool, ARRAY_SIZE(pagepool), nr) {
> > + u64 contig_start = page_offset(pagepool[0]);
> > + u64 contig_end = page_offset(pagepool[nr - 1]) + PAGE_SIZE - 1;
>
> So this assumes a contiguous page range is returned, right?
Yes. That's documented in the readahead API and is the behaviour of
the code. I mean, btrfs asserts it's true while most of the rest of
the kernel is indifferent to it, but it's the documented and actual
behaviour.
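For reference, the contract this relies on looks roughly like this (a
sketch inferred from the fields the quoted hunks use, not the verbatim
header from the series):

	/*
	 * Sketch of the iteration state, based only on the fields the
	 * quoted hunks touch; not the verbatim header from this series.
	 * The pages at [_start, _start + _nr_pages) are in the page
	 * cache, locked, and at consecutive indices.
	 */
	struct readahead_control {
		struct address_space *mapping;
		/* Private to the readahead code: */
		pgoff_t _start;			/* first index of this run */
		unsigned int _nr_pages;		/* pages left in the run */
		unsigned int _batch_count;	/* pages in the last batch */
	};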
> >
> > - if (nr) {
> > - u64 contig_start = page_offset(pagepool[0]);
> > + ASSERT(contig_start + nr * PAGE_SIZE - 1 == contig_end);
>
> Ok, yes it does. :)
>
> I don't see how readahead_for_each_batch() guarantees that, though.
I ... don't see how it doesn't? We start at rac->_start and iterate
through the consecutive pages in the page cache. readahead_for_each_batch()
does assume that __do_page_cache_readahead() has its current behaviour
of putting the pages in the page cache in order, and kicks off a new
call to ->readahead() every time it has to skip an index for whatever
reason (e.g. the page is already in the page cache).
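In sketch form (a simplification of the mm/readahead.c loop, with
hypothetical helper names, not the actual code):

	/*
	 * Simplified sketch of __do_page_cache_readahead(): an index
	 * that is already populated ends the current run, so each call
	 * to ->readahead() only ever sees consecutive indices.
	 */
	for (i = 0; i < nr_to_read; i++) {
		pgoff_t index = start + i;

		if (xa_load(&mapping->i_pages, index)) {
			/* Already cached: dispatch the run so far... */
			read_pages(rac);	/* calls ->readahead() */
			/* ...and begin a new run past the hole. */
			rac->_start = index + 1;
			rac->_nr_pages = 0;
			continue;
		}
		add_page_to_cache(mapping, index);	/* hypothetical helper */
		rac->_nr_pages++;
	}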
> > - if (bio)
> > - return submit_one_bio(bio, 0, bio_flags);
> > - return 0;
> > + if (bio) {
> > + if (submit_one_bio(bio, 0, bio_flags))
> > + return;
> > + }
> > }
>
> Shouldn't that just be
>
> if (bio)
> submit_one_bio(bio, 0, bio_flags);
It should, but some overzealous person decided to mark submit_one_bio()
as __must_check, so I have to work around that.
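For anyone following along, __must_check expands to
__attribute__((warn_unused_result)), so a bare call warns under
-Wunused-result and the return value has to be consumed somehow.
A toy illustration (not btrfs code):

	/* Toy illustration of the __must_check constraint. */
	__attribute__((warn_unused_result))
	static int submit_demo(void)
	{
		return 0;
	}

	static void caller(void)
	{
		submit_demo();		/* warns: ignoring return value */

		if (submit_demo())	/* OK: the result is checked */
			return;
	}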
> > +static inline unsigned int readahead_page_batch(struct readahead_control *rac,
> > + struct page **array, unsigned int size)
> > +{
> > + unsigned int batch = 0;
>
> Confusing when put alongside rac->_batch_count counting the number
> of pages in the batch, and "batch" being the index into the page
> array, and they aren't the same counts....
Yes. Renamed to 'i'.
> > + XA_STATE(xas, &rac->mapping->i_pages, rac->_start);
> > + struct page *page;
> > +
> > + rac->_batch_count = 0;
> > + xas_for_each(&xas, page, rac->_start + rac->_nr_pages - 1) {
>
> That just iterates pages in the start,end doesn't it? What
> guarantees that this fills the array with a contiguous page range?
The behaviour of __do_page_cache_readahead(). David Howells also has a
use case for xas_for_each_contig(), so I'm going to add that soon.
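Until then, if the iterator were to enforce contiguity itself rather
than rely on that behaviour, the batch loop could end the batch at the
first gap, e.g. (untested sketch against the quoted
readahead_page_batch(), with 'batch' renamed to 'i' as above):

	/*
	 * Untested sketch: end the batch at the first non-consecutive
	 * index instead of trusting __do_page_cache_readahead()'s
	 * insertion order.
	 */
	xas_for_each(&xas, page, rac->_start + rac->_nr_pages - 1) {
		if (page->index != rac->_start + rac->_batch_count)
			break;		/* gap: stop the batch here */
		array[i++] = page;
		rac->_batch_count += hpage_nr_pages(page);
		...
	}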
> > + VM_BUG_ON_PAGE(!PageLocked(page), page);
> > + VM_BUG_ON_PAGE(PageTail(page), page);
> > + array[batch++] = page;
> > + rac->_batch_count += hpage_nr_pages(page);
> > + if (PageHead(page))
> > + xas_set(&xas, rac->_start + rac->_batch_count);
>
> What on earth does this do? Comments please!
/*
* The page cache isn't using multi-index entries yet,
* so xas_for_each() won't do the right thing for
* large pages. This can be removed once the page cache
* is converted.
*/
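To make the arithmetic concrete (hypothetical numbers):

	/*
	 * Worked example, hypothetical numbers: a compound page with
	 * hpage_nr_pages() == 512 at index 0 currently occupies 512
	 * separate entries (head + tails) because the cache doesn't
	 * use multi-index entries yet.  After the head is batched,
	 * _batch_count is 512, so xas_set(&xas, rac->_start + 512)
	 * jumps the iterator past the tail entries instead of handing
	 * each tail page back to the caller.
	 */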
> > +
> > + if (batch == size)
> > + break;
> > + }
> > +
> > + return batch;
> > +}
>
> Seems a bit big for an inline function.
It's only called by btrfs at the moment. If it gets more than one caller,
then sure, let's move it out of line.
> > +
> > +#define readahead_for_each_batch(rac, array, size, nr) \
> > + for (; (nr = readahead_page_batch(rac, array, size)); \
> > + readahead_next(rac))
>
> I had to go look at the caller to work out what "size" referred to
> here.
>
> This is complex enough that it needs proper API documentation.
How about just:
-#define readahead_for_each_batch(rac, array, size, nr) \
- for (; (nr = readahead_page_batch(rac, array, size)); \
+#define readahead_for_each_batch(rac, array, array_sz, nr) \
+ for (; (nr = readahead_page_batch(rac, array, array_sz)); \
(corresponding rename in readahead_page_batch). I mean, we could also
do:
#define readahead_for_each_batch(rac, array, nr) \
for (; (nr = readahead_page_batch(rac, array, ARRAY_SIZE(array))); \
readahead_next(rac))
making it less flexible, but easier to use.
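With the ARRAY_SIZE() variant, the btrfs caller would shrink to
something like (sketch):

	/* Sketch of extent_readahead() with the ARRAY_SIZE() variant. */
	struct page *pagepool[16];
	int nr;

	readahead_for_each_batch(rac, pagepool, nr) {
		u64 contig_start = page_offset(pagepool[0]);
		u64 contig_end = page_offset(pagepool[nr - 1]) + PAGE_SIZE - 1;

		/* submit the contiguous [contig_start, contig_end] range */
	}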