Message-ID: <20201015172721.31ef7d5e@canb.auug.org.au>
Date: Thu, 15 Oct 2020 17:27:21 +1100
From: Stephen Rothwell <sfr@...b.auug.org.au>
To: Andrew Morton <akpm@...ux-foundation.org>,
Jens Axboe <axboe@...nel.dk>
Cc: David Howells <dhowells@...hat.com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Linux Next Mailing List <linux-next@...r.kernel.org>,
"Matthew Wilcox (Oracle)" <willy@...radead.org>
Subject: linux-next: manual merge of the akpm-current tree with the block
tree
Hi all,

Today's linux-next merge of the akpm-current tree got a conflict in:

  mm/readahead.c

between commit:

  fd0ec96ec35d ("readahead: use limited read-ahead to satisfy read")

from the block tree and commits:

  16681dc9dd92 ("mm/readahead: pass readahead_control to force_page_cache_ra")
  f65bd470e7ed ("mm/readahead: add page_cache_sync_ra and page_cache_async_ra")

from the akpm-current tree.
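
For context, the two sides conflict because the block tree commit rewrites
the force-readahead logic inside page_cache_sync_readahead(), while the
akpm-current commits move that same function onto the readahead_control
API. The signature change looks roughly like this (reconstructed from the
diff below):

  /* block tree base */
  void page_cache_sync_readahead(struct address_space *mapping,
		struct file_ra_state *ra, struct file *filp,
		pgoff_t index, unsigned long req_count);

  /* akpm-current */
  void page_cache_sync_ra(struct readahead_control *ractl,
		struct file_ra_state *ra, unsigned long req_count);
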
I fixed it up (see below) and can carry the fix as necessary. This
is now fixed as far as linux-next is concerned, but any non-trivial
conflicts should be mentioned to your upstream maintainer when your tree
is submitted for merging. You may also want to consider cooperating
with the maintainer of the conflicting tree to minimise any particularly
complex conflicts.
-- 
Cheers,
Stephen Rothwell

diff --cc mm/readahead.c
index e5975f4e0ee5,c6ffb76827da..000000000000
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@@ -548,42 -545,23 +545,29 @@@ readit
}
}
- ra_submit(ra, mapping, filp);
+ ractl->_index = ra->start;
+ do_page_cache_ra(ractl, ra->size, ra->async_size);
}
- /**
- * page_cache_sync_readahead - generic file readahead
- * @mapping: address_space which holds the pagecache and I/O vectors
- * @ra: file_ra_state which holds the readahead state
- * @filp: passed on to ->readpage() and ->readpages()
- * @index: Index of first page to be read.
- * @req_count: Total number of pages being read by the caller.
- *
- * page_cache_sync_readahead() should be called when a cache miss happened:
- * it will submit the read. The readahead logic may decide to piggyback more
- * pages onto the read request if access patterns suggest it will improve
- * performance.
- */
- void page_cache_sync_readahead(struct address_space *mapping,
- struct file_ra_state *ra, struct file *filp,
- pgoff_t index, unsigned long req_count)
+ void page_cache_sync_ra(struct readahead_control *ractl,
+ struct file_ra_state *ra, unsigned long req_count)
{
- bool do_forced_ra = filp && (filp->f_mode & FMODE_RANDOM);
- /* no read-ahead */
- if (!ra->ra_pages)
- return;
++ bool do_forced_ra = ractl->file && (ractl->file->f_mode & FMODE_RANDOM);
- if (blk_cgroup_congested())
- return;
+ /*
+ * Even if read-ahead is disabled, start this request as read-ahead.
+ * This makes regular read-ahead disabled use the same path as normal
+ * reads, instead of having to punt to ->readpage() manually. We limit
+ * ourselves to 1 page for this case, to avoid causing problems if
+ * we're congested or tight on memory.
+ */
+ if (!ra->ra_pages || blk_cgroup_congested()) {
+ req_count = 1;
+ do_forced_ra = true;
+ }
- /* be dumb */
- if (ractl->file && (ractl->file->f_mode & FMODE_RANDOM)) {
+ if (do_forced_ra) {
- force_page_cache_readahead(mapping, filp, index, req_count);
+ force_page_cache_ra(ractl, ra, req_count);
return;
}
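
For reference, applying the resolution above should leave the merged
page_cache_sync_ra() reading roughly like this (a sketch assembled from
the added lines of the hunk; the rest of the function is not shown in the
hunk, so it is elided here):

	void page_cache_sync_ra(struct readahead_control *ractl,
			struct file_ra_state *ra, unsigned long req_count)
	{
		bool do_forced_ra = ractl->file &&
				    (ractl->file->f_mode & FMODE_RANDOM);

		/*
		 * Even if read-ahead is disabled, start this request as
		 * read-ahead. This makes regular read-ahead disabled use the
		 * same path as normal reads, instead of having to punt to
		 * ->readpage() manually. We limit ourselves to 1 page for
		 * this case, to avoid causing problems if we're congested or
		 * tight on memory.
		 */
		if (!ra->ra_pages || blk_cgroup_congested()) {
			req_count = 1;
			do_forced_ra = true;
		}

		if (do_forced_ra) {
			force_page_cache_ra(ractl, ra, req_count);
			return;
		}

		/* ... regular on-demand read-ahead path continues here ... */
	}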