Message-ID: <161918453173.3145707.484012520187142542.stgit@warthog.procyon.org.uk>
Date: Fri, 23 Apr 2021 14:28:51 +0100
From: David Howells <dhowells@...hat.com>
To: linux-fsdevel@...r.kernel.org
Cc: Jeff Layton <jlayton@...nel.org>,
"Matthew Wilcox (Oracle)" <willy@...radead.org>,
dhowells@...hat.com,
Trond Myklebust <trond.myklebust@...merspace.com>,
Anna Schumaker <anna.schumaker@...app.com>,
Steve French <sfrench@...ba.org>,
Dominique Martinet <asmadeus@...ewreck.org>,
Jeff Layton <jlayton@...hat.com>,
David Wysochanski <dwysocha@...hat.com>,
"Matthew Wilcox (Oracle)" <willy@...radead.org>,
Alexander Viro <viro@...iv.linux.org.uk>,
linux-cachefs@...hat.com, linux-afs@...ts.infradead.org,
linux-nfs@...r.kernel.org, linux-cifs@...r.kernel.org,
ceph-devel@...r.kernel.org, v9fs-developer@...ts.sourceforge.net,
linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: [PATCH v7 05/31] mm/readahead: Handle ractl nr_pages being modified

From: Matthew Wilcox (Oracle) <willy@...radead.org>

Filesystems are not currently permitted to modify the number of pages
in the ractl. An upcoming patch to add readahead_expand() changes that
rule, so remove the check and resync the loop counter after every call
to the filesystem.
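
For illustration only (this sketch is not part of the patch, and the
simulated expansion point is invented), here is a minimal userspace
model of why the resync expression lands on the right page.  The loop
invariant is that iteration i handles page index + i, and the ractl
covers pages [_index, _index + _nr_pages); when a callee grows
_nr_pages, the counter is recomputed so the next pass resumes at the
first page not yet covered, with the -1 offsetting the loop's own i++:

    #include <stdio.h>

    /*
     * Toy model of struct readahead_control: index is the first page
     * in the batch, nr_pages how many pages the batch now covers.
     */
    struct ractl_sim {
            unsigned long index;
            unsigned long nr_pages;
    };

    int main(void)
    {
            const unsigned long start = 100, nr_to_read = 8;
            struct ractl_sim ractl = { .index = start, .nr_pages = 0 };

            for (unsigned long i = 0; i < nr_to_read; i++) {
                    printf("handling page %lu\n", start + i);
                    ractl.nr_pages++;  /* this iteration covered one page */
                    if (i == 2) {
                            /* Pretend a readahead_expand()-style callee
                             * grew the batch by two pages (103 and 104). */
                            ractl.nr_pages += 2;
                            /* Resync, as the patch does: next iteration's
                             * start + i is the first uncovered page. */
                            i = ractl.index + ractl.nr_pages - start - 1;
                    }
            }
            return 0;
    }

Compiled and run, this handles pages 100-102, skips 103-104 (already
covered by the simulated expansion), and resumes at 105.
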
Tested-by: Jeff Layton <jlayton@...nel.org>
Signed-off-by: Matthew Wilcox (Oracle) <willy@...radead.org>
Signed-off-by: David Howells <dhowells@...hat.com>
Link: https://lore.kernel.org/r/20210420200116.3715790-1-willy@infradead.org/
Link: https://lore.kernel.org/r/20210421170923.4005574-1-willy@infradead.org/ # v2
---

 mm/readahead.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/readahead.c b/mm/readahead.c
index 2088569a947e..5b423ecc99f1 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -198,8 +198,6 @@ void page_cache_ra_unbounded(struct readahead_control *ractl,
 	for (i = 0; i < nr_to_read; i++) {
 		struct page *page = xa_load(&mapping->i_pages, index + i);
 
-		BUG_ON(index + i != ractl->_index + ractl->_nr_pages);
-
 		if (page && !xa_is_value(page)) {
 			/*
 			 * Page already present?  Kick off the current batch
@@ -210,6 +208,7 @@ void page_cache_ra_unbounded(struct readahead_control *ractl,
 			 * not worth getting one just for that.
 			 */
 			read_pages(ractl, &page_pool, true);
+			i = ractl->_index + ractl->_nr_pages - index - 1;
 			continue;
 		}
 
@@ -223,6 +222,7 @@ void page_cache_ra_unbounded(struct readahead_control *ractl,
 				gfp_mask) < 0) {
 			put_page(page);
 			read_pages(ractl, &page_pool, true);
+			i = ractl->_index + ractl->_nr_pages - index - 1;
 			continue;
 		}
 		if (i == nr_to_read - lookahead_size)