Message-Id: <20200320142231.2402-13-willy@infradead.org>
Date: Fri, 20 Mar 2020 07:22:18 -0700
From: Matthew Wilcox <willy@...radead.org>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: "Matthew Wilcox (Oracle)" <willy@...radead.org>,
linux-fsdevel@...r.kernel.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, linux-btrfs@...r.kernel.org,
linux-erofs@...ts.ozlabs.org, linux-ext4@...r.kernel.org,
linux-f2fs-devel@...ts.sourceforge.net, cluster-devel@...hat.com,
ocfs2-devel@....oracle.com, linux-xfs@...r.kernel.org,
John Hubbard <jhubbard@...dia.com>,
William Kucharski <william.kucharski@...cle.com>
Subject: [PATCH v9 12/25] mm: Move end_index check out of readahead loop
From: "Matthew Wilcox (Oracle)" <willy@...radead.org>
By reducing nr_to_read so that it can never extend past the last page of
the file, we can eliminate the end_index check from inside the loop.
Signed-off-by: Matthew Wilcox (Oracle) <willy@...radead.org>
Reviewed-by: John Hubbard <jhubbard@...dia.com>
Reviewed-by: William Kucharski <william.kucharski@...cle.com>
---
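[ Illustration, not part of the patch: a minimal, self-contained userspace
  C sketch of the idea, clamp nr_to_read once before the loop so the loop
  body needs no end_index check.  The helper name clamp_nr_to_read() and
  the example values are hypothetical. ]

#include <stdio.h>
#include <limits.h>

/*
 * Cap nr_to_read so that the last page touched, index + nr_to_read - 1,
 * never goes past end_index.  The caller must already have checked
 * index <= end_index, as the patch does before this point.
 */
static unsigned long clamp_nr_to_read(unsigned long index,
		unsigned long nr_to_read, unsigned long end_index)
{
	/* Avoid wrapping to the beginning of the index space */
	if (index + nr_to_read < index)
		nr_to_read = ULONG_MAX - index + 1;
	/* Don't read past the last page of the file */
	if (nr_to_read > end_index - index)
		nr_to_read = end_index - index + 1;
	return nr_to_read;
}

int main(void)
{
	unsigned long end_index = 9;	/* last page of a 10-page file */
	unsigned long index = 7;	/* readahead starts here */
	unsigned long nr = clamp_nr_to_read(index, 100, end_index);
	unsigned long i;

	/* No per-iteration end_index check is needed any more */
	for (i = 0; i < nr; i++)
		printf("would read page %lu\n", index + i);
	return 0;
}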
mm/readahead.c | 17 +++++++++++------
1 file changed, 11 insertions(+), 6 deletions(-)
diff --git a/mm/readahead.c b/mm/readahead.c
index d01531ef9f3c..a37b68f66233 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -167,8 +167,6 @@ void __do_page_cache_readahead(struct address_space *mapping,
 		unsigned long lookahead_size)
 {
 	struct inode *inode = mapping->host;
-	struct page *page;
-	unsigned long end_index;	/* The last page we want to read */
 	LIST_HEAD(page_pool);
 	loff_t isize = i_size_read(inode);
 	gfp_t gfp_mask = readahead_gfp_mask(mapping);
@@ -178,22 +176,29 @@ void __do_page_cache_readahead(struct address_space *mapping,
 		._index = index,
 	};
 	unsigned long i;
+	pgoff_t end_index;	/* The last page we want to read */
 
 	if (isize == 0)
 		return;
 
-	end_index = ((isize - 1) >> PAGE_SHIFT);
+	end_index = (isize - 1) >> PAGE_SHIFT;
+	if (index > end_index)
+		return;
+	/* Avoid wrapping to the beginning of the file */
+	if (index + nr_to_read < index)
+		nr_to_read = ULONG_MAX - index + 1;
+	/* Don't read past the page containing the last byte of the file */
+	if (index + nr_to_read > end_index)
+		nr_to_read = end_index - index + 1;
 
 	/*
 	 * Preallocate as many pages as we will need.
 	 */
 	for (i = 0; i < nr_to_read; i++) {
-		if (index + i > end_index)
-			break;
+		struct page *page = xa_load(&mapping->i_pages, index + i);
 
 		BUG_ON(index + i != rac._index + rac._nr_pages);
 
-		page = xa_load(&mapping->i_pages, index + i);
 		if (page && !xa_is_value(page)) {
 			/*
 			 * Page already present?  Kick off the current batch of
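
[ Side note on the wrap-around guard above, for illustration only: for an
  unsigned index, the largest request that stays within the index space is
  ULONG_MAX - index + 1 pages, i.e. the indices index .. ULONG_MAX
  inclusive.  A tiny, hypothetical check of that arithmetic: ]

#include <assert.h>
#include <limits.h>

int main(void)
{
	unsigned long index = ULONG_MAX - 3;	/* hypothetical huge index */
	unsigned long nr_to_read = 100;		/* request that would wrap */

	/* Same guard as in the patch */
	if (index + nr_to_read < index)
		nr_to_read = ULONG_MAX - index + 1;

	/* The last index touched is index + nr_to_read - 1 == ULONG_MAX */
	assert(nr_to_read == 4);
	assert(index + nr_to_read - 1 == ULONG_MAX);
	return 0;
}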
--
2.25.1