Message-ID: <alpine.LSU.2.11.2104221347240.1170@eggly.anvils>
Date: Thu, 22 Apr 2021 13:48:57 -0700 (PDT)
From: Hugh Dickins <hughd@...gle.com>
To: Andrew Morton <akpm@...ux-foundation.org>
cc: Matthew Wilcox <willy@...radead.org>,
Hugh Dickins <hughd@...gle.com>,
William Kucharski <william.kucharski@...cle.com>,
Christoph Hellwig <hch@....de>, Jan Kara <jack@...e.cz>,
Dave Chinner <dchinner@...hat.com>,
Johannes Weiner <hannes@...xchg.org>,
"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
Yang Shi <yang.shi@...ux.alibaba.com>,
linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-mm@...ck.org
Subject: [PATCH v2 2/2] mm/filemap: fix mapping_seek_hole_data on THP & 32-bit

No problem on 64-bit without huge pages, but xfstests generic/285
and other SEEK_HOLE/SEEK_DATA tests have regressed on huge tmpfs,
and on 32-bit architectures, with the new mapping_seek_hole_data().
Several different bugs turned out to need fixing.

u64 cast to stop losing bits when converting unsigned long to loff_t
(and let's use shifts throughout, rather than mixed with * and /).
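
For illustration only, not part of the patch: a minimal userspace sketch
of the bit loss, assuming PAGE_SHIFT of 12 and a made-up sample index.
On 32-bit, unsigned long is 32 bits, so the old multiply wraps before the
result is ever widened to loff_t; casting to u64 and shifting keeps all
the bits.

#include <stdio.h>
#include <stdint.h>

#define DEMO_PAGE_SHIFT 12			/* assume 4kB pages */

int main(void)
{
	uint32_t xa_index = 0x180000;		/* page index of a 6GB offset */
	uint32_t page_size = 1U << DEMO_PAGE_SHIFT;

	/* old: 32-bit product wraps, widened to loff_t too late */
	int64_t bad = (int64_t)(uint32_t)(xa_index * page_size);
	/* new: widen to u64 first, then shift: no bits lost */
	int64_t good = (int64_t)((uint64_t)xa_index << DEMO_PAGE_SHIFT);

	printf("old pos = 0x%llx\n", (unsigned long long)bad);	/* 0x80000000 */
	printf("new pos = 0x%llx\n", (unsigned long long)good);	/* 0x180000000 */
	return 0;
}
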
Use round_up() when advancing pos, to stop assuming that pos was
already THP-aligned when advancing it by THP-size. (This use of
round_up() assumes that any THP has THP-aligned index: true at present
and true going forward, but could be recoded to avoid the assumption.)
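
Again a sketch rather than the patch itself, assuming a 2MB THP and a pos
that is page-aligned but sits inside the THP: the old "pos += seek_size"
overshoots past the end of the huge page, while round_up(pos + 1, seek_size)
lands exactly on the next THP boundary.

#include <stdio.h>
#include <stdint.h>

#define DEMO_THP_SIZE (2ULL << 20)		/* assume a 2MB huge page */

/* power-of-two round_up(), matching what the kernel macro computes here */
static uint64_t demo_round_up(uint64_t x, uint64_t align)
{
	return (x + align - 1) & ~(align - 1);
}

int main(void)
{
	/* pos derived from an index inside a THP covering [4MB, 6MB) */
	uint64_t pos = (4ULL << 20) + (300ULL << 10);

	/* old advance assumed pos was THP-aligned: ends up at 6MB + 300kB */
	printf("pos + seek_size         = %llu\n",
	       (unsigned long long)(pos + DEMO_THP_SIZE));
	/* new advance: end of the THP covering pos, 6MB exactly */
	printf("round_up(pos + 1, size) = %llu\n",
	       (unsigned long long)demo_round_up(pos + 1, DEMO_THP_SIZE));
	return 0;
}
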
Use xas_set() when iterating away from a THP, so that xa_index stays
in synch with start, instead of drifting away to return bogus offset.
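
A purely numeric sketch of that resync, with 4kB pages assumed (the real
cursor handling is in the hunk below): once pos has been rounded up past a
2MB THP, the index the next lookup should start from is simply
pos >> PAGE_SHIFT, which is what the added xas_set() feeds back in; a stale
xa_index from inside the THP would turn back into an offset below what was
already reported.

#include <stdio.h>
#include <stdint.h>

#define DEMO_PAGE_SHIFT 12			/* assume 4kB pages */

int main(void)
{
	uint64_t stale_index = 1099;		/* an index inside a THP covering [4MB, 6MB) */
	uint64_t pos = 6ULL << 20;		/* pos after round_up(): end of that THP */

	/* offset a stale cursor would yield next time round the loop */
	printf("stale index %llu -> pos 0x%llx (goes backwards)\n",
	       (unsigned long long)stale_index,
	       (unsigned long long)(stale_index << DEMO_PAGE_SHIFT));
	/* index the scan should really resume from */
	printf("pos >> PAGE_SHIFT = %llu\n",
	       (unsigned long long)(pos >> DEMO_PAGE_SHIFT));
	return 0;
}
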
Check start against end to avoid wrapping 32-bit xa_index to 0 (and
to handle these additional cases, seek_data or not, it's easier to
break the loop than goto: so rearrange exit from the function).
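
One last userspace sketch, modelling 32-bit pgoff_t as uint32_t with 4kB
pages assumed: once pos reaches end at the 16TB limit a 32-bit page index
can cover, pos >> PAGE_SHIFT no longer fits in pgoff_t, so feeding it back
into the cursor would wrap the index to 0 and restart the scan from the
beginning of the file; breaking out on start >= end means that index is
never computed.

#include <stdio.h>
#include <stdint.h>

#define DEMO_PAGE_SHIFT 12			/* assume 4kB pages */

int main(void)
{
	/* pos advanced to the end of the last page a 32-bit index can reach */
	uint64_t pos = 0x100000000ULL << DEMO_PAGE_SHIFT;	/* 16TB */
	uint32_t pgoff = (uint32_t)(pos >> DEMO_PAGE_SHIFT);	/* pgoff_t is 32-bit here */

	printf("pos               = 0x%llx\n", (unsigned long long)pos);
	printf("pos >> PAGE_SHIFT = 0x%llx\n",
	       (unsigned long long)(pos >> DEMO_PAGE_SHIFT));
	printf("32-bit pgoff      = %lu (wrapped back to index 0)\n",
	       (unsigned long)pgoff);
	return 0;
}
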
Fixes: 41139aa4c3a3 ("mm/filemap: add mapping_seek_hole_data")
Signed-off-by: Hugh Dickins <hughd@...gle.com>
---
v2: Removed all but one of v1's u64 casts, as suggested by Matthew.
Updated commit message on u64 cast and THP alignment, per Matthew.
Andrew, I'd have just sent a -fix.patch to remove the unnecessary u64s,
but need to reword the commit message: so please replace yesterday's
mm-filemap-fix-mapping_seek_hole_data-on-thp-32-bit.patch
by this one - thanks.
mm/filemap.c | 21 +++++++++++----------
1 file changed, 11 insertions(+), 10 deletions(-)
--- 5.12-rc8/mm/filemap.c 2021-02-26 19:42:39.812156085 -0800
+++ linux/mm/filemap.c 2021-04-21 22:58:03.699655576 -0700
@@ -2672,7 +2672,7 @@ loff_t mapping_seek_hole_data(struct add
loff_t end, int whence)
{
XA_STATE(xas, &mapping->i_pages, start >> PAGE_SHIFT);
- pgoff_t max = (end - 1) / PAGE_SIZE;
+ pgoff_t max = (end - 1) >> PAGE_SHIFT;
bool seek_data = (whence == SEEK_DATA);
struct page *page;
@@ -2681,7 +2681,8 @@ loff_t mapping_seek_hole_data(struct add
rcu_read_lock();
while ((page = find_get_entry(&xas, max, XA_PRESENT))) {
- loff_t pos = xas.xa_index * PAGE_SIZE;
+ loff_t pos = (u64)xas.xa_index << PAGE_SHIFT;
+ unsigned int seek_size;
if (start < pos) {
if (!seek_data)
@@ -2689,25 +2690,25 @@ loff_t mapping_seek_hole_data(struct add
start = pos;
}
- pos += seek_page_size(&xas, page);
+ seek_size = seek_page_size(&xas, page);
+ pos = round_up(pos + 1, seek_size);
start = page_seek_hole_data(&xas, mapping, page, start, pos,
seek_data);
if (start < pos)
goto unlock;
+ if (start >= end)
+ break;
+ if (seek_size > PAGE_SIZE)
+ xas_set(&xas, pos >> PAGE_SHIFT);
if (!xa_is_value(page))
put_page(page);
}
- rcu_read_unlock();
-
if (seek_data)
- return -ENXIO;
- goto out;
-
+ start = -ENXIO;
unlock:
rcu_read_unlock();
- if (!xa_is_value(page))
+ if (page && !xa_is_value(page))
put_page(page);
-out:
if (start > end)
return end;
return start;