Message-ID: <c336e6e4-da7f-b714-c0f1-12df715f2611@google.com>
Date: Thu, 29 Aug 2024 01:07:17 -0700 (PDT)
From: Hugh Dickins <hughd@...gle.com>
To: Baolin Wang <baolin.wang@...ux.alibaba.com>, 
    Andrew Morton <akpm@...ux-foundation.org>
cc: Hugh Dickins <hughd@...gle.com>, willy@...radead.org, david@...hat.com, 
    wangkefeng.wang@...wei.com, chrisl@...nel.org, ying.huang@...el.com, 
    21cnbao@...il.com, ryan.roberts@....com, shy828301@...il.com, 
    ziy@...dia.com, ioworker0@...il.com, da.gomez@...sung.com, 
    p.raghav@...sung.com, linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v5 4/9] mm: filemap: use xa_get_order() to get the swap
 entry order

On Tue, 27 Aug 2024, Baolin Wang wrote:
> On 2024/8/26 05:55, Hugh Dickins wrote:
> > On Mon, 12 Aug 2024, Baolin Wang wrote:
> > 
> >> In the following patches, shmem will support swapping out large folios,
> >> which means the shmem mappings may contain large order swap entries, so
> >> use xa_get_order() to get the folio order of the shmem swap entry in
> >> order to update '*start' correctly.
> >>
> >> Signed-off-by: Baolin Wang <baolin.wang@...ux.alibaba.com>
> >> ---
> >>   mm/filemap.c | 4 ++++
> >>   1 file changed, 4 insertions(+)
> >>
> >> diff --git a/mm/filemap.c b/mm/filemap.c
> >> index 4130be74f6fd..4c312aab8b1f 100644
> >> --- a/mm/filemap.c
> >> +++ b/mm/filemap.c
> >> @@ -2056,6 +2056,8 @@ unsigned find_get_entries(struct address_space *mapping, pgoff_t *start,
> >>  		folio = fbatch->folios[idx];
> >>  		if (!xa_is_value(folio))
> >>  			nr = folio_nr_pages(folio);
> >> +		else
> >> +			nr = 1 << xa_get_order(&mapping->i_pages, indices[idx]);
> >>  		*start = indices[idx] + nr;
> >>  	}
> >>  	return folio_batch_count(fbatch);
> >> @@ -2120,6 +2122,8 @@ unsigned find_lock_entries(struct address_space *mapping, pgoff_t *start,
> >>  		folio = fbatch->folios[idx];
> >>  		if (!xa_is_value(folio))
> >>  			nr = folio_nr_pages(folio);
> >> +		else
> >> +			nr = 1 << xa_get_order(&mapping->i_pages, indices[idx]);
> >>  		*start = indices[idx] + nr;
> >>  	}
> >>  	return folio_batch_count(fbatch);
> >> -- 
> > 
> > Here we have a problem, but I'm not suggesting a fix for it yet: I
> > need to get other fixes out first, then turn to thinking about this -
> > it's not easy.
> 
> Thanks for raising the issues.
> 
> > 
> > That code is almost always right, so it works well enough for most
> > people not to have noticed, but there are two issues with it.
> > 
> > The first issue is that it's assuming indices[idx] is already aligned
> > to nr: not necessarily so.  I believe it was already wrong in the
> > folio_nr_pages() case, but the time I caught it wrong with a printk
> > was in the swap (value) case.  (I may not be stating this correctly:
> > again more thought needed but I can't spend so long writing.)
> > 
> > And immediately afterwards that kernel build failed with a corrupted
> > (all 0s) .o file - I'm building on ext4 on /dev/loop0 on huge tmpfs while
> > swapping, and happen to be using the "-o discard" option to ext4 mount.
> > 
> > I've been chasing these failures (maybe a few minutes in, maybe half an
> > hour) for days, then had the idea of trying without the "-o discard":
> > whereupon these builds can be repeated successfully for many hours.
> > IIRC ext4 discard to /dev/loop0 entails hole-punch to the tmpfs.
> > 
> > The alignment issue can easily be corrected, but that might not help.
> > (And those two functions would look better with the rcu_read_unlock()
> > moved down to just before returning: but again, wouldn't really help.)
> > 
> > The second issue is that swap is more slippery to work with than
> > folios or pages: in the folio_nr_pages() case, that number is stable
> > because we hold a refcount (which stops a THP from being split), and
> > later we'll be taking folio lock too.  None of that in the swap case,
> > so (depending on how a large entry gets split) the xa_get_order() result
> > is not reliable. Likewise for other uses of xa_get_order() in this series.
> 
> Now we have 2 users of xa_get_order() in this series:
> 
> 1) shmem_partial_swap_usage(): this is acceptable, since racy results are not
> a problem for the swap statistics.

Yes: there might be room for improvement, but no big deal there.

> 
> 2) shmem_undo_range(): when freeing a large swap entry, it will use
> xa_cmpxchg_irq() to make sure the swap value has not changed (in case the
> large swap entry was split). If the cmpxchg fails, it will use the current
> index to retry in shmem_undo_range(). So that seems not too bad?

Right, I was missing the cmpxchg aspect. I am not entirely convinced of
the safety of proceeding in this way, but I shouldn't spread FUD without
justification. Yesterday I realized what might be the actual problem,
and it's not at all the entry-splitting races I had suspected.
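
For reference, here is my reading of the pattern you describe, as a
minimal sketch rather than the series' exact code; the helper names
(free_swap_and_cache_nr() in particular) are my assumption of what the
series ends up using, and the usual kernel headers (<linux/xarray.h>,
<linux/swap.h>, <linux/swapops.h>) are assumed:

	static long shmem_free_swap_sketch(struct address_space *mapping,
					   pgoff_t index, void *radswap)
	{
		int order = xa_get_order(&mapping->i_pages, index);
		void *old;

		/* Clear the slot only if it still holds the same swap value */
		old = xa_cmpxchg_irq(&mapping->i_pages, index, radswap, NULL, 0);
		if (old != radswap)
			return 0;	/* raced with a split or swapin: caller retries */

		free_swap_and_cache_nr(radix_to_swp_entry(radswap), 1 << order);
		return 1 << order;	/* base pages freed */
	}

So a failed cmpxchg returns 0 and shmem_undo_range() retries from the
current index, as you say.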

Fix below.  Successful testing on mm-everything-2024-08-24-07-21 (well,
that minus the commit which spewed warnings from bootup) confirmed it.
But testing on mm-everything-2024-08-28-21-38 failed very quickly, for
reasons unrelated to this series, presumably caused by a patch or patches
added since 08-24: one kind of crash on one machine (some memcg thing
called from isolate_migratepages_block), another kind on another (some
memcg thing called from __read_swap_cache_async). I'm exhausted by now,
but will investigate later in the day (or hope someone else has).
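
(To illustrate the alignment point the fix addresses, with numbers of my
own choosing rather than from a trace: if an order-2 swap entry occupies
indices 4..7 but the last index recorded in the batch is 6, the old
"*start = indices[idx] + nr" overshoots the next aligned index, while
rounding down lands on it.)

	/* Userspace illustration only, not kernel code. */
	#include <stdio.h>

	/* mirrors the kernel's round_down() for a power-of-2 second argument */
	#define round_down(x, y)	((x) & ~((y) - 1UL))

	int main(void)
	{
		unsigned long nr = 4;		/* order-2 entry covers indices 4..7 */
		unsigned long index = 6;	/* last index in the batch, unaligned */

		printf("old *start = %lu\n", index + nr);		   /* 10: skips 8 and 9 */
		printf("new *start = %lu\n", round_down(index + nr, nr)); /* 8 */
		return 0;
	}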

[PATCH] mm: filemap: use xa_get_order() to get the swap entry order: fix

find_lock_entries(), used in the first pass of shmem_undo_range() and
truncate_inode_pages_range() before partial folios are dealt with, has
to be careful to avoid those partial folios: as its doc helpfully says,
"Folios which are partially outside the range are not returned".  Of
course, the same must be true of any value entries returned, otherwise
truncation and hole-punch risk erasing swapped areas - as has been seen.

Rewrite find_lock_entries() to emphasize that, following the same pattern
for folios and for value entries.

Adjust find_get_entries() slightly, to get order while still holding
rcu_read_lock(), and to round down the updated start: good changes, like
find_lock_entries() now does, but it's unclear if either is ever important.

Signed-off-by: Hugh Dickins <hughd@...gle.com>
---
 mm/filemap.c | 41 +++++++++++++++++++++++++----------------
 1 file changed, 25 insertions(+), 16 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index 885a8ed9d00d..88a2ed008474 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2047,10 +2047,9 @@ unsigned find_get_entries(struct address_space *mapping, pgoff_t *start,
 		if (!folio_batch_add(fbatch, folio))
 			break;
 	}
-	rcu_read_unlock();
 
 	if (folio_batch_count(fbatch)) {
-		unsigned long nr = 1;
+		unsigned long nr;
 		int idx = folio_batch_count(fbatch) - 1;
 
 		folio = fbatch->folios[idx];
@@ -2058,8 +2057,10 @@ unsigned find_get_entries(struct address_space *mapping, pgoff_t *start,
 			nr = folio_nr_pages(folio);
 		else
 			nr = 1 << xa_get_order(&mapping->i_pages, indices[idx]);
-		*start = indices[idx] + nr;
+		*start = round_down(indices[idx] + nr, nr);
 	}
+	rcu_read_unlock();
+
 	return folio_batch_count(fbatch);
 }
 
@@ -2091,10 +2092,17 @@ unsigned find_lock_entries(struct address_space *mapping, pgoff_t *start,
 
 	rcu_read_lock();
 	while ((folio = find_get_entry(&xas, end, XA_PRESENT))) {
+		unsigned long base;
+		unsigned long nr;
+
 		if (!xa_is_value(folio)) {
-			if (folio->index < *start)
+			nr = folio_nr_pages(folio);
+			base = folio->index;
+			/* Omit large folio which begins before the start */
+			if (base < *start)
 				goto put;
-			if (folio_next_index(folio) - 1 > end)
+			/* Omit large folio which extends beyond the end */
+			if (base + nr - 1 > end)
 				goto put;
 			if (!folio_trylock(folio))
 				goto put;
@@ -2103,7 +2111,19 @@ unsigned find_lock_entries(struct address_space *mapping, pgoff_t *start,
 				goto unlock;
 			VM_BUG_ON_FOLIO(!folio_contains(folio, xas.xa_index),
 					folio);
+		} else {
+			nr = 1 << xa_get_order(&mapping->i_pages, xas.xa_index);
+			base = xas.xa_index & ~(nr - 1);
+			/* Omit order>0 value which begins before the start */
+			if (base < *start)
+				continue;
+			/* Omit order>0 value which extends beyond the end */
+			if (base + nr - 1 > end)
+				break;
 		}
+
+		/* Update start now so that last update is correct on return */
+		*start = base + nr;
 		indices[fbatch->nr] = xas.xa_index;
 		if (!folio_batch_add(fbatch, folio))
 			break;
@@ -2115,17 +2135,6 @@ unsigned find_lock_entries(struct address_space *mapping, pgoff_t *start,
 	}
 	rcu_read_unlock();
 
-	if (folio_batch_count(fbatch)) {
-		unsigned long nr = 1;
-		int idx = folio_batch_count(fbatch) - 1;
-
-		folio = fbatch->folios[idx];
-		if (!xa_is_value(folio))
-			nr = folio_nr_pages(folio);
-		else
-			nr = 1 << xa_get_order(&mapping->i_pages, indices[idx]);
-		*start = indices[idx] + nr;
-	}
 	return folio_batch_count(fbatch);
 }
 
-- 
2.35.3
