Message-ID: <CAMgjq7A+DBw=z8RPP-P1hcCH4Mid0txfmKqgqXghoE_v7zGEoA@mail.gmail.com>
Date: Fri, 25 Jul 2025 01:02:28 +0800
From: Kairui Song <ryncsn@...il.com>
To: Baolin Wang <baolin.wang@...ux.alibaba.com>, Kemeng Shi <shikemeng@...weicloud.com>,
Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org
Cc: Hugh Dickins <hughd@...gle.com>, Matthew Wilcox <willy@...radead.org>, Chris Li <chrisl@...nel.org>,
Nhat Pham <nphamcs@...il.com>, Baoquan He <bhe@...hat.com>, Barry Song <baohua@...nel.org>,
linux-kernel@...r.kernel.org, stable@...r.kernel.org
Subject: Re: [PATCH v5 1/8] mm/shmem, swap: improve cached mTHP handling and
fix potential hang
On Thu, Jul 10, 2025 at 11:37 AM Kairui Song <ryncsn@...il.com> wrote:
>
> From: Kairui Song <kasong@...cent.com>
>
> The current swap-in code assumes that, when a swap entry in the shmem
> mapping is order 0, its cached folios (if present) must be order 0 too,
> which turns out to not always be true.
>
> The problem is that shmem_split_large_entry is called before verifying
> that the folio will eventually be swapped in. One possible race is:
>
> CPU1                                     CPU2
> shmem_swapin_folio
> /* swap in of order > 0 swap entry S1 */
> folio = swap_cache_get_folio
> /* folio = NULL */
> order = xa_get_order
> /* order > 0 */
> folio = shmem_swap_alloc_folio
> /* mTHP alloc failure, folio = NULL */
> <... Interrupted ...>
>                                          shmem_swapin_folio
>                                          /* S1 is swapped in */
>                                          shmem_writeout
>                                          /* S1 is swapped out, folio cached */
> shmem_split_large_entry(..., S1)
> /* S1 is split, but the folio covering it has order > 0 now */
>
> Now any following swapin of S1 will hang: `xa_get_order` returns 0, while
> folio lookup returns a folio with order > 0. The
> `xa_get_order(&mapping->i_pages, index) != folio_order(folio)` check will
> then always be true, so swap-in keeps returning -EEXIST and retrying.
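
To spell the hang out: after the bad split, every later swapin of S1
takes roughly this path (a simplified sketch of the v5 logic above, not
the exact code):

    order = xa_get_order(&mapping->i_pages, index); /* 0: entry was split */
    folio = swap_cache_get_folio(swap, NULL, 0);    /* order > 0 folio */
    ...
    folio_lock(folio);
    /* 0 != folio_order(folio), so this branch is always taken: */
    if (xa_get_order(&mapping->i_pages, index) != folio_order(folio)) {
            error = -EEXIST;
            goto unlock; /* caller sees -EEXIST and retries forever */
    }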
>
> And this looks fragile. So fix this up by allowing a larger folio to be
> seen in the swap cache, and checking that the whole shmem mapping range
> covered by the swapin has the right swap value upon inserting the folio.
> Also drop the redundant tree walks before the insertion.
>
> This will actually improve performance, as it avoids two redundant Xarray
> tree walks in the hot path. The only side effect is that in the failure
> path, shmem may redundantly reallocate a few folios, causing slight
> temporary memory pressure.
>
> And it is worth noting that the order and value check before inserting
> may seem to help reduce lock contention, but it does not. The swap cache
> layer ensures a raced swapin will either see a swap cache folio or fail
> to swap in (we have the SWAP_HAS_CACHE bit even if the swap cache is
> bypassed), so holding the folio lock and checking the folio flag is
> already good enough for avoiding the lock contention. The chance that a
> folio passes the swap entry value check while the shmem mapping slot has
> changed should be very low.
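
In code terms, the serialization we rely on is just the folio lock plus
those checks, roughly (a sketch, ignoring the skip_swapcache case for
brevity):

    folio_lock(folio);
    if (!folio_test_swapcache(folio) || folio->swap.val != swap.val) {
            error = -EEXIST; /* raced with another swapin/writeout, retry */
            goto unlock;
    }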
>
> Fixes: 809bc86517cc ("mm: shmem: support large folio swap out")
> Signed-off-by: Kairui Song <kasong@...cent.com>
> Reviewed-by: Kemeng Shi <shikemeng@...weicloud.com>
> Reviewed-by: Baolin Wang <baolin.wang@...ux.alibaba.com>
> Tested-by: Baolin Wang <baolin.wang@...ux.alibaba.com>
> Cc: <stable@...r.kernel.org>
> ---
> mm/shmem.c | 30 +++++++++++++++++++++---------
> 1 file changed, 21 insertions(+), 9 deletions(-)
Hi all,

Just found some issues with this patch...
>
> diff --git a/mm/shmem.c b/mm/shmem.c
> index 334b7b4a61a0..e3c9a1365ff4 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -884,7 +884,9 @@ static int shmem_add_to_page_cache(struct folio *folio,
>                                     pgoff_t index, void *expected, gfp_t gfp)
>  {
>          XA_STATE_ORDER(xas, &mapping->i_pages, index, folio_order(folio));
> -        long nr = folio_nr_pages(folio);
> +        unsigned long nr = folio_nr_pages(folio);
> +        swp_entry_t iter, swap;
> +        void *entry;
>
>          VM_BUG_ON_FOLIO(index != round_down(index, nr), folio);
>          VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
> @@ -896,14 +898,24 @@ static int shmem_add_to_page_cache(struct folio *folio,
>
>          gfp &= GFP_RECLAIM_MASK;
>          folio_throttle_swaprate(folio, gfp);
> +        swap = iter = radix_to_swp_entry(expected);
>
>          do {
>                  xas_lock_irq(&xas);
I missed an xas_reset here; better to reset the iter value too.
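Roughly, each retry iteration has to restart both the walk and the
expected-value cursor, like this (see the squash diff at the end of this
mail):

    do {
            iter = swap;     /* restart the expected swap value cursor */
            xas_reset(&xas); /* the walk state is stale after a retry */
            xas_lock_irq(&xas);
            xas_for_each_conflict(&xas, entry) {
                    ...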
> -                if (expected != xas_find_conflict(&xas)) {
> -                        xas_set_err(&xas, -EEXIST);
> -                        goto unlock;
> +                xas_for_each_conflict(&xas, entry) {
> +                        /*
> +                         * The range must either be empty, or filled with
> +                         * expected swap entries. Shmem swap entries are never
> +                         * partially freed without split of both entry and
> +                         * folio, so there shouldn't be any holes.
> +                         */
> +                        if (!expected || entry != swp_to_radix_entry(iter)) {
> +                                xas_set_err(&xas, -EEXIST);
> +                                goto unlock;
> +                        }
> +                        iter.val += 1 << xas_get_order(&xas);
>                  }
> -                if (expected && xas_find_conflict(&xas)) {
> +                if (expected && iter.val - nr != swap.val) {
>                          xas_set_err(&xas, -EEXIST);
>                          goto unlock;
>                  }
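To make the range check concrete (hypothetical values): when inserting
an order-2 folio (nr = 4) over swap entry S with S.val == 100, the
conflict walk must see either one order-2 entry (iter.val goes 100 ->
104) or four order-0 entries (100 -> 101 -> 102 -> 103 -> 104). Either
way iter.val - nr == swap.val holds at the end; any hole or value
mismatch fails with -EEXIST instead.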
> @@ -2323,7 +2335,7 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
>                          error = -ENOMEM;
>                          goto failed;
>                  }
> -        } else if (order != folio_order(folio)) {
> +        } else if (order > folio_order(folio)) {
>                  /*
>                   * Swap readahead may swap in order 0 folios into swapcache
>                   * asynchronously, while the shmem mapping can still stores
> @@ -2348,15 +2360,15 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
>
>                          swap = swp_entry(swp_type(swap), swp_offset(swap) + offset);
>                  }
> +        } else if (order < folio_order(folio)) {
> +                swap.val = round_down(swap.val, 1 << folio_order(folio));
>          }
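(For example, with hypothetical values: a swapin of entry 103 that finds
an order-2 folio starting at entry 100 in the swap cache has swap.val
rounded down to 100, so the folio->swap.val check below still matches.)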
>
>  alloced:
>          /* We have to do this with folio locked to prevent races */
>          folio_lock(folio);
>          if ((!skip_swapcache && !folio_test_swapcache(folio)) ||
> -            folio->swap.val != swap.val ||
> -            !shmem_confirm_swap(mapping, index, swap) ||
> -            xa_get_order(&mapping->i_pages, index) != folio_order(folio)) {
And this part is incorrect. The `shmem_confirm_swap(mapping, index,
swap)` check can't simply be omitted: some of the functions below,
called before shmem_add_to_page_cache, must not run on folios that might
already be mapped by others, and this shmem_confirm_swap ensures that
won't happen.
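For reference, shmem_confirm_swap is just a cheap racy peek at the
mapping slot, roughly:

    static bool shmem_confirm_swap(struct address_space *mapping,
                                   pgoff_t index, swp_entry_t swap)
    {
            return xa_load(&mapping->i_pages, index) == swp_to_radix_entry(swap);
    }

but that peek is what keeps us from running those paths on a folio whose
slot no longer points at our swap entry.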
It may seem like a small change, but it leads to some minor conflicts in
one or two of the following commits, and the benchmark results will
change too, so I think I'll have to send a v6.

We can still remove this `shmem_confirm_swap`, but not in this series I
think, maybe after it; that needs re-arranging some functions, with some
clean ups for shmem_add_to_page_cache and others.
> +            folio->swap.val != swap.val) {
>                  error = -EEXIST;
>                  goto unlock;
>          }
> --
> 2.50.0
>
In summary, I'll squash the following fix into this patch and rebase the
later commits:
diff --git a/mm/shmem.c b/mm/shmem.c
index e3c9a1365ff4..4ca0b665b79e 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -898,9 +898,11 @@ static int shmem_add_to_page_cache(struct folio *folio,
 
         gfp &= GFP_RECLAIM_MASK;
         folio_throttle_swaprate(folio, gfp);
-        swap = iter = radix_to_swp_entry(expected);
+        swap = radix_to_swp_entry(expected);
 
         do {
+                iter = swap;
+                xas_reset(&xas);
                 xas_lock_irq(&xas);
                 xas_for_each_conflict(&xas, entry) {
                         /*
@@ -2365,9 +2367,16 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
         }
 
 alloced:
-        /* We have to do this with folio locked to prevent races */
+        /*
+         * We have to do this with folio locked to prevent races.
+         * The shmem_confirm_swap below only checks if the first swap
+         * entry matches the folio, which is enough to ensure the folio
+         * is not used outside of shmem, as shmem swap entries
+         * and swap cache folios are never partially freed.
+         */
         folio_lock(folio);
         if ((!skip_swapcache && !folio_test_swapcache(folio)) ||
+            !shmem_confirm_swap(mapping, index, swap) ||
             folio->swap.val != swap.val) {
                 error = -EEXIST;
                 goto unlock;
And I'll do some clean up afterwards to get rid of this
shmem_confirm_swap. What do you think?