Message-ID: <aXxkANcET3l2Xu6J@KASONG-MC4>
Date: Sat, 31 Jan 2026 00:48:07 +0800
From: Kairui Song <ryncsn@...il.com>
To: Chris Mason <clm@...a.com>, Andrew Morton <akpm@...ux-foundation.org>
Cc: linux-mm@...ck.org, Baoquan He <bhe@...hat.com>,
Barry Song <baohua@...nel.org>, Chris Li <chrisl@...nel.org>, Nhat Pham <nphamcs@...il.com>,
Yosry Ahmed <yosry.ahmed@...ux.dev>, David Hildenbrand <david@...nel.org>,
Johannes Weiner <hannes@...xchg.org>, Youngjun Park <youngjun.park@....com>,
Hugh Dickins <hughd@...gle.com>, Baolin Wang <baolin.wang@...ux.alibaba.com>,
Ying Huang <ying.huang@...ux.alibaba.com>, Kemeng Shi <shikemeng@...weicloud.com>,
Lorenzo Stoakes <lorenzo.stoakes@...cle.com>, "Matthew Wilcox (Oracle)" <willy@...radead.org>,
linux-kernel@...r.kernel.org, Kairui Song <kasong@...cent.com>, linux-pm@...r.kernel.org,
"Rafael J. Wysocki (Intel)" <rafael@...nel.org>
Subject: Re: [PATCH v5 14/19] mm, swap: cleanup swap entry management workflow
On Thu, Jan 29, 2026 at 11:32:38AM +0800, Chris Mason wrote:
> Kairui Song <ryncsn@...il.com> wrote:
> > From: Kairui Song <kasong@...cent.com>
> >
> > The current swap entry allocation/freeing workflow has never had a clear
> > definition. This makes it hard to debug or add new optimizations.
> >
> > This commit introduces a proper definition of how swap entries are
> > allocated and freed. Now, most operations are folio based, so they will
> > never exceed one swap cluster, and we now have a cleaner border between
> > swap and the rest of mm, making it much easier to follow and debug,
> > especially with the newly added sanity checks. This also makes further
> > optimizations possible.
> >
> > Swap entries will mostly be allocated and freed bound to a folio.
> > The folio lock is useful for resolving many swap related races.
> >
> > Now swap allocation (except hibernation) always starts with a folio in
> > the swap cache, and gets duped/freed protected by the folio lock:
>
> [ ... ]
>
> Hi everyone,
>
> Apologies for looping back around on this one, but I've got some prompt
> changes that break down larger commits into sections and run them
> independently through review. I reran it on mm-new, and it found something
> new:
>
> > diff --git a/mm/swapfile.c b/mm/swapfile.c
> > index 64970ee11fcf7..85bf4f7d9ae7b 100644
> > --- a/mm/swapfile.c
> > +++ b/mm/swapfile.c
>
> [ ... ]
>
> > @@ -2195,7 +2265,7 @@ static int unuse_pte(struct vm_area_struct *vma, pmd_t *pmd,
> > /*
> > * Some architectures may have to restore extra metadata to the page
> > * when reading from swap. This metadata may be indexed by swap entry
> > - * so this must be called before swap_free().
> > + * so this must be called before folio_put_swap().
> > */
> > arch_swap_restore(folio_swap(entry, folio), folio);
> >
> > @@ -2236,7 +2306,7 @@ static int unuse_pte(struct vm_area_struct *vma, pmd_t *pmd,
> > new_pte = pte_mkuffd_wp(new_pte);
> > setpte:
> > set_pte_at(vma->vm_mm, addr, pte, new_pte);
> > - swap_free(entry);
> > + folio_put_swap(folio, page);
>
> Does this corrupt the swap map when ksm_might_need_to_copy() returns a
> new folio? In that case, folio != swapcache, and the new folio is NOT in
> the swap cache with folio->swap = 0. This would trigger
> VM_WARN_ON_FOLIO(!folio_test_swapcache(folio), folio) and call
> swap_entries_put_map() with entry.val = 0.
>
> Compare with do_swap_page() which correctly uses folio_put_swap(swapcache,
> NULL) when folio != swapcache. Should this use the original entry parameter
> or the swapcache folio instead?
Thanks again for running the AI review, it's really helpful.

This is a valid case: I indeed missed the KSM copy pages for swapoff.
We do need the following change squashed in, as you suggested.

Hi Andrew, can you help squash in the following fix? I just ran more
stress tests with KSM and racing swapoff, and everything is looking
good now.
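
For context, the path that goes wrong looks roughly like this (a
simplified sketch of unuse_pte(), not the exact code, only meant to
show how 'folio' and 'swapcache' can diverge):

	struct folio *swapcache;
	...
	swapcache = folio;
	folio = ksm_might_need_to_copy(folio, vma, addr);
	if (unlikely(!folio))
		return -ENOMEM;
	/*
	 * When KSM makes a private copy, 'folio' is now a fresh folio that
	 * is not in the swap cache (folio->swap.val == 0), while 'swapcache'
	 * still points to the folio that owns the swap entry. So the final
	 * put must go through 'swapcache', not 'folio'.
	 */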
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 8c0f31363c1f..d652486898de 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -2305,7 +2305,7 @@ static int unuse_pte(struct vm_area_struct *vma, pmd_t *pmd,
new_pte = pte_mkuffd_wp(new_pte);
setpte:
set_pte_at(vma->vm_mm, addr, pte, new_pte);
- folio_put_swap(folio, page);
+ folio_put_swap(swapcache, folio_file_page(swapcache, swp_offset(entry)));
out:
if (pte)
pte_unmap_unlock(pte, ptl);
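
The fix uses 'swapcache' (the folio that actually owns the swap
entries), as you suggested, and folio_file_page(swapcache,
swp_offset(entry)) maps the entry back to the matching subpage of
that folio, so the put targets exactly the slot we just mapped.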