Message-ID: <20260129200604.2518109-1-clm@meta.com>
Date: Thu, 29 Jan 2026 11:32:38 -0800
From: Chris Mason <clm@...a.com>
To: Kairui Song <ryncsn@...il.com>
CC: <linux-mm@...ck.org>, Andrew Morton <akpm@...ux-foundation.org>,
    Baoquan He <bhe@...hat.com>, Barry Song <baohua@...nel.org>,
    Chris Li <chrisl@...nel.org>, Nhat Pham <nphamcs@...il.com>,
    Yosry Ahmed <yosry.ahmed@...ux.dev>, David Hildenbrand <david@...nel.org>,
    Johannes Weiner <hannes@...xchg.org>, Youngjun Park <youngjun.park@....com>,
    Hugh Dickins <hughd@...gle.com>, Baolin Wang <baolin.wang@...ux.alibaba.com>,
    Ying Huang <ying.huang@...ux.alibaba.com>, Kemeng Shi <shikemeng@...weicloud.com>,
    Lorenzo Stoakes <lorenzo.stoakes@...cle.com>,
    "Matthew Wilcox (Oracle)" <willy@...radead.org>,
    <linux-kernel@...r.kernel.org>, Kairui Song <kasong@...cent.com>,
    <linux-pm@...r.kernel.org>, "Rafael J. Wysocki (Intel)" <rafael@...nel.org>
Subject: Re: [PATCH v5 14/19] mm, swap: cleanup swap entry management workflow

Kairui Song <ryncsn@...il.com> wrote:
> From: Kairui Song <kasong@...cent.com>
>
> The current swap entry allocation/freeing workflow has never had a clear
> definition. This makes it hard to debug or add new optimizations.
>
> This commit introduces a proper definition of how swap entries are
> allocated and freed. Now, most operations are folio based, so they will
> never exceed one swap cluster, and we have a cleaner boundary between
> swap and the rest of mm, making it much easier to follow and debug,
> especially with the newly added sanity checks. It also makes more
> optimizations possible.
>
> Swap entries will mostly be allocated and freed with a folio bound.
> The folio lock will be useful for resolving many swap-related races.
>
> Now swap allocation (except hibernation) always starts with a folio in
> the swap cache, and gets duped/freed protected by the folio lock:
[ ... ]
Hi everyone,

Apologies for looping back around on this one, but I've got some prompt
changes that break down larger commits into sections and run them
independently through review. I reran it on mm-new, and it found something
new:

> diff --git a/mm/swapfile.c b/mm/swapfile.c
> index 64970ee11fcf7..85bf4f7d9ae7b 100644
> --- a/mm/swapfile.c
> +++ b/mm/swapfile.c
[ ... ]
> @@ -2195,7 +2265,7 @@ static int unuse_pte(struct vm_area_struct *vma, pmd_t *pmd,
> /*
> * Some architectures may have to restore extra metadata to the page
> * when reading from swap. This metadata may be indexed by swap entry
> - * so this must be called before swap_free().
> + * so this must be called before folio_put_swap().
> */
> arch_swap_restore(folio_swap(entry, folio), folio);
>
> @@ -2236,7 +2306,7 @@ static int unuse_pte(struct vm_area_struct *vma, pmd_t *pmd,
> new_pte = pte_mkuffd_wp(new_pte);
> setpte:
> set_pte_at(vma->vm_mm, addr, pte, new_pte);
> - swap_free(entry);
> + folio_put_swap(folio, page);

Does this corrupt the swap map when ksm_might_need_to_copy() returns a
new folio? In that case, folio != swapcache, and the new folio is NOT in
the swap cache and has folio->swap.val == 0. This would trigger
VM_WARN_ON_FOLIO(!folio_test_swapcache(folio), folio) and call
swap_entries_put_map() with entry.val == 0.

Compare with do_swap_page(), which correctly uses folio_put_swap(swapcache,
NULL) when folio != swapcache. Should this use the original entry parameter
or the swapcache folio instead?
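
For reference, here is roughly what I had in mind. Completely untested, and
it assumes folio_put_swap() keeps the (folio, page) signature used in this
series and that swapcache still points at the original swap cache folio at
this point in unuse_pte():

	/*
	 * Untested sketch: if ksm_might_need_to_copy() handed back a new
	 * folio, that folio is not in the swap cache and its folio->swap is
	 * empty, so drop the swap reference against the original swap cache
	 * folio instead, mirroring what do_swap_page() does.
	 */
	if (unlikely(folio != swapcache))
		folio_put_swap(swapcache, NULL);
	else
		folio_put_swap(folio, page);

Either that, or keep passing the original entry so the slot being freed in
the swap map never depends on folio->swap being populated.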