Message-ID: <CAMgjq7DtP4j4kQp-bM+jrVSkCh26+8U-TkENxUmPkmEJyZg9YA@mail.gmail.com>
Date: Sat, 1 Nov 2025 16:59:05 +0800
From: Kairui Song <ryncsn@...il.com>
To: YoungJun Park <youngjun.park@....com>
Cc: linux-mm@...ck.org, Andrew Morton <akpm@...ux-foundation.org>, 
	Baoquan He <bhe@...hat.com>, Barry Song <baohua@...nel.org>, Chris Li <chrisl@...nel.org>, 
	Nhat Pham <nphamcs@...il.com>, Johannes Weiner <hannes@...xchg.org>, 
	Yosry Ahmed <yosry.ahmed@...ux.dev>, David Hildenbrand <david@...hat.com>, 
	Hugh Dickins <hughd@...gle.com>, Baolin Wang <baolin.wang@...ux.alibaba.com>, 
	"Huang, Ying" <ying.huang@...ux.alibaba.com>, Kemeng Shi <shikemeng@...weicloud.com>, 
	Lorenzo Stoakes <lorenzo.stoakes@...cle.com>, 
	"Matthew Wilcox (Oracle)" <willy@...radead.org>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 14/19] mm, swap: sanitize swap entry management workflow

On Sat, Nov 1, 2025 at 12:51 PM YoungJun Park <youngjun.park@....com> wrote:
>
> On Wed, Oct 29, 2025 at 11:58:40PM +0800, Kairui Song wrote:
> > From: Kairui Song <kasong@...cent.com>
>
> Hello Kairui!
>
> > The current swap entry allocation/freeing workflow has never had a clear
> > definition. This makes it hard to debug or add new optimizations.
> >
> > This commit introduces a proper definition of how swap entries would be
> > allocated and freed. Now, most operations are folio based, so they will
> > never exceed one swap cluster, and we now have a cleaner border between
> > swap and the rest of mm, making it much easier to follow and debug,
> > especially with the newly added sanity checks. This also makes more
> > optimizations possible.
> >
> > Swap entries will mostly be allocated and freed bound to a folio.
> > The folio lock is useful for resolving many swap related races.
> >
> > Now swap allocation (except hibernation) always starts with a folio in
> > the swap cache, and gets duped/freed protected by the folio lock:
> >
> > - folio_alloc_swap() - The only allocation entry point now.
> >   Context: The folio must be locked.
> >   This allocates one or a set of contiguous swap slots for a folio and
> >   binds them to the folio by adding the folio to the swap cache. The
> >   swap slots' swap count starts at zero.
> >
> > - folio_dup_swap() - Increase the swap count of one or more entries.
> >   Context: The folio must be locked and in the swap cache. For now, the
> >   caller still has to lock the new swap entry owner (e.g., PTL).
> >   This increases the ref count of swap entries allocated to a folio.
> >   Newly allocated swap slots' count has to be increased by this helper
> >   as the folio gets unmapped (and swap entries get installed).
> >
> > - folio_put_swap() - Decrease the swap count of one or more entries.
> >   Context: The folio must be locked and in the swap cache. For now, the
> >   caller still has to lock the new swap entry owner (e.g., PTL).
> >   This decreases the ref count of swap entries allocated to a folio.
> >   Typically, swapin will decrease the swap count as the folio gets
> >   installed back and the swap entry gets uninstalled.
> >
> >   This won't remove the folio from the swap cache or free the
> >   slot. Lazy freeing of swap cache is helpful for reducing IO.
> >   There is already a folio_free_swap() for immediate cache reclaim.
> >   This part could be further optimized later.
> >
> > The above locking constraints could be further relaxed when the swap
> > table is fully implemented. Currently, dup still needs the caller
> > to lock the swap entry container (e.g. PTL), or a concurrent zap
> > may underflow the swap count.
> >
> > Some swap users need to interact with the swap count without involving a
> > folio (e.g. forking/zapping the page table or mapping truncate without swapin).
> > In such cases, the caller has to ensure there is no race condition on
> > whatever owns the swap count and use the below helpers:
> >
> > - swap_put_entries_direct() - Decrease the swap count directly.
> >   Context: The caller must lock whatever is referencing the slots to
> >   avoid a race.
> >
> >   Typically the page table zapping or shmem mapping truncate will need
> >   to free swap slots directly. If a slot is cached (has a folio bound),
> >   this will also try to release the swap cache.
> >
> > - swap_dup_entry_direct() - Increase the swap count directly.
> >   Context: The caller must lock whatever is referencing the entries to
> >   avoid a race, and the entries must already have a swap count > 1.
> >
> >   Typically, forking will need to copy the page table and hence needs to
> >   increase the swap count of the entries in the table. The page table is
> >   locked while referencing the swap entries, so the entries all have a
> >   swap count > 1 and can't be freed.
> >
> > The hibernation subsystem is a bit different, so two special wrappers are here:
> >
> > - swap_alloc_hibernation_slot() - Allocate one entry from one device.
> > - swap_free_hibernation_slot() - Free one entry allocated by the above
> > helper.
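
To make the folio-bound lifecycle above concrete, here is a rough sketch of a
swap-out step, a swap-in step, and the folio-less fork/zap cases using the
direct helpers. The helper names come from the description above, but their
exact signatures (single-folio / single-entry argument lists) and all of the
surrounding details are assumptions for illustration, not taken from the series:

#include <linux/mm.h>
#include <linux/swap.h>
#include <linux/swapops.h>

/* Sketch only: folio_dup_swap()/folio_put_swap()/swap_dup_entry_direct()/
 * swap_put_entries_direct() are shown with minimal argument lists; the real
 * helpers in the series may take additional parameters. */
static int sketch_swap_out_step(struct vm_area_struct *vma, struct folio *folio,
				pte_t *pte, unsigned long addr)
{
	int err;

	/* Folio is locked: allocate slot(s) and add it to the swap cache;
	 * the swap count of the new slots starts at zero. */
	err = folio_alloc_swap(folio, GFP_KERNEL);
	if (err)
		return err;

	/* Under the PTL: install the swap entry in place of the present PTE,
	 * then take a swap count reference for that new owner. */
	set_pte_at(vma->vm_mm, addr, pte, swp_entry_to_pte(folio->swap));
	folio_dup_swap(folio);

	/* Writeback happens later; the folio stays in the swap cache. */
	return 0;
}

static void sketch_swap_in_step(struct vm_area_struct *vma, struct folio *folio,
				pte_t *pte, unsigned long addr)
{
	/* Folio is locked and in the swap cache, PTL is held: map the folio
	 * back and drop the swap count reference the swap PTE was holding. */
	set_pte_at(vma->vm_mm, addr, pte, mk_pte(&folio->page, vma->vm_page_prot));
	folio_put_swap(folio);

	/* The slot and swap cache are left in place (lazy freeing);
	 * folio_free_swap() can reclaim the cache immediately if needed. */
}

static void sketch_fork_copy_step(struct mm_struct *dst_mm, pte_t *dst_pte,
				  unsigned long addr, swp_entry_t entry)
{
	/* No folio involved: both page tables are locked, so the entry
	 * already holds a swap count for the source PTE and cannot be
	 * freed under us; just add a reference for the copy. */
	swap_dup_entry_direct(entry);
	set_pte_at(dst_mm, addr, dst_pte, swp_entry_to_pte(entry));
}

static void sketch_zap_swap_pte(struct mm_struct *mm, pte_t *pte,
				unsigned long addr, swp_entry_t entry)
{
	/* Page table is locked: clear the swap PTE and drop its reference
	 * directly; if the slot still has a folio bound, this also tries
	 * to release the swap cache. */
	pte_clear(mm, addr, pte);
	swap_put_entries_direct(entry, 1);
}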
>
> During the code review, I found something that needs to be verified.
> It is not directly relevant to your patch; I am sending this email to
> check whether it is correct and whether a fix could go into this patch.
>
> In the swap_alloc_hibernation_slot() function, nr_swap_pages is
> decreased, but as far as I can tell it is already decreased in
> swap_range_alloc().
>
> nr_swap_pages is decremented along the following call flow:
>
> cluster_alloc_swap_entry -> alloc_swap_scan_cluster
> -> cluster_alloc_range -> swap_range_alloc
>
> This was introduced in
> 4f78252da887ee7e9d1875dd6e07d9baa936c04f
> ("mm: swap: move nr_swap_pages counter decrement from folio_alloc_swap()
> to swap_range_alloc()")
>

Yeah, you are right, that's a bug introduced by 4f78252da887. Will you
send a patch to fix it? Or I can send one; just remove the
atomic_long_dec(&nr_swap_pages) in get_swap_page_of_type and then we are
fine.
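
For reference, the fix being discussed would be a one-line removal along
these lines, assuming the call still lives in mm/swapfile.c's
get_swap_page_of_type() (hunk context is omitted, and in the patched tree
the helper may already have been reworked into swap_alloc_hibernation_slot()):

--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ swp_entry_t get_swap_page_of_type(int type)
-	atomic_long_dec(&nr_swap_pages);

With that line gone, the counter is adjusted only once, in
swap_range_alloc(), which cluster_alloc_swap_entry() reaches via
alloc_swap_scan_cluster() and cluster_alloc_range().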
