Message-ID: <CAMgjq7BNxt7FSkN4CmFz_USr-a36RARFDzuPLOJMjhqOriFQiA@mail.gmail.com>
Date: Mon, 17 Nov 2025 11:21:15 +0800
From: Kairui Song <ryncsn@...il.com>
To: linux-mm@...ck.org
Cc: Andrew Morton <akpm@...ux-foundation.org>, Baoquan He <bhe@...hat.com>, 
	Barry Song <baohua@...nel.org>, Chris Li <chrisl@...nel.org>, Nhat Pham <nphamcs@...il.com>, 
	Yosry Ahmed <yosry.ahmed@...ux.dev>, David Hildenbrand <david@...nel.org>, 
	Johannes Weiner <hannes@...xchg.org>, Youngjun Park <youngjun.park@....com>, 
	Hugh Dickins <hughd@...gle.com>, Baolin Wang <baolin.wang@...ux.alibaba.com>, 
	Ying Huang <ying.huang@...ux.alibaba.com>, Kemeng Shi <shikemeng@...weicloud.com>, 
	Lorenzo Stoakes <lorenzo.stoakes@...cle.com>, 
	"Matthew Wilcox (Oracle)" <willy@...radead.org>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 00/19] mm, swap: swap table phase II: unify swapin use
 swap cache and cleanup flags

On Mon, Nov 17, 2025 at 2:11 AM Kairui Song <ryncsn@...il.com> wrote:
>
> This series removes the SWP_SYNCHRONOUS_IO swap cache bypass swapin code
> and special swap flag bits including SWAP_HAS_CACHE, along with many
> historical issues. Performance is ~20% better for some workloads, like
> Redis with persistence. This also cleans up the code to prepare for
> later phases; some patches are from a previously posted series.
>
> Swap cache bypassing, and swap synchronization in general, had many
> issues. Some have been solved with workarounds, and some are still
> there [1]. To resolve them in a clean way, one good solution is to
> always use the swap cache as the synchronization layer [2], so we have
> to remove the swap cache bypass swap-in path first. Previously that
> wasn't very doable due to performance issues, but now, combined with
> the swap table, removing the swap cache bypass path will instead
> improve performance, so there is no reason to keep it.
>
> Now we can rework the swap entry and cache synchronization following
> the new design. Swap cache synchronization relied heavily on
> SWAP_HAS_CACHE, which was the cause of many issues. By dropping the
> usage of special swap map bits and related workarounds, we get a
> cleaner code base and prepare for merging the swap count into the
> swap table in the next step.

A few things I forgot to mention about this series: in phase 2 we
removed the swap cache bypassing, unified the synchronization, and
removed the special swap_map bits. Now swap_map is only used for the
swap count, so in the next phase swap_map can be merged into the swap
table, which will clean up more things and reduce memory usage
(1 byte per slot).
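
To make that direction a bit more concrete, here is a rough
userspace-only sketch. All names below are made up for illustration;
this is not the actual swap table code. The idea is simply that each
slot word can carry the swap count in its low bits, so a separate
1-byte-per-slot swap_map array is no longer needed:

/* Illustrative userspace model only -- not the in-kernel code. */
#include <stdint.h>
#include <stdio.h>

#define SWAP_COUNT_BITS  6                       /* made-up split */
#define SWAP_COUNT_MASK  ((uint64_t)((1u << SWAP_COUNT_BITS) - 1))

typedef uint64_t swp_te_t;                       /* hypothetical slot word */

/* Read the swap count stored in the low bits of a slot word. */
static inline unsigned int swp_te_count(swp_te_t te)
{
        return te & SWAP_COUNT_MASK;
}

/* Store a new swap count, keeping the rest of the slot word intact. */
static inline swp_te_t swp_te_set_count(swp_te_t te, unsigned int count)
{
        return (te & ~SWAP_COUNT_MASK) | (count & SWAP_COUNT_MASK);
}

int main(void)
{
        swp_te_t te = 0;

        te = swp_te_set_count(te, 3);            /* entry mapped 3 times */
        printf("count = %u\n", swp_te_count(te));
        return 0;
}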

Removal of swap_cgroup_ctrl is also doable, but it needs to be done
after we also simplify the allocation of swapin folios: once every
swapin path uses the new swap_cache_alloc_folio helper introduced in
this series for folio allocation, the folio accounting will also be
managed by the swap layer, and merging swap_cgroup_ctrl into the swap
table will become doable.
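
Similarly, to show what "folio accounting managed by the swap layer"
means, here is only a shape-of-the-idea sketch with placeholder names
(none of these are the real kernel APIs): swapin callers stop
open-coding allocation plus charging, and a single helper owns both
steps, so the cgroup accounting can later move into the swap table too:

/* Placeholder types and functions, not the kernel API; this only
 * models the "one helper owns allocation and accounting" idea. */
#include <stdio.h>
#include <stdlib.h>

struct folio { int charged; };

static struct folio *alloc_folio_stub(void)         /* placeholder */
{
        return calloc(1, sizeof(struct folio));
}

static void charge_folio_stub(struct folio *folio)  /* placeholder */
{
        folio->charged = 1;
}

/* Single entry point for swapin allocation: allocation and accounting
 * happen together instead of being repeated at every call site. */
static struct folio *swapin_alloc_folio_sketch(void)
{
        struct folio *folio = alloc_folio_stub();

        if (folio)
                charge_folio_stub(folio);
        return folio;
}

int main(void)
{
        struct folio *folio = swapin_alloc_folio_sketch();

        printf("charged = %d\n", folio ? folio->charged : -1);
        free(folio);
        return 0;
}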
