Message-ID: <aRhHsbh6ZtjCJ3wP@yjaykim-PowerEdge-T330>
Date: Sat, 15 Nov 2025 18:28:17 +0900
From: YoungJun Park <youngjun.park@....com>
To: Kairui Song <ryncsn@...il.com>
Cc: Baoquan He <bhe@...hat.com>, akpm@...ux-foundation.org,
	linux-mm@...ck.org, cgroups@...r.kernel.org,
	linux-kernel@...r.kernel.org, chrisl@...nel.org, hannes@...xchg.org,
	mhocko@...nel.org, roman.gushchin@...ux.dev, shakeel.butt@...ux.dev,
	muchun.song@...ux.dev, shikemeng@...weicloud.com, nphamcs@...il.com,
	baohua@...nel.org, gunho.lee@....com, taejoon.song@....com
Subject: Re: [PATCH 1/3] mm, swap: change back to use each swap device's
 percpu cluster

On Fri, Nov 14, 2025 at 11:52:25PM +0800, Kairui Song wrote:
> On Fri, Nov 14, 2025 at 9:05 AM Baoquan He <bhe@...hat.com> wrote:
> > On 11/13/25 at 08:45pm, YoungJun Park wrote:
> > > On Thu, Nov 13, 2025 at 02:07:59PM +0800, Kairui Song wrote:
> > > > On Sun, Nov 9, 2025 at 8:54 PM Youngjun Park <youngjun.park@....com> wrote:
> > > > >
> > > > > This reverts commit 1b7e90020eb7 ("mm, swap: use percpu cluster as
> > > > > allocation fast path").
> > > > >
> > > > > Because in the newly introduced swap tiers, the global percpu cluster
> > > > > will cause two issues:
> > > > > 1) It will cause cache oscillation in the same order between different si
> > > > >    if two different memcgs are only allowed to access different si and
> > > > >    both of them are swapping out.
> > > > > 2) It can cause priority inversion on swap devices. Imagine a case where
> > > > >    there are two memcgs, say memcg1 and memcg2. Memcg1 can access si A and B,
> > > > >    and A is the higher priority device, while memcg2 can only access si B.
> > > > >    Then memcg2 could write the global percpu cluster with si B, and then
> > > > >    memcg1 would take si B in the fast path even though si A is not exhausted.
> > > > >
> > > > > Hence in order to support swap tier, revert commit 1b7e90020eb7 to use
> > > > > each swap device's percpu cluster.
> > > > >
> > > > > Co-developed-by: Baoquan He <bhe@...hat.com>
> > > > > Suggested-by: Kairui Song <kasong@...cent.com>
> > > > > Signed-off-by: Baoquan He <bhe@...hat.com>
> > > > > Signed-off-by: Youngjun Park <youngjun.park@....com>
> > > >
> > > > Hi Youngjun, Baoquan, Thanks for the work on the percpu cluster thing.
> > >
> > > Hello Kairui,
> 
> ...
> 
> > >
> > > Yeah... The rotation rule has indeed changed. I remember the
> > > discussion about rotation behavior:
> > > https://lore.kernel.org/linux-mm/aPc3lmbJEVTXoV6h@yjaykim-PowerEdge-T330/
> > >
> > > After that discussion, I've been thinking about the rotation.
> > > Currently, the requeue happens after every priority list traversal, and this
> > > logic is easily affected by other changes.
> > > A change to the rotation behavior is sometimes not even mentioned
> > > (as you noted in commit 1b7e90020eb7).
> > >
> > > I'd like to share some ideas and hear your thoughts:
> > >
> > > 1. Getting rid of the same priority requeue rule
> > >    - same-priority devices get priority - 1 or + 1 after requeue
> > >      (adding or subtracting more as needed to handle any overlapping priorities)
> > >
> > > 2. Requeue only when a new cluster is allocated
> > >    - Instead of requeueing after every priority list traversal, we
> > >      requeue only when a cluster is fully used
> > >    - This might have some performance impact, but the rotation behavior
> > >      would be similar to the existing one (though slightly different due
> > >      to synchronization and logic processing changes)
> >
> > 2) sounds better to me, and the logic and code change is simpler.
> >
> > Removing the requeue may change behaviour. Swap devices of the same priority
> > should be taken round robin.
> 
> I agree. We definitely need balancing between devices of the same
> priority, cluster based rotation seems good enough.

Hello Kairui, Baoquan.
Thanks for your feedback. 

Okay, I will try to keep the current rotation behavior working in the next
patch iteration.

Based on what Kairui suggested previously, we could keep the per-cpu si
cache alive. (However, since it could pick an si from an unselected tier,
the cache would have to exist per tier, per CPU; rough sketch below.)

Or, following the current code structure, we could also consider
requeueing while holding swap_avail_lock when a cluster is fully consumed.
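To make that "per tier, per cpu" idea a bit more concrete, here is a very
rough, untested sketch. All names here (swap_pcp_tier_cache, MAX_SWAP_TIERS,
swap_tier_cached_si, ...) are made up for illustration only, and a real
version would still need the device refcounting and locking that the
existing percpu_swap_cluster fast path has:

#include <linux/percpu.h>
#include <linux/swap.h>

#define MAX_SWAP_TIERS	4	/* hypothetical tier count */

/* One cached si per tier, per CPU, so one memcg's tier choice cannot
 * evict the cached si of another tier on the same CPU. */
struct swap_pcp_tier_cache {
	struct swap_info_struct *si[MAX_SWAP_TIERS];
};

static DEFINE_PER_CPU(struct swap_pcp_tier_cache, swap_pcp_tier_cache);

/* Fast path: only consult the slot of the tier this allocation is
 * allowed to use; slots of other tiers are never touched or evicted.
 * (Refcounting of the returned si is omitted here.) */
static struct swap_info_struct *swap_tier_cached_si(int tier)
{
	struct swap_pcp_tier_cache *cache;
	struct swap_info_struct *si;

	cache = get_cpu_ptr(&swap_pcp_tier_cache);
	si = cache->si[tier];
	put_cpu_ptr(&swap_pcp_tier_cache);
	return si;
}

/* Slow path: remember which si we last allocated from for this tier. */
static void swap_tier_cache_store(int tier, struct swap_info_struct *si)
{
	struct swap_pcp_tier_cache *cache;

	cache = get_cpu_ptr(&swap_pcp_tier_cache);
	cache->si[tier] = si;
	put_cpu_ptr(&swap_pcp_tier_cache);
}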
 
> And I'm thinking if we can have a better rotation mechanism? Maybe
> plist isn't the best way to do rotation if we want to minimize the
> cost of rotation.

I also did some more brainstorming.
(This is an idea for a possible next step rather than something workable
today; as I said, it is just ideation.)

I've been thinking about the inefficiency of plist_requeue during
rotation, and about the plist_for_each_entry traversal structure itself.
There is also a small problem: we can end up selecting a lower-priority
swap device while traversing the list, even when a higher-priority swap
device gets inserted into the plist during the traversal.

So, roughly, here is what I have in mind:

- On the read side (alloc_swap_entry), selecting a swap device only
  returns a single device, under a read lock. Unlike the current
  approach, the selection logic would not make any behavior-affecting
  changes; it would only look at the swap devices.

- On the write side, handle updates appropriately using a plist or some
  improved data structure, while holding the write lock.

- For rotation, instead of placing each swap device on the plist
  directly, we could create something like a priority node. Within this
  priority node structure, entries would be rotated each time a cluster
  is fully used.

- Also, with tiers introduced, we only need to traverse the selected
  tier for each I/O, so the current single swap_avail_list may not be
  suitable anymore. It could be changed to a per-tier structure
  (a rough sketch of these last two points is below).
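
To make the last two points a bit more concrete, here is a very rough,
untested sketch. All of the structures and names below (swap_tier,
swap_prio_node, swap_dev, ...) are invented for illustration only; the
real layout, locking and device refcounting would of course need more
thought:

#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/swap.h>

/* Per-tier wrapper for one swap device. */
struct swap_dev {
	struct swap_info_struct	*si;
	struct list_head	node;		/* on swap_prio_node.devices */
};

/* One node per priority value; same-priority devices hang off it. */
struct swap_prio_node {
	int			prio;
	struct list_head	tier_node;	/* on swap_tier.prio_nodes */
	struct list_head	devices;	/* rotated round robin */
};

struct swap_tier {
	rwlock_t		lock;
	struct list_head	prio_nodes;	/* sorted, highest prio first */
};

/* Read side: pick the first device of the highest-priority node.
 * Readers never modify the structure, so device selection itself no
 * longer changes the rotation order. */
static struct swap_dev *swap_tier_pick(struct swap_tier *tier)
{
	struct swap_prio_node *pn;
	struct swap_dev *dev = NULL;

	read_lock(&tier->lock);
	pn = list_first_entry_or_null(&tier->prio_nodes,
				      struct swap_prio_node, tier_node);
	if (pn)
		dev = list_first_entry_or_null(&pn->devices,
					       struct swap_dev, node);
	read_unlock(&tier->lock);
	return dev;
}

/* Write side: once @dev's current cluster is fully used, move it to the
 * tail of its priority node, so same-priority devices stay round robin
 * without requeueing on every allocation. */
static void swap_tier_rotate(struct swap_tier *tier,
			     struct swap_prio_node *pn, struct swap_dev *dev)
{
	write_lock(&tier->lock);
	list_move_tail(&dev->node, &pn->devices);
	write_unlock(&tier->lock);
}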


Thanks,
YoungJun 
