Message-ID: <aEvPBSObBrrQCsa3@yjaykim-PowerEdge-T330>
Date: Fri, 13 Jun 2025 16:11:01 +0900
From: YoungJun Park <youngjun.park@....com>
To: Nhat Pham <nphamcs@...il.com>
Cc: Kairui Song <ryncsn@...il.com>, linux-mm@...ck.org,
	akpm@...ux-foundation.org, hannes@...xchg.org, mhocko@...nel.org,
	roman.gushchin@...ux.dev, shakeel.butt@...ux.dev,
	cgroups@...r.kernel.org, linux-kernel@...r.kernel.org,
	shikemeng@...weicloud.com, bhe@...hat.com, baohua@...nel.org,
	chrisl@...nel.org, muchun.song@...ux.dev, iamjoonsoo.kim@....com,
	taejoon.song@....com, gunho.lee@....com
Subject: Re: [RFC PATCH 2/2] mm: swap: apply per cgroup swap priority
 mechansim on swap layer

On Thu, Jun 12, 2025 at 01:08:08PM -0700, Nhat Pham wrote:
> On Thu, Jun 12, 2025 at 11:20 AM Kairui Song <ryncsn@...il.com> wrote:
> >
> > On Fri, Jun 13, 2025 at 1:28 AM Nhat Pham <nphamcs@...il.com> wrote:
> > >
> > > On Thu, Jun 12, 2025 at 4:14 AM Kairui Song <ryncsn@...il.com> wrote:
> > > >
> > > > On Thu, Jun 12, 2025 at 6:43 PM <youngjun.park@....com> wrote:
> > > > >
> > > > > From: "youngjun.park" <youngjun.park@....com>
> > > > >
> > > >
> > > > Hi, Youngjun,
> > > >
> > > > Thanks for sharing this series.
> > > >
> > > > > This patch implements swap device selection and swap on/off propagation
> > > > > when a cgroup-specific swap priority is set.
> > > > >
> > > > > There is one workaround to this implementation as follows.
> > > > > Current per-cpu swap cluster enforces swap device selection based solely
> > > > > on CPU locality, overriding the swap cgroup's configured priorities.
> > > >
> > > > I've been thinking about this, we can switch to a per-cgroup-per-cpu
> > > > next cluster selector, the problem with current code is that swap
> > >
> > > What about per-cpu-per-order-per-swap-device :-? Number of swap
> > > devices is gonna be smaller than number of cgroups, right?
> >
> > Hi Nhat,
> >
> > The problem is per cgroup makes more sense (I was suggested to use
> > cgroup level locality at the very beginning of the implementation of
> > the allocator in the mail list, but it was hard to do so at that
> > time), for container environments, a cgroup is a container that runs
> > one type of workload, so it has its own locality. Things like systemd
> > also organize different desktop workloads into cgroups. The whole
> > point is about cgroup.
> 
> Yeah I know what cgroup represents. Which is why I mentioned in the
> next paragraph that are still making decisions based per-cgroup - we
> just organize the per-cpu cache based on swap devices. This way, two
> cgroups with similar/same priority list can share the clusters, for
> each swapfile, in each CPU. There will be a lot less duplication and
> overhead. And two cgroups with different priority lists won't
> interfere with each other, since they'll target different swapfiles.
> 
> Unless we want to nudge the swapfiles/clusters to be self-partitioned
> among the cgroups? :) IOW, each cluster contains pages mostly from a
> single cgroup (with some stranglers mixed in). I suppose that will be
> very useful for swap on rotational drives where read contiguity is
> imperative, but not sure about other backends :-? 
> Anyway, no strong opinions to be completely honest :) Was just
> throwing out some ideas. Per-cgroup-per-cpu-per-order sounds good to
> me too, if it's easy to do.

Good point!
I agree with your points about self-partitioned clusters and duplicated
priority lists. One concern is the cost of synchronization, specifically
the cost incurred when accessing the prioritized swap device.
From a pure performance perspective, a per-cgroup-per-CPU implementation
seems favorable - in line with the current swap allocation fast path.
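As a rough userspace sketch of the access-pattern difference I have in
mind (all structs, names, and constants below - pcp_hints,
next_cluster_opt1(), NR_ORDERS, etc. - are invented for illustration and
are not real or proposed kernel interfaces):

    /* Userspace model only -- every name here is made up. */
    #include <stdio.h>

    #define NR_CPUS    8   /* illustrative, not a real config value */
    #define NR_ORDERS  4   /* stand-in for the number of swap orders */

    struct pcp_hints {
        unsigned int next[NR_ORDERS];     /* next-cluster hint per order */
    };

    struct swap_dev {
        struct pcp_hints hints[NR_CPUS];  /* Option 2: one table per device */
    };

    struct swap_cgroup {
        struct swap_dev *prio[2];         /* priority-ordered devices */
        struct pcp_hints hints[NR_CPUS];  /* Option 1: one table per cgroup */
    };

    /* Option 1: index the cgroup's own per-CPU slot directly,
     * the same shape as the existing per-CPU fast path. */
    static unsigned int next_cluster_opt1(struct swap_cgroup *cg, int cpu, int order)
    {
        return cg->hints[cpu].next[order];
    }

    /* Option 2: consult the cgroup's priority list first to pick a
     * device, then index that device's shared per-CPU table. */
    static unsigned int next_cluster_opt2(struct swap_cgroup *cg, int cpu, int order)
    {
        struct swap_dev *dev = cg->prio[0];
        return dev->hints[cpu].next[order];
    }

    int main(void)
    {
        struct swap_dev d = { 0 };
        struct swap_cgroup cg = { .prio = { &d, NULL } };

        printf("opt1 hint: %u\n", next_cluster_opt1(&cg, 0, 0));
        printf("opt2 hint: %u\n", next_cluster_opt2(&cg, 0, 0));
        return 0;
    }

The point is only the shape of the lookup: Option 1 keeps the hint on
the cgroup itself, while Option 2 adds a device-selection step before
touching a table that is shared across cgroups.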

It seems most reasonable to carefully compare the pros and cons of the
two approaches.

To summarize:

Option 1. per-cgroup-per-cpu
Pros: upstream fit; performance (in line with the current fast path).
Cons: duplicated priority state (some memory consumption per cgroup,
see the footprint sketch below); clusters become self-partitioned
among cgroups.

Option 2. per-cpu-per-order (per-device)
Pros: avoids the duplication and self-partitioning of Option 1.
Cons: loses the fast-path fit and performance advantage of Option 1.
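To put a purely illustrative number on the duplication cost of Option 1
(the counts and struct below are invented and reuse the same illustrative
sizes as the sketch above, not real kernel values):

    /* Userspace arithmetic only -- counts and sizes are illustrative. */
    #include <stdio.h>

    #define NR_CPUS     8
    #define NR_ORDERS   4      /* stand-in for the number of swap orders */
    #define NR_DEVICES  2      /* swap devices in the system */
    #define NR_CGROUPS  200    /* cgroups with a custom swap priority */

    struct pcp_hints {
        unsigned int next[NR_ORDERS];
    };

    int main(void)
    {
        size_t per_table = sizeof(struct pcp_hints) * NR_CPUS;

        /* Option 1: every cgroup with its own priority carries a table,
         * so the footprint scales with the cgroup count. */
        printf("option 1: %zu bytes\n", per_table * NR_CGROUPS);
        /* Option 2: one table per swap device, shared by all cgroups,
         * so the footprint scales with the device count. */
        printf("option 2: %zu bytes\n", per_table * NR_DEVICES);
        return 0;
    }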

It's not easy to draw a definitive conclusion right away, and I should
also evaluate other pros and cons that may arise during the actual
implementation, so I'd like to take some time to review things in more
detail and share my thoughts and conclusions in the next patch series.

What do you think, Nhat and Kairui?
