Message-ID: <aKgD7nZy7U+rHt9X@yjaykim-PowerEdge-T330>
Date: Fri, 22 Aug 2025 14:45:18 +0900
From: YoungJun Park <youngjun.park@....com>
To: Chris Li <chrisl@...nel.org>
Cc: Michal Koutný <mkoutny@...e.com>,
akpm@...ux-foundation.org, hannes@...xchg.org, mhocko@...nel.org,
roman.gushchin@...ux.dev, shakeel.butt@...ux.dev,
muchun.song@...ux.dev, shikemeng@...weicloud.com,
kasong@...cent.com, nphamcs@...il.com, bhe@...hat.com,
baohua@...nel.org, cgroups@...r.kernel.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, gunho.lee@....com,
iamjoonsoo.kim@....com, taejoon.song@....com,
Matthew Wilcox <willy@...radead.org>,
David Hildenbrand <david@...hat.com>,
Kairui Song <ryncsn@...il.com>
Subject: Re: [PATCH 1/4] mm/swap, memcg: Introduce infrastructure for
cgroup-based swap priority
I still believe that the priority-based approach has more flexibility
and can cover more usage scenarios; that opinion has not changed.
However, from this discussion I came to clearly understand and agree on
three points:
1. The swap.tier idea can be implemented in a much simpler way.
2. It can cover the most important use cases I initially needed, as well
as common performance scenarios, without causing LRU inversion.
3. No use case truly requires arbitrary ordering today; the scenario I
suggested is hypothetical (it is only a possibility).
I have also considered the situation where I might need to revisit my
original idea in the future. I believe this would still be manageable
within the swap.tier framework. For example:
* If, after swap.tier is merged, an arbitrary-ordering use case arises
(which you do not consider concrete), it could be solved by allowing
cgroups to remap the tier order individually.
* If reviewers later decide to go back to the priority-based direction,
I think that will still be possible. By then, much of the work would
already have been done in patch v2, so switching back would not be
impossible.
Also, since I highly respect your long-time contributions and deep
thinking in the swap layer, I have decided to move the idea forward
based on swap.tier.
For now, I would like to share the first major direction change I am
considering and get feedback on how to proceed. If you think this path
is promising, please advise whether I should continue it as patch v2 or
send it as a new RFC or patch series.
-----------------------------------------------------------------------
1. Interface
-----------------------------------------------------------------------
In the initial thread you replied with the following examples:
> Here are a few examples:
> e.g. consider the following cgroup hierarchy a/b/c/d, a as the first
> level cgroup.
> a/swap.tiers: "- +compress_ram"
> it means who shall not be named is set to opt out, optin in
> compress_ram only, no ssd, no hard.
> Who shall not be named, if specified, has to be the first one listed
> in the "swap.tiers".
>
> a/b/swap.tiers: "+ssd"
> For b cgroup, who shall not be named is not specified, the tier is
> appended to the parent "a/swap.tiers". The effective "a/b/swap.tiers"
> become "- +compress_ram +ssd"
> a/b can use both zswap and ssd.
>
> Every time the who shall not be named is changed, it can drop the
> parent swap.tiers chain, starting from scratch.
>
> a/b/c/swap.tiers: "-"
>
> For c, it turns off all swap. The effective "a/b/c/swap.tiers" become
> "- +compress_ram +ssd -" which simplify as "-", because the second "-"
> overwrites all previous optin/optout results.
> In other words, if the current cgroup does not specify the who shall
> not be named, it will walk the parent chain until it does. The global
> "/" for non cgroup is on.
>
> a/b/c/d/swap.tiers: "- +hdd"
> For d, only hdd swap, nothing else.
>
> More example:
> "- +ssd +hdd -ssd" will simplify to: "- +hdd", which means hdd only.
> "+ -hdd": No hdd for you! Use everything else.
>
> Let me know what you think about the above "swap.tiers"(name TBD)
> proposal.
My opinion is that instead of mapping priorities into named concepts, it
may be simpler to represent them as plain integers.
(The integers are assigned in sequential order, as explained in the reply below.)
This would make the interface almost identical to the cpuset style suggested by Michal Koutný.
For example:
echo 1-8,9-10 > a/swap.tier # parent allows tier ranges 1-8 and 9-10
echo 1-4,9 > a/b/swap.tier # child uses tiers 1-4 and 9 within the parent's range
echo 20 > a/b/swap.tier # invalid: parent only allowed 1-8 and 9-10
Named concepts can be handled by a userland software solution: the
kernel just provides the simple integer mapping, and userland can
present it to users as "named" tiers.
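To illustrate the cpuset-like semantics, here is a minimal userspace
sketch of the parent/child validation I have in mind. All names here
(struct swap_tier_cg, swap_tier_effective, swap_tier_may_set) are
hypothetical stand-ins, not actual kernel code; tiers are modeled as one
bit per tier number:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical per-cgroup tier state: bit N set means tier N selected. */
struct swap_tier_cg {
	uint32_t tiers;			/* tiers requested for this cgroup */
	struct swap_tier_cg *parent;	/* NULL for the root cgroup */
};

/* Effective mask: what this cgroup requested, constrained by every
 * ancestor, the same way cpuset constrains cpus along the hierarchy. */
static uint32_t swap_tier_effective(const struct swap_tier_cg *cg)
{
	uint32_t mask = cg->tiers;

	for (cg = cg->parent; cg; cg = cg->parent)
		mask &= cg->tiers;
	return mask;
}

/* "echo 20 > a/b/swap.tier" fails when tier 20 falls outside the
 * parent's effective range. */
static bool swap_tier_may_set(const struct swap_tier_cg *parent,
			      uint32_t new_tiers)
{
	if (!parent)
		return true;		/* root may pick any tier */
	return (new_tiers & ~swap_tier_effective(parent)) == 0;
}
```

With a parent holding tiers 1-10, setting "1-4,9" on the child would
pass this check and "20" would be rejected, matching the echo examples
above.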
Regarding the mapping of names to ranges, as you also mentioned:
> There is a simple mapping of global swap tier names into priority
> range
> The name itself is customizable.
> e.g. 100+ is the "compress_ram" tier. 50-99 is the "SSD" tier,
> 0-55 is the "hdd" tier.
> The detailed mechanization and API is TBD.
> The end result is a simple tier name lookup will get the priority
> range.
> By default all swap tiers are available for global usage without
> cgroup. That matches the current global swap on behavior.
One idea would be to provide a /proc/swaptier interface:
echo "100 40" > /proc/swaptier
This would mean:
* >=100 : tier 1
* 40-99 : tier 2
* <40 : tier 3
How do you feel about this approach?
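As a concrete sketch of that mapping, assuming the hypothetical
"100 40" thresholds written to /proc/swaptier, the priority-to-tier
lookup could be as simple as the following (priority_to_tier and the
threshold array are illustrative names, not existing interfaces):

```c
#include <assert.h>

/* Hypothetical thresholds as written to /proc/swaptier: "100 40" means
 * priority >= 100 is tier 1, 40..99 is tier 2, below 40 is tier 3. */
static const int swaptier_thresholds[] = { 100, 40 };
static const int swaptier_nr = 2;

/* Map a swap device priority to its 1-based tier number. */
static int priority_to_tier(int prio)
{
	int i;

	for (i = 0; i < swaptier_nr; i++)
		if (prio >= swaptier_thresholds[i])
			return i + 1;
	return swaptier_nr + 1;	/* everything below the last threshold */
}
```

So a device registered at priority 120 would land in tier 1, priority 60
in tier 2, and priority 10 in tier 3.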
-----------------------------------------------------------------------
2. NUMA autobind
-----------------------------------------------------------------------
If NUMA autobind is in use, it may be best to simply disallow swap.tier
settings. I expect workloads that depend on autobind rely on it
globally, rather than per-cgroup. Therefore, when a negative priority is
present, tier grouping could reject the configuration.
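A rough userspace sketch of that rejection rule, using a stand-in
struct swap_dev instead of the real struct swap_info_struct (the
function name and -EBUSY choice are only assumptions):

```c
#include <assert.h>
#include <errno.h>

/* Minimal stand-in for a registered swap device; the real kernel code
 * would look at struct swap_info_struct instead. */
struct swap_dev {
	int prio;	/* negative means kernel-assigned (NUMA autobind) */
};

/* Reject per-cgroup tier configuration while any swap device carries a
 * negative, autobind-assigned priority. */
static int swap_tier_check_autobind(const struct swap_dev *devs, int nr)
{
	int i;

	for (i = 0; i < nr; i++)
		if (devs[i].prio < 0)
			return -EBUSY;	/* autobind in use: disallow tiers */
	return 0;
}
```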
-----------------------------------------------------------------------
3. Implementation
-----------------------------------------------------------------------
My initial thought is to implement a simple bitmask check: in the slow
swap path, check whether the cgroup has selected the given tier. This is
simple, but I worry it might lose the optimization of the current
priority list, where devices are dynamically tracked as they become
available or unavailable.
So perhaps a better design is to make the swap tier an object, and have
each cgroup traverse only the priority lists of the tiers it selected. I
would like feedback on whether this design makes sense.
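For reference, the simple bitmask variant I described first could look
roughly like this userspace sketch (struct tier_dev and pick_swap_dev
are hypothetical names; real code would walk the existing plist of swap
devices):

```c
#include <assert.h>

/* Hypothetical view of the allocation slow path: devices are already
 * ordered by priority, and entries outside the cgroup's tier mask are
 * skipped. */
struct tier_dev {
	int tier;		/* 1-based tier this device belongs to */
	long free_slots;	/* remaining swap slots on the device */
};

/* Return the index of the first usable device for a cgroup, or -1.
 * 'mask' has bit N set when tier N is selected by the cgroup. */
static int pick_swap_dev(const struct tier_dev *devs, int nr,
			 unsigned int mask)
{
	int i;

	for (i = 0; i < nr; i++) {
		if (!(mask & (1u << devs[i].tier)))
			continue;	/* tier not selected by this cgroup */
		if (devs[i].free_slots > 0)
			return i;
	}
	return -1;
}
```

The drawback is visible here: every allocation still scans past devices
the cgroup can never use, which is exactly what per-tier priority lists
would avoid.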
-----------------------------------------------------------------------
Finally, I want to thank all the reviewers for the constructive
feedback. Even if we move to the swap.tier approach, the reviews from
Kairui, Nhat Pham, and Michal Koutný are still valid and will remain
relevant.
Kairui, Nhat Pham
* Regarding per-cgroup per-cluster feedback: this would likely need to
be adapted to tier-based design.
* Regarding passing percpu info along the allocation path: since tier is
selected per-cgroup, this may still be needed, depending on
implementation.
Koutný
* Regarding NUMA autobind complexity: as explained above, I intend to
design the mechanism so that autobind does not affect it. Parent-child
semantics will remain essentially identical to cpuset. If the proposed
interface is accepted, its usage would be like cpuset, which should be
less controversial.
---
Thank you again for the suggestions. I will continue to review while
waiting for your feedback.
Best Regards,
Youngjun Park