Message-Id: <20250627190725.52969-1-sj@kernel.org>
Date: Fri, 27 Jun 2025 12:07:25 -0700
From: SeongJae Park <sj@...nel.org>
To: Shakeel Butt <shakeel.butt@...ux.dev>
Cc: SeongJae Park <sj@...nel.org>,
Davidlohr Bueso <dave@...olabs.net>,
akpm@...ux-foundation.org,
mhocko@...nel.org,
hannes@...xchg.org,
roman.gushchin@...ux.dev,
yosryahmed@...gle.com,
linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 4/4] mm: introduce per-node proactive reclaim interface
On Wed, 25 Jun 2025 16:10:16 -0700 Shakeel Butt <shakeel.butt@...ux.dev> wrote:
> On Mon, Jun 23, 2025 at 11:58:51AM -0700, Davidlohr Bueso wrote:
> > This adds support for allowing proactive reclaim in general on a
> > NUMA system. A per-node interface extends support for beyond a
> > memcg-specific interface, respecting the current semantics of
> > memory.reclaim: respecting aging LRU and not supporting
> > artificially triggering eviction on nodes belonging to non-bottom
> > tiers.
> >
> > This patch allows userspace to do:
> >
> > echo "512M swappiness=10" > /sys/devices/system/node/nodeX/reclaim
[...]
> One orthogonal thought: I wonder if we want a unified aging (hotness or
> generation or active/inactive) view of jobs/memcgs/system. At the moment
> due to the way LRUs are implemented i.e. per-memcg per-node, we can have
> different view of these LRUs even for the same memcg. For example the
> hottest pages in low tier node might be colder than coldest pages in the
> top tier.
I think it would be nice to have, and DAMON could help.
DAMON can monitor access patterns over the entire physical address space and,
based on the monitored pattern, take actions such as migrating pages to
different nodes[1] or LRU-[de]activating ([anti-]aging) them[2], optionally
restricted to specific cgroups[3,4].
Such migrations and [anti-]aging would not conflict with page fault and memory
pressure based promotions and demotions, so they could complement existing
tiering solutions when run alongside them.
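For example, something like below (an untested sketch using the DAMON sysfs
interface; assumes a DAMON-enabled kernel, that kdamonds/0 has already been
allocated via nr_kdamonds, and uses node 1 as a hypothetical lower-tier
migration target):

```shell
# sketch: ask DAMON to migrate cold pages of the physical address
# space to node 1, via the DAMON sysfs interface
cd /sys/kernel/mm/damon/admin/kdamonds/0
echo 1 > contexts/nr_contexts
echo paddr > contexts/0/operations        # monitor the physical address space
echo 1 > contexts/0/schemes/nr_schemes
echo migrate_cold > contexts/0/schemes/0/action
echo 1 > contexts/0/schemes/0/target_nid  # hypothetical lower-tier node
echo on > state                           # start monitoring and the scheme
```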
> Not sure how to implement it in a scalable way.
DAMON's monitoring overhead is designed to be independent of the memory size,
so it is scalable in that regard. We recently found that it shows reasonable
monitoring results on a 1 TiB memory machine[5]. DAMON incurs minimal overhead
and is limited to one CPU by default. If needed, it could also scale out using
multiple threads.
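The overhead bound comes from the monitoring intervals rather than the memory
size; they can be tuned from sysfs (again an untested sketch, with example
interval values of my own choosing):

```shell
# sketch: set DAMON monitoring intervals; larger intervals mean lower
# CPU overhead at the cost of coarser monitoring results
cd /sys/kernel/mm/damon/admin/kdamonds/0/contexts/0/monitoring_attrs/intervals
echo 5000 > sample_us     # sample each region every 5 ms
echo 100000 > aggr_us     # aggregate results every 100 ms
echo 1000000 > update_us  # update the regions every 1 s
```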
[1] https://lore.kernel.org/all/20250420194030.75838-1-sj@kernel.org
[2] https://lore.kernel.org/all/20220613192301.8817-1-sj@kernel.org
[3] https://lkml.kernel.org/r/20221205230830.144349-1-sj@kernel.org
[4] https://lore.kernel.org/20250619220023.24023-1-sj@kernel.org
[5] page 46, right side plot of
https://static.sched.com/hosted_files/ossna2025/16/damon_ossna25.pdf
Thanks,
SJ