Message-ID: <20251215211357.GF905277@cmpxchg.org>
Date: Mon, 15 Dec 2025 16:13:57 -0500
From: Johannes Weiner <hannes@...xchg.org>
To: Chen Ridong <chenridong@...weicloud.com>
Cc: akpm@...ux-foundation.org, axelrasmussen@...gle.com, yuanchu@...gle.com,
weixugc@...gle.com, david@...nel.org, lorenzo.stoakes@...cle.com,
Liam.Howlett@...cle.com, vbabka@...e.cz, rppt@...nel.org,
surenb@...gle.com, mhocko@...e.com, corbet@....net,
roman.gushchin@...ux.dev, shakeel.butt@...ux.dev,
muchun.song@...ux.dev, zhengqi.arch@...edance.com,
linux-mm@...ck.org, linux-doc@...r.kernel.org,
linux-kernel@...r.kernel.org, cgroups@...r.kernel.org,
lujialin4@...wei.com, zhongjinji@...or.com
Subject: Re: [PATCH -next 3/5] mm/mglru: extend shrink_one for both lrugen
and non-lrugen

On Tue, Dec 09, 2025 at 01:25:55AM +0000, Chen Ridong wrote:
> From: Chen Ridong <chenridong@...wei.com>
>
> Currently, flush_reclaim_state is placed differently between
> shrink_node_memcgs and shrink_many. shrink_many (only used for gen-LRU)
> calls it after each lruvec is shrunk, while shrink_node_memcgs calls it
> only after all lruvecs have been shrunk.
>
> This patch moves flush_reclaim_state into shrink_node_memcgs and calls it
> after each lruvec. This unifies the behavior and is reasonable because:
>
> 1. flush_reclaim_state adds current->reclaim_state->reclaimed to
> sc->nr_reclaimed.
> 2. For non-MGLRU root reclaim, this can help stop the iteration earlier
> when nr_to_reclaim is reached.
> 3. For non-root reclaim, the effect is negligible since flush_reclaim_state
> does nothing in that case.
>
> After moving flush_reclaim_state into shrink_node_memcgs, shrink_one can be
> extended to support both lrugen and non-lrugen paths. It will call
> try_to_shrink_lruvec for lrugen root reclaim and shrink_lruvec otherwise.
>
> Signed-off-by: Chen Ridong <chenridong@...wei.com>
> ---
> mm/vmscan.c | 57 +++++++++++++++++++++--------------------------------
> 1 file changed, 23 insertions(+), 34 deletions(-)
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 584f41eb4c14..795f5ebd9341 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -4758,23 +4758,7 @@ static bool try_to_shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc)
>  	return nr_to_scan < 0;
>  }
>
> -static void shrink_one(struct lruvec *lruvec, struct scan_control *sc)
> -{
> -	unsigned long scanned = sc->nr_scanned;
> -	unsigned long reclaimed = sc->nr_reclaimed;
> -	struct pglist_data *pgdat = lruvec_pgdat(lruvec);
> -	struct mem_cgroup *memcg = lruvec_memcg(lruvec);
> -
> -	try_to_shrink_lruvec(lruvec, sc);
> -
> -	shrink_slab(sc->gfp_mask, pgdat->node_id, memcg, sc->priority);
> -
> -	if (!sc->proactive)
> -		vmpressure(sc->gfp_mask, memcg, false, sc->nr_scanned - scanned,
> -			   sc->nr_reclaimed - reclaimed);
> -
> -	flush_reclaim_state(sc);
> -}
> +static void shrink_one(struct lruvec *lruvec, struct scan_control *sc);
>
>  static void shrink_many(struct pglist_data *pgdat, struct scan_control *sc)
>  {
> @@ -5760,6 +5744,27 @@ static inline bool should_continue_reclaim(struct pglist_data *pgdat,
>  	return inactive_lru_pages > pages_for_compaction;
>  }
>
> +static void shrink_one(struct lruvec *lruvec, struct scan_control *sc)
> +{
> +	unsigned long scanned = sc->nr_scanned;
> +	unsigned long reclaimed = sc->nr_reclaimed;
> +	struct pglist_data *pgdat = lruvec_pgdat(lruvec);
> +	struct mem_cgroup *memcg = lruvec_memcg(lruvec);
> +
> +	if (lru_gen_enabled() && root_reclaim(sc))
> +		try_to_shrink_lruvec(lruvec, sc);
> +	else
> +		shrink_lruvec(lruvec, sc);

Yikes. So we end up with:

  shrink_node_memcgs()
    shrink_one()
      if lru_gen_enabled && root_reclaim(sc)
        try_to_shrink_lruvec(lruvec, sc)
      else
        shrink_lruvec()
          if lru_gen_enabled && !root_reclaim(sc)
            lru_gen_shrink_lruvec(lruvec, sc)
              try_to_shrink_lruvec()
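
Spelled out in C, the mirrored checks look roughly like this (a
paraphrased sketch of the post-patch code, not the verbatim source):

	static void shrink_one(struct lruvec *lruvec, struct scan_control *sc)
	{
		/* stats, shrink_slab(), vmpressure() bookkeeping elided */
		if (lru_gen_enabled() && root_reclaim(sc))
			try_to_shrink_lruvec(lruvec, sc);
		else
			shrink_lruvec(lruvec, sc);
	}

	static void shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc)
	{
		/* the inverse of the check one level up */
		if (lru_gen_enabled() && !root_reclaim(sc)) {
			lru_gen_shrink_lruvec(lruvec, sc);
			return;
		}
		/* classic anon/file LRU reclaim follows */
	}

The same lru_gen_enabled() condition is tested at two different levels
with opposite root_reclaim() polarity, which is what makes the control
flow hard to follow.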

I think it's doing too much at once. Can you get it into the following
shape:

  shrink_node_memcgs()
    for each memcg:
      if lru_gen_enabled:
        lru_gen_shrink_lruvec()
      else
        shrink_lruvec()

and handle the differences in those two functions? Then look for
overlap one level down, and so forth.
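
As a sketch of that direction (not a finished patch; it assumes the
common bookkeeping from shrink_one() moves into the memcg loop, and
lru_gen_shrink_lruvec() would have to absorb the root-reclaim path
internally):

	/* in shrink_node_memcgs(), once per memcg: */
	struct lruvec *lruvec = mem_cgroup_lruvec(memcg, pgdat);
	unsigned long scanned = sc->nr_scanned;
	unsigned long reclaimed = sc->nr_reclaimed;

	if (lru_gen_enabled())
		lru_gen_shrink_lruvec(lruvec, sc);
	else
		shrink_lruvec(lruvec, sc);

	shrink_slab(sc->gfp_mask, pgdat->node_id, memcg, sc->priority);

	if (!sc->proactive)
		vmpressure(sc->gfp_mask, memcg, false, sc->nr_scanned - scanned,
			   sc->nr_reclaimed - reclaimed);

	flush_reclaim_state(sc);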
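
(Side note on point 3 of the changelog: flush_reclaim_state() really is
a no-op for memcg reclaim. From memory it is roughly:

	static void flush_reclaim_state(struct scan_control *sc)
	{
		/*
		 * Pages freed outside of vmscan, e.g. slab, are only
		 * credited to root reclaim.
		 */
		if (current->reclaim_state && root_reclaim(sc)) {
			sc->nr_reclaimed += current->reclaim_state->reclaimed;
			current->reclaim_state->reclaimed = 0;
		}
	}

so moving the call around only changes where root reclaim picks up that
credit.)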