Message-ID: <6c69c4d9-f154-4ad3-93c8-907fa4f98b27@huaweicloud.com>
Date: Tue, 16 Dec 2025 09:14:45 +0800
From: Chen Ridong <chenridong@...weicloud.com>
To: Johannes Weiner <hannes@...xchg.org>
Cc: akpm@...ux-foundation.org, axelrasmussen@...gle.com, yuanchu@...gle.com,
weixugc@...gle.com, david@...nel.org, lorenzo.stoakes@...cle.com,
Liam.Howlett@...cle.com, vbabka@...e.cz, rppt@...nel.org, surenb@...gle.com,
mhocko@...e.com, corbet@....net, roman.gushchin@...ux.dev,
shakeel.butt@...ux.dev, muchun.song@...ux.dev, zhengqi.arch@...edance.com,
linux-mm@...ck.org, linux-doc@...r.kernel.org, linux-kernel@...r.kernel.org,
cgroups@...r.kernel.org, lujialin4@...wei.com, zhongjinji@...or.com
Subject: Re: [PATCH -next 3/5] mm/mglru: extend shrink_one for both lrugen and
non-lrugen
On 2025/12/16 5:13, Johannes Weiner wrote:
> On Tue, Dec 09, 2025 at 01:25:55AM +0000, Chen Ridong wrote:
>> From: Chen Ridong <chenridong@...wei.com>
>>
>> Currently, flush_reclaim_state is placed differently between
>> shrink_node_memcgs and shrink_many. shrink_many (only used for gen-LRU)
>> calls it after each lruvec is shrunk, while shrink_node_memcgs calls it
>> only after all lruvecs have been shrunk.
>>
>> This patch moves flush_reclaim_state into shrink_node_memcgs and calls it
>> after each lruvec. This unifies the behavior and is reasonable because:
>>
>> 1. flush_reclaim_state adds current->reclaim_state->reclaimed to
>> sc->nr_reclaimed.
>> 2. For non-MGLRU root reclaim, this can help stop the iteration earlier
>> when nr_to_reclaim is reached.
>> 3. For non-root reclaim, the effect is negligible since flush_reclaim_state
>> does nothing in that case.
>>
>> After moving flush_reclaim_state into shrink_node_memcgs, shrink_one can be
>> extended to support both lrugen and non-lrugen paths. It will call
>> try_to_shrink_lruvec for lrugen root reclaim and shrink_lruvec otherwise.
>>
>> Signed-off-by: Chen Ridong <chenridong@...wei.com>
>> ---
>> mm/vmscan.c | 57 +++++++++++++++++++++--------------------------------
>> 1 file changed, 23 insertions(+), 34 deletions(-)
>>
>> diff --git a/mm/vmscan.c b/mm/vmscan.c
>> index 584f41eb4c14..795f5ebd9341 100644
>> --- a/mm/vmscan.c
>> +++ b/mm/vmscan.c
>> @@ -4758,23 +4758,7 @@ static bool try_to_shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc)
>>  	return nr_to_scan < 0;
>>  }
>>  
>> -static void shrink_one(struct lruvec *lruvec, struct scan_control *sc)
>> -{
>> -	unsigned long scanned = sc->nr_scanned;
>> -	unsigned long reclaimed = sc->nr_reclaimed;
>> -	struct pglist_data *pgdat = lruvec_pgdat(lruvec);
>> -	struct mem_cgroup *memcg = lruvec_memcg(lruvec);
>> -
>> -	try_to_shrink_lruvec(lruvec, sc);
>> -
>> -	shrink_slab(sc->gfp_mask, pgdat->node_id, memcg, sc->priority);
>> -
>> -	if (!sc->proactive)
>> -		vmpressure(sc->gfp_mask, memcg, false, sc->nr_scanned - scanned,
>> -			   sc->nr_reclaimed - reclaimed);
>> -
>> -	flush_reclaim_state(sc);
>> -}
>> +static void shrink_one(struct lruvec *lruvec, struct scan_control *sc);
>>  
>>  static void shrink_many(struct pglist_data *pgdat, struct scan_control *sc)
>>  {
>> @@ -5760,6 +5744,27 @@ static inline bool should_continue_reclaim(struct pglist_data *pgdat,
>>  	return inactive_lru_pages > pages_for_compaction;
>>  }
>>  
>> +static void shrink_one(struct lruvec *lruvec, struct scan_control *sc)
>> +{
>> +	unsigned long scanned = sc->nr_scanned;
>> +	unsigned long reclaimed = sc->nr_reclaimed;
>> +	struct pglist_data *pgdat = lruvec_pgdat(lruvec);
>> +	struct mem_cgroup *memcg = lruvec_memcg(lruvec);
>> +
>> +	if (lru_gen_enabled() && root_reclaim(sc))
>> +		try_to_shrink_lruvec(lruvec, sc);
>> +	else
>> +		shrink_lruvec(lruvec, sc);
>
Hi Johannes, thank you for your reply.
> Yikes. So we end up with:
>
> shrink_node_memcgs()
>   shrink_one()
>     if lru_gen_enabled && root_reclaim(sc)
>       try_to_shrink_lruvec(lruvec, sc)
>     else
>       shrink_lruvec()
>         if lru_gen_enabled && !root_reclaim(sc)
>           lru_gen_shrink_lruvec(lruvec, sc)
>             try_to_shrink_lruvec()
>
> I think it's doing too much at once. Can you get it into the following
> shape:
>
You're absolutely right. This refactoring is indeed what patch 5/5 implements.
With patch 5/5 applied, the flow becomes:
shrink_node_memcgs()
  shrink_one()
    if lru_gen_enabled
      lru_gen_shrink_lruvec()      --> symmetric with shrink_lruvec() in the else branch
        if (root_reclaim(sc))      --> handles root reclaim
          try_to_shrink_lruvec()
        else
          ...
          try_to_shrink_lruvec()
    else
      shrink_lruvec()
This matches the structure you described.
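To make it concrete, the shape I have in mind after patch 5/5 is roughly the following. This is
only a sketch of the flow above, not the exact code from the patch; the shrink_slab()/vmpressure()/
flush_reclaim_state() handling and the existing setup in the memcg-reclaim path are elided:

static void shrink_one(struct lruvec *lruvec, struct scan_control *sc)
{
	/* pure dispatch; the root vs. memcg distinction lives one level down */
	if (lru_gen_enabled())
		lru_gen_shrink_lruvec(lruvec, sc);
	else
		shrink_lruvec(lruvec, sc);
	/* ... shrink_slab(), vmpressure(), etc. ... */
}

static void lru_gen_shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc)
{
	if (root_reclaim(sc)) {
		/* root reclaim */
		try_to_shrink_lruvec(lruvec, sc);
	} else {
		/* memcg reclaim: existing setup (blk plug, nr_to_reclaim, ...) */
		try_to_shrink_lruvec(lruvec, sc);
	}
}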
One note: shrink_one() is also called from lru_gen_shrink_node() when memcg is disabled, so I
believe it makes sense to keep this helper.
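For reference, that call site looks roughly like this today (simplified from mm/vmscan.c, details
trimmed):

static void lru_gen_shrink_node(struct pglist_data *pgdat, struct scan_control *sc)
{
	/* ... */
	if (mem_cgroup_disabled())
		shrink_one(&pgdat->__lruvec, sc);	/* node-wide lruvec, no memcg iteration */
	else
		shrink_many(pgdat, sc);
	/* ... */
}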
> shrink_node_memcgs()
>   for each memcg:
>     if lru_gen_enabled:
>       lru_gen_shrink_lruvec()
>     else
>       shrink_lruvec()
>
Regarding the patch split, I kept patches 3/5 and 5/5 separate to make each step easier to review.
Would you prefer that I squash them so the full refactoring lands in a single patch?
Looking forward to your guidance.
> and handle the differences in those two functions? Then look for
> overlap one level down, and so forth.
--
Best regards,
Ridong