Message-Id: <20251224073032.161911-4-chenridong@huaweicloud.com>
Date: Wed, 24 Dec 2025 07:30:28 +0000
From: Chen Ridong <chenridong@...weicloud.com>
To: akpm@...ux-foundation.org,
axelrasmussen@...gle.com,
yuanchu@...gle.com,
weixugc@...gle.com,
david@...nel.org,
lorenzo.stoakes@...cle.com,
Liam.Howlett@...cle.com,
vbabka@...e.cz,
rppt@...nel.org,
surenb@...gle.com,
mhocko@...e.com,
corbet@....net,
hannes@...xchg.org,
roman.gushchin@...ux.dev,
shakeel.butt@...ux.dev,
muchun.song@...ux.dev,
zhengqi.arch@...edance.com,
mkoutny@...e.com
Cc: linux-mm@...ck.org,
linux-doc@...r.kernel.org,
linux-kernel@...r.kernel.org,
cgroups@...r.kernel.org,
lujialin4@...wei.com,
chenridong@...weicloud.com
Subject: [PATCH -next v2 3/7] mm/mglru: make calls to flush_reclaim_state() similar for MGLRU and non-MGLRU
From: Chen Ridong <chenridong@...wei.com>
Currently, flush_reclaim_state() is called at different points on the two
reclaim paths. shrink_many() (used only by MGLRU) calls it after each
lruvec is shrunk, while on the non-MGLRU path it is called from
shrink_node(), only after all lruvecs have been shrunk.
This patch moves flush_reclaim_state() into shrink_node_memcgs() and calls
it after each lruvec is shrunk. This unifies the two paths and is
reasonable because:
1. flush_reclaim_state adds current->reclaim_state->reclaimed to
sc->nr_reclaimed.
2. For non-MGLRU root reclaim, this can help stop the iteration earlier
when nr_to_reclaim is reached.
3. For non-root reclaim, the effect is negligible since flush_reclaim_state
does nothing in that case.
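To illustrate the effect described in point 2, here is a minimal userspace
sketch. The structures and helper below are hypothetical stand-ins for the
kernel's scan_control, reclaim_state, and flush_reclaim_state(), trimmed to
the fields relevant here; it is not the actual kernel code:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical, simplified stand-ins for the kernel structures. */
struct reclaim_state { unsigned long reclaimed; };
struct scan_control {
	unsigned long nr_reclaimed;
	unsigned long nr_to_reclaim;
	bool root_reclaim;		/* true for root (global) reclaim */
	struct reclaim_state *rs;	/* stands in for current->reclaim_state */
};

/*
 * Mirrors the behavior described above: fold the pages recorded in
 * current->reclaim_state into sc->nr_reclaimed, but only for root
 * reclaim; for non-root (memcg) reclaim it does nothing.
 */
static void flush_reclaim_state(struct scan_control *sc)
{
	if (sc->rs && sc->root_reclaim) {
		sc->nr_reclaimed += sc->rs->reclaimed;
		sc->rs->reclaimed = 0;
	}
}

/*
 * Flushing after each lruvec lets the loop see the up-to-date total and
 * bail out as soon as nr_to_reclaim is reached, instead of only after
 * every lruvec has been shrunk. Returns how many lruvecs were visited.
 */
static size_t shrink_memcgs_sketch(struct scan_control *sc,
				   const unsigned long *per_lruvec, size_t n)
{
	size_t visited = 0;

	for (size_t i = 0; i < n; i++) {
		sc->nr_reclaimed += per_lruvec[i];	/* LRU pages */
		sc->rs->reclaimed += per_lruvec[i] / 2;	/* e.g. slab pages */
		flush_reclaim_state(sc);		/* the moved call */
		visited++;
		if (sc->nr_reclaimed >= sc->nr_to_reclaim)
			break;				/* bail early */
	}
	return visited;
}
```

With three lruvecs each yielding 10 LRU pages plus 5 indirectly reclaimed
pages and a goal of 25, per-lruvec flushing stops after the second lruvec;
flushing only at the end would have scanned all three.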
Signed-off-by: Chen Ridong <chenridong@...wei.com>
---
mm/vmscan.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 8f4f9320e4c9..bbdcd4fcfd74 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -5822,6 +5822,8 @@ static void shrink_node_memcgs(pg_data_t *pgdat, struct scan_control *sc)
 			   sc->nr_scanned - scanned,
 			   sc->nr_reclaimed - reclaimed);
 
+		flush_reclaim_state(sc);
+
 		/* If partial walks are allowed, bail once goal is reached */
 		if (partial && sc->nr_reclaimed >= sc->nr_to_reclaim) {
 			mem_cgroup_iter_break(target_memcg, memcg);
@@ -5854,8 +5856,6 @@ static void shrink_node(pg_data_t *pgdat, struct scan_control *sc)
 
 	shrink_node_memcgs(pgdat, sc);
 
-	flush_reclaim_state(sc);
-
 	nr_node_reclaimed = sc->nr_reclaimed - nr_reclaimed;
 
 	/* Record the subtree's reclaim efficiency */
--
2.34.1