Message-ID: <20260121090620.559242-1-zhaoyang.huang@unisoc.com>
Date: Wed, 21 Jan 2026 17:06:20 +0800
From: "zhaoyang.huang" <zhaoyang.huang@...soc.com>
To: Andrew Morton <akpm@...ux-foundation.org>,
	Michal Hocko <mhocko@...nel.org>,
	Johannes Weiner <hannes@...xchg.org>,
	<linux-mm@...ck.org>, <linux-kernel@...r.kernel.org>,
	Zhaoyang Huang <huangzhaoyang@...il.com>, <steve.kang@...soc.com>
Subject: [PATCH] mm: bail out when proactive memcg reclaim meets its goal
From: Zhaoyang Huang <zhaoyang.huang@...soc.com>
Proactive memcg reclaim specifies both a target mem cgroup and an
amount of memory to reclaim, unlike kswapd and direct reclaim, which
must preserve fairness among cgroups. Introduce a criterion that lets
proactive reclaim bail out once the target mem cgroup can meet the
goal from its own lruvec; if it cannot, the reclaim still walks the
whole tree once the iteration moves on to the descendants.
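
Proactive reclaim is requested from userspace through the cgroup v2
memory.reclaim interface, which reaches this path with sc->proactive
set. For illustration (the cgroup name "workload" here is made up):

  echo "512M" > /sys/fs/cgroup/workload/memory.reclaim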
Signed-off-by: Zhaoyang Huang <zhaoyang.huang@...soc.com>
---
mm/vmscan.c | 11 +++++++++--
1 file changed, 9 insertions(+), 2 deletions(-)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 670fe9fae5ba..5dcca4559b18 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -6028,8 +6028,15 @@ static void shrink_node_memcgs(pg_data_t *pgdat, struct scan_control *sc)
 			   sc->nr_scanned - scanned,
 			   sc->nr_reclaimed - reclaimed);
 
-		/* If partial walks are allowed, bail once goal is reached */
-		if (partial && sc->nr_reclaimed >= sc->nr_to_reclaim) {
+		/*
+		 * If partial walks are allowed, or for proactive reclaim
+		 * whose explicit target memcg exempts us from the fairness
+		 * concern, bail once the goal is reached. Note: this only
+		 * matters when target_memcg has both descendants and its
+		 * own charged folios; otherwise the whole tree is walked.
+		 */
+		if ((partial || (sc->proactive && target_memcg == memcg)) &&
+		    sc->nr_reclaimed >= sc->nr_to_reclaim) {
 			mem_cgroup_iter_break(target_memcg, memcg);
 			break;
 		}
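
For reference, a simplified sketch of the surrounding loop in
shrink_node_memcgs() (not part of the patch; the per-memcg reclaim
details are elided): mem_cgroup_iter(target_memcg, NULL, NULL) returns
target_memcg itself first, so the new condition can only be true
before any descendant has been visited.

	memcg = mem_cgroup_iter(target_memcg, NULL, NULL);
	do {
		/* shrink this memcg's lruvec on the node, then ... */

		/* new bail-out: only true while memcg == target_memcg */
		if ((partial || (sc->proactive && target_memcg == memcg)) &&
		    sc->nr_reclaimed >= sc->nr_to_reclaim) {
			mem_cgroup_iter_break(target_memcg, memcg);
			break;
		}
	} while ((memcg = mem_cgroup_iter(target_memcg, memcg, NULL)));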
--
2.25.1