Message-ID: <20260210054312.303129-1-zhaoyang.huang@unisoc.com>
Date: Tue, 10 Feb 2026 13:43:12 +0800
From: "zhaoyang.huang" <zhaoyang.huang@...soc.com>
To: Andrew Morton <akpm@...ux-foundation.org>, Yu Zhao <yuzhao@...gle.com>,
Michal Hocko <mhocko@...nel.org>, Rik van Riel <riel@...riel.com>,
Shakeel Butt <shakeel.butt@...ux.dev>,
Roman Gushchin <roman.gushchin@...ux.dev>,
Johannes Weiner <hannes@...xchg.org>, <linux-mm@...ck.org>,
<linux-kernel@...r.kernel.org>,
Zhaoyang Huang <huangzhaoyang@...il.com>, <steve.kang@...soc.com>
Subject: [RESEND PATCH] mm: bail out from partial cgroup_reclaim inside shrink_lruvec
From: Zhaoyang Huang <zhaoyang.huang@...soc.com>
Nowadays, Android uses memory.reclaim instead of madvise for user-space
memory management, with the aim of reclaiming a given amount of memory
from a memcg. However, over-reclaim and high latency are observed,
because nothing caps nr_reclaimed inside try_to_shrink_lruvec() when
MGLRU is enabled. This also affects other non-root_reclaim paths such
as reclaim_high().
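For context, such proactive reclaim is driven from userspace by writing
a size to the cgroup's memory.reclaim file. A minimal sketch of that
usage follows; the cgroup path is an assumption made purely for
illustration:

/*
 * Sketch of driving proactive reclaim via memory.reclaim (cgroup v2).
 * The path below is hypothetical; writing "16M" asks the kernel to
 * reclaim about 16 MiB from this memcg, and the write fails with
 * EAGAIN if the kernel could not reclaim the full amount.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	const char *path = "/sys/fs/cgroup/app/memory.reclaim"; /* hypothetical */
	const char *req = "16M";
	int fd = open(path, O_WRONLY);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	if (write(fd, req, strlen(req)) < 0)
		perror("write");	/* EAGAIN: target not fully met */
	close(fd);
	return 0;
}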
Commit b82b530740b9 ("mm: vmscan: restore incremental cgroup
iteration") introduced sc->memcg_full_walk to limit the walk range of
mem_cgroup_iter(). This patch makes the scanning of a single memcg more
precise by checking whether nr_to_reclaim has been reached when
sc->memcg_full_walk is not set.
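To make the intended behaviour concrete, here is a condensed sketch
(not the exact kernel function, which also checks zone watermarks) of
the should_abort_scan() logic after this change:

/*
 * Condensed sketch of should_abort_scan() after this patch. A
 * full-walk memcg reclaim still never bails early, to stay fair
 * across memcgs, while a partial walk (e.g. one memory.reclaim
 * request) stops as soon as the requested amount is reclaimed.
 */
static bool should_abort_scan_sketch(struct scan_control *sc)
{
	/* don't abort full walk memcg reclaim to ensure fairness */
	if (!root_reclaim(sc) && sc->memcg_full_walk)
		return false;

	/* partial walks bail once the request is satisfied */
	return sc->nr_reclaimed >= max(sc->nr_to_reclaim,
				       compact_gap(sc->order));
}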
Signed-off-by: Zhaoyang Huang <zhaoyang.huang@...soc.com>
---
mm/vmscan.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 670fe9fae5ba..03bda1094621 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -4832,8 +4832,8 @@ static bool should_abort_scan(struct lruvec *lruvec, struct scan_control *sc)
 	int i;
 	enum zone_watermarks mark;
 
-	/* don't abort memcg reclaim to ensure fairness */
-	if (!root_reclaim(sc))
+	/* don't abort full walk memcg reclaim to ensure fairness */
+	if (!root_reclaim(sc) && sc->memcg_full_walk)
 		return false;
 
 	if (sc->nr_reclaimed >= max(sc->nr_to_reclaim, compact_gap(sc->order)))
--
2.25.1