Message-ID: <20240615081257.3945587-1-shakeel.butt@linux.dev>
Date: Sat, 15 Jun 2024 01:12:57 -0700
From: Shakeel Butt <shakeel.butt@linux.dev>
To: Andrew Morton <akpm@linux-foundation.org>,
	Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@suse.com>,
	Roman Gushchin <roman.gushchin@linux.dev>,
	Yosry Ahmed <yosryahmed@google.com>,
	Jesper Dangaard Brouer <hawk@kernel.org>,
	Yu Zhao <yuzhao@google.com>,
	Muchun Song <songmuchun@bytedance.com>,
	Facebook Kernel Team <kernel-team@meta.com>,
	linux-mm@kvack.org,
	linux-kernel@vger.kernel.org
Subject: [PATCH] memcg: use ratelimited stats flush in the reclaim
Meta production is seeing a large number of stalls in the memcg stats
flush from the memcg reclaim code path. At the moment, this specific
callsite is doing a synchronous memcg stats flush. The rstat flush is
an expensive and time consuming operation, so concurrent reclaimers
will busywait on the lock, potentially for a long time. This issue is
not unique to Meta and has been observed by Cloudflare [1] as well. In
the Cloudflare case, the stalls were due to contention between kswapd
threads running on their 8 NUMA node machines, which does not make
sense as the rstat flush is global and a flush from one kswapd thread
should be sufficient for all. Simply replace the synchronous flush
with the ratelimited one.
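
For reference, the ratelimited variant turns the flush into a very
cheap check in the common case. A sketch of its core logic, based on
my reading of mm/memcontrol.c around this kernel version (the names
flush_last_time and FLUSH_TIME are from that file; details may have
shifted since, so treat this as illustrative, not part of this patch):

	/* Sketch of mm/memcontrol.c internals (not part of this patch). */
	#define FLUSH_TIME (2UL*HZ)	/* periodic flush worker interval: 2s */

	void mem_cgroup_flush_stats_ratelimited(struct mem_cgroup *memcg)
	{
		/*
		 * Skip the expensive rstat flush entirely unless the
		 * periodic flush worker is more than one full cycle
		 * behind, so concurrent reclaimers no longer pile up
		 * on the flush lock.
		 */
		if (time_after64(jiffies_64,
				 READ_ONCE(flush_last_time) + 2UL*FLUSH_TIME))
			mem_cgroup_flush_stats(memcg);
	}
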
One may raise a concern about potentially using stats that are up to 2
seconds stale for heuristics like the desirable inactive:active ratio
and preferring inactive file pages over anon pages, but these specific
heuristics do not require very precise stats and are also ignored
under severe memory pressure. This patch has been running on the Meta
fleet for more than a month and we have not observed any issues.
Please note that MGLRU is not impacted by this issue at all, as it
avoids rstat flushing completely.
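
The 2 second worst case above follows from the periodic flush worker,
which keeps reflushing the whole tree every FLUSH_TIME. Again a sketch
from my reading of mm/memcontrol.c (not part of this patch), so under
normal operation a ratelimited reader sees stats at most roughly one
flush interval old:

	/* Sketch of the periodic flusher in mm/memcontrol.c. */
	static void flush_memcg_stats_dwork(struct work_struct *w)
	{
		/* Force a full flush, then re-arm for the next cycle. */
		__mem_cgroup_flush_stats(root_mem_cgroup, true);
		queue_delayed_work(system_unbound_wq, &stats_flush_dwork,
				   FLUSH_TIME);
	}
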
Link: https://lore.kernel.org/all/6ee2518b-81dd-4082-bdf5-322883895ffc@kernel.org [1]
Signed-off-by: Shakeel Butt <shakeel.butt@linux.dev>
---
mm/vmscan.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index c0429fd6c573..bda4f92eba71 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2263,7 +2263,7 @@ static void prepare_scan_control(pg_data_t *pgdat, struct scan_control *sc)
 	 * Flush the memory cgroup stats, so that we read accurate per-memcg
 	 * lruvec stats for heuristics.
 	 */
-	mem_cgroup_flush_stats(sc->target_mem_cgroup);
+	mem_cgroup_flush_stats_ratelimited(sc->target_mem_cgroup);
 
 	/*
 	 * Determine the scan balance between anon and file LRUs.
--
2.43.0