Date: Thu, 28 Dec 2023 07:30:55 +0000
From: Shakeel Butt <shakeelb@...gle.com>
To: Andrew Morton <akpm@...ux-foundation.org>, Yosry Ahmed <yosryahmed@...gle.com>, 
	Johannes Weiner <hannes@...xchg.org>, Yu Zhao <yuzhao@...gle.com>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org, 
	Shakeel Butt <shakeelb@...gle.com>
Subject: [PATCH] mm: ratelimit stat flush from workingset shrinker

One of our internal workloads regressed on a newer upstream kernel, and
on further investigation, the cause seems to be the always-synchronous
rstat flush in count_shadow_nodes() added by commit f82e6bf9bb9b
("mm: memcg: use rstat for non-hierarchical stats"). On further
inspection, it seems we don't really need accurate stats in this
function, as it was already approximating the number of shadow entries
to keep for maintaining the refault information. Since there is already
a periodic rstat flush every 2 seconds, we don't need exact stats here.
Let's ratelimit the rstat flush in this code path.
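
For reference, a sketch of what the ratelimited variant does, modeled
on the memcg stats flushing code in mm/memcontrol.c (treat
flush_last_time and FLUSH_TIME as assumptions about that
infrastructure): only fall back to a synchronous flush when the
periodic flusher looks a full cycle (2 seconds) behind:

	void mem_cgroup_flush_stats_ratelimited(struct mem_cgroup *memcg)
	{
		/*
		 * Assumed helpers: flush_last_time records when the last
		 * rstat flush happened, and FLUSH_TIME is the 2-second
		 * period of the workqueue flusher. Skip the expensive
		 * synchronous flush unless the periodic one is a full
		 * cycle overdue.
		 */
		if (time_after64(jiffies_64,
				 READ_ONCE(flush_last_time) + 2UL * FLUSH_TIME))
			mem_cgroup_flush_stats(memcg);
	}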

Fixes: f82e6bf9bb9b ("mm: memcg: use rstat for non-hierarchical stats")
Signed-off-by: Shakeel Butt <shakeelb@...gle.com>
---
 mm/workingset.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/workingset.c b/mm/workingset.c
index 2a2a34234df9..226012974328 100644
--- a/mm/workingset.c
+++ b/mm/workingset.c
@@ -680,7 +680,7 @@ static unsigned long count_shadow_nodes(struct shrinker *shrinker,
 		struct lruvec *lruvec;
 		int i;
 
-		mem_cgroup_flush_stats(sc->memcg);
+		mem_cgroup_flush_stats_ratelimited(sc->memcg);
 		lruvec = mem_cgroup_lruvec(sc->memcg, NODE_DATA(sc->nid));
 		for (pages = 0, i = 0; i < NR_LRU_LISTS; i++)
 			pages += lruvec_page_state_local(lruvec,
-- 
2.43.0.472.g3155946c3a-goog

