Date: Wed, 26 Jun 2024 23:18:27 +0200
From: Jesper Dangaard Brouer <hawk@...nel.org>
To: tj@...nel.org, cgroups@...r.kernel.org, yosryahmed@...gle.com,
 shakeel.butt@...ux.dev
Cc: Jesper Dangaard Brouer <hawk@...nel.org>, hannes@...xchg.org,
 lizefan.x@...edance.com, longman@...hat.com, kernel-team@...udflare.com,
 linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: [PATCH V3 2/2] cgroup/rstat: Avoid thundering herd problem by kswapd
 across NUMA nodes

Avoid lock contention on the global cgroup rstat lock caused by kswapd
starting on all NUMA nodes simultaneously. At Cloudflare, we observed
massive issues due to kswapd and the specific mem_cgroup_flush_stats()
call inlined in shrink_node, which takes the rstat lock.

On our 12 NUMA node machines, each with a kswapd kthread per NUMA node,
we noted severe lock contention on the rstat lock. This contention
causes 12 CPUs to waste cycles spinning every time kswapd runs.
Fleet-wide stats (/proc/N/schedstat) for kthreads revealed that we
are burning an average of 20,000 CPU cores on kswapd, primarily due
to spinning on the rstat lock.
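
For reference, a minimal userspace sketch (illustrative only, not the
actual fleet tooling) of reading these numbers for one kthread from
/proc/<pid>/schedstat, where field 1 is on-CPU time and field 2 is
runqueue wait time, both in nanoseconds:

	/* Illustrative sketch, not Cloudflare's actual tooling */
	#include <stdio.h>

	int main(int argc, char **argv)
	{
		unsigned long long run_ns, wait_ns, slices;
		char path[64];
		FILE *f;

		if (argc < 2)
			return 1;
		snprintf(path, sizeof(path), "/proc/%s/schedstat", argv[1]);
		f = fopen(path, "r");
		if (!f || fscanf(f, "%llu %llu %llu",
				 &run_ns, &wait_ns, &slices) != 3)
			return 1;
		printf("on-cpu %.2fs rq-wait %.2fs timeslices %llu\n",
		       run_ns / 1e9, wait_ns / 1e9, slices);
		return 0;
	}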

To help reviewers follow the code: when the Per-CPU-Pages (PCP)
freelist is empty, __alloc_pages_slowpath() calls wake_all_kswapds(),
causing all kswapdN threads to wake up simultaneously. Each kswapd
thread invokes shrink_node() (via balance_pgdat()), triggering the
cgroup rstat flush operation as part of its work. Thus, the kernel
self-induces rstat lock contention by waking all kswapd threads at
once. The detail this patch leverages is that balance_pgdat() has a
NULL target_mem_cgroup, which causes mem_cgroup_flush_stats() to
flush with root_mem_cgroup.
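
For context, the code path being leveraged (paraphrased from
mm/memcontrol.c as of commit 7d7ef0a4686a; names may differ in the
current tree, so treat this as a sketch) looks like:

	/* Paraphrase of mm/memcontrol.c after 7d7ef0a4686a */
	void mem_cgroup_flush_stats(struct mem_cgroup *memcg)
	{
		if (mem_cgroup_disabled())
			return;

		if (!memcg)
			memcg = root_mem_cgroup; /* balance_pgdat() case */

		if (memcg_vmstats_needs_flush(memcg->vmstats))
			do_flush_stats(memcg);
	}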

To avoid this kind of thundering herd problem, the kernel previously
had a "stats_flush_ongoing" concept, but this was removed as part of
commit 7d7ef0a4686a ("mm: memcg: restore subtree stats flushing").
This patch reintroduces and generalizes the concept to apply to all
users of cgroup rstat, not just memcg.

If there is an ongoing rstat flush and the current cgroup is a
descendant of the cgroup being flushed, then doing the flush again is
unnecessary. For callers to still see updated stats, wait for the
ongoing flusher to complete before returning, but with a timeout, as
the stats are already inaccurate given that updaters keep running.
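
The wait relies on the standard <linux/completion.h> API; below is a
minimal sketch (an illustrative helper with hypothetical names, not
part of the patch) of the return-value semantics the caller depends
on:

	#include <linux/completion.h>
	#include <linux/jiffies.h>

	/* Stand-in for the patch's cgrp_rstat_flusher_done */
	static DECLARE_COMPLETION(flusher_done);

	/* Returns true if the ongoing flusher completed within the
	 * timeout.  wait_for_completion_interruptible_timeout()
	 * returns >0 (jiffies left) on completion, 0 on timeout, and
	 * -ERESTARTSYS when interrupted by a signal; in the last two
	 * cases the caller proceeds with slightly stale stats.
	 */
	static bool example_wait_for_flusher(void)
	{
		long ret = wait_for_completion_interruptible_timeout(
				&flusher_done, msecs_to_jiffies(100));

		return ret > 0;
	}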

Fixes: 7d7ef0a4686a ("mm: memcg: restore subtree stats flushing")
Signed-off-by: Jesper Dangaard Brouer <hawk@...nel.org>
---
V2: https://lore.kernel.org/all/171923011608.1500238.3591002573732683639.stgit@firesoul/
V1: https://lore.kernel.org/all/171898037079.1222367.13467317484793748519.stgit@firesoul/
RFC: https://lore.kernel.org/all/171895533185.1084853.3033751561302228252.stgit@firesoul/

 kernel/cgroup/rstat.c |   63 ++++++++++++++++++++++++++++++++++++++++++++++++++++-----------
 1 file changed, 52 insertions(+), 11 deletions(-)

diff --git a/kernel/cgroup/rstat.c b/kernel/cgroup/rstat.c
index 2a42be3a9bb3..f21e6b1109a4 100644
--- a/kernel/cgroup/rstat.c
+++ b/kernel/cgroup/rstat.c
@@ -2,6 +2,7 @@
 #include "cgroup-internal.h"
 
 #include <linux/sched/cputime.h>
+#include <linux/completion.h>
 
 #include <linux/bpf.h>
 #include <linux/btf.h>
@@ -11,6 +12,8 @@
 
 static DEFINE_SPINLOCK(cgroup_rstat_lock);
 static DEFINE_PER_CPU(raw_spinlock_t, cgroup_rstat_cpu_lock);
+static struct cgroup *cgrp_rstat_ongoing_flusher;
+static DECLARE_COMPLETION(cgrp_rstat_flusher_done);
 
 static void cgroup_base_stat_flush(struct cgroup *cgrp, int cpu);
 
@@ -346,6 +349,46 @@ static void cgroup_rstat_flush_locked(struct cgroup *cgrp)
 	}
 }
 
+#define MAX_WAIT	msecs_to_jiffies(100)
+/* Trylock helper that also checks for an ongoing flusher */
+static bool cgroup_rstat_trylock_flusher(struct cgroup *cgrp)
+{
+	bool locked;
+
+retry:
+	locked = __cgroup_rstat_trylock(cgrp, -1);
+	if (!locked) {
+		struct cgroup *cgrp_ongoing;
+
+		/* Lock is contended; let's check whether an ongoing flusher
+		 * is already taking care of this, if we are a descendant.
+		 */
+		cgrp_ongoing = READ_ONCE(cgrp_rstat_ongoing_flusher);
+		if (!cgrp_ongoing)
+			goto retry;
+
+		if (cgroup_is_descendant(cgrp, cgrp_ongoing)) {
+			wait_for_completion_interruptible_timeout(
+				&cgrp_rstat_flusher_done, MAX_WAIT);
+
+			return false;
+		}
+		__cgroup_rstat_lock(cgrp, -1, false);
+	}
+	/* Obtained lock, record this cgrp as the ongoing flusher */
+	reinit_completion(&cgrp_rstat_flusher_done);
+	WRITE_ONCE(cgrp_rstat_ongoing_flusher, cgrp);
+
+	return true; /* locked */
+}
+
+static void cgroup_rstat_unlock_flusher(struct cgroup *cgrp)
+{
+	WRITE_ONCE(cgrp_rstat_ongoing_flusher, NULL);
+	complete_all(&cgrp_rstat_flusher_done);
+	__cgroup_rstat_unlock(cgrp, -1);
+}
+
 /**
  * cgroup_rstat_flush - flush stats in @cgrp's subtree
  * @cgrp: target cgroup
@@ -361,18 +404,13 @@ static void cgroup_rstat_flush_locked(struct cgroup *cgrp)
  */
 __bpf_kfunc void cgroup_rstat_flush(struct cgroup *cgrp)
 {
-	bool locked;
-
 	might_sleep();
 
-	locked = __cgroup_rstat_trylock(cgrp, -1);
-	if (!locked) {
-		/* Opportunity to ongoing flush detection */
-		__cgroup_rstat_lock(cgrp, -1, false);
-	}
+	if (!cgroup_rstat_trylock_flusher(cgrp))
+		return;
 
 	cgroup_rstat_flush_locked(cgrp);
-	__cgroup_rstat_unlock(cgrp, -1);
+	cgroup_rstat_unlock_flusher(cgrp);
 }
 
 /**
@@ -388,8 +426,11 @@ void cgroup_rstat_flush_hold(struct cgroup *cgrp)
 	__acquires(&cgroup_rstat_lock)
 {
 	might_sleep();
-	__cgroup_rstat_lock(cgrp, -1, true);
-	cgroup_rstat_flush_locked(cgrp);
+
+	if (cgroup_rstat_trylock_flusher(cgrp))
+		cgroup_rstat_flush_locked(cgrp);
+	else
+		__cgroup_rstat_lock(cgrp, -1, true);
 }
 
 /**
@@ -399,7 +440,7 @@ void cgroup_rstat_flush_hold(struct cgroup *cgrp)
 void cgroup_rstat_flush_release(struct cgroup *cgrp)
 	__releases(&cgroup_rstat_lock)
 {
-	__cgroup_rstat_unlock(cgrp, -1);
+	cgroup_rstat_unlock_flusher(cgrp);
 }
 
 int cgroup_rstat_init(struct cgroup *cgrp)


