Message-ID: <2de2850c-c844-4a75-884a-18d552fcb846@redhat.com>
Date: Fri, 21 Jun 2024 12:08:00 -0400
From: Waiman Long <longman@...hat.com>
To: Jesper Dangaard Brouer <hawk@...nel.org>, tj@...nel.org,
cgroups@...r.kernel.org, yosryahmed@...gle.com, shakeel.butt@...ux.dev
Cc: hannes@...xchg.org, lizefan.x@...edance.com, kernel-team@...udflare.com,
linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v1] cgroup/rstat: Avoid thundering herd problem by kswapd
across NUMA nodes
On 6/21/24 10:32, Jesper Dangaard Brouer wrote:
> Avoid lock contention on the global cgroup rstat lock caused by kswapd
> starting on all NUMA nodes simultaneously. At Cloudflare, we observed
> massive issues due to kswapd and the specific mem_cgroup_flush_stats()
> call inlined in shrink_node, which takes the rstat lock.
>
> On our 12 NUMA node machines, each with a kswapd kthread per NUMA node,
> we noted severe lock contention on the rstat lock. This contention
> causes 12 CPUs to waste cycles spinning every time kswapd runs.
> Fleet-wide stats (/proc/N/schedstat) for kthreads revealed that we are
> burning an average of 20,000 CPU cores on kswapd, primarily due to
> spinning on the rstat lock.
>
> To help reviewers follow the code: when the Per-CPU-Pages (PCP) freelist
> is empty, __alloc_pages_slowpath() calls wake_all_kswapds(), causing all
> kswapdN threads to wake up simultaneously. Each kswapd thread invokes
> shrink_node() (via balance_pgdat()), triggering the cgroup rstat flush
> operation as part of its work. This results in kernel self-induced rstat
> lock contention by waking up all kswapd threads simultaneously. The fix
> leverages this detail: balance_pgdat() leaves target_mem_cgroup as NULL,
> which causes mem_cgroup_flush_stats() to flush with root_mem_cgroup.
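
IIUC, the call chain being described is roughly the following (sketched
from mainline mm code; simplified, only the calls relevant to the
contention are shown, and exact call sites may differ by version):

    /* mm/page_alloc.c: PCP freelist empty on the allocation slow path */
    __alloc_pages_slowpath()
        -> wake_all_kswapds()           /* wakes kswapdN on every node */

    /* mm/vmscan.c: per-NUMA-node kswapd kthread */
    kswapd()
        -> balance_pgdat()              /* sc.target_mem_cgroup == NULL */
            -> shrink_node()
                -> mem_cgroup_flush_stats()
                   /* NULL target falls back to root_mem_cgroup and
                    * takes the global cgroup_rstat_lock, on all 12
                    * nodes at once */
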
>
> To resolve the kswapd issue, we generalize the "stats_flush_ongoing"
> concept to apply to all users of cgroup rstat, not just memcg. The
> memcg-only version of this concept was removed in commit 7d7ef0a4686a
> ("mm: memcg: restore subtree stats flushing"). If an rstat flush of the
> root cgroup is already ongoing, a new root-level flush is skipped. This
> is effective because kswapd operates on the root tree, sufficiently
> mitigating the thundering herd problem.
>
> This lowers contention on the global rstat lock, although only for
> root-cgroup flushes. Flushing cgroup subtrees can still lead to lock
> contention.
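
The ongoing-flusher idea is essentially the pattern below (a minimal
userspace analogue using C11 atomics, not the patch itself; do_flush()
stands in for cgroup_rstat_flush_locked(), and the root-only gating the
patch adds is omitted):

    #include <stdatomic.h>
    #include <stdbool.h>

    static atomic_int flush_ongoing;   /* 0 = idle, 1 = flush in progress */

    static void do_flush(void) { /* stand-in for the real tree flush */ }

    /* Returns true if this caller performed the flush itself. */
    static bool try_flush(void)
    {
        /* Someone is already flushing the whole tree; our updates are
         * covered by that flush, so skip. */
        if (atomic_load(&flush_ongoing))
            return false;

        /* Race for the ongoing-flusher slot; the loser also skips. */
        if (atomic_exchange(&flush_ongoing, 1))
            return false;

        do_flush();
        atomic_store(&flush_ongoing, 0);
        return true;
    }
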
>
> Fixes: 7d7ef0a4686a ("mm: memcg: restore subtree stats flushing")
> Signed-off-by: Jesper Dangaard Brouer <hawk@...nel.org>
> ---
> include/linux/cgroup.h | 5 +++++
> kernel/cgroup/rstat.c | 28 ++++++++++++++++++++++++++++
> 2 files changed, 33 insertions(+)
>
> diff --git a/include/linux/cgroup.h b/include/linux/cgroup.h
> index 2150ca60394b..ad41cca5c3b6 100644
> --- a/include/linux/cgroup.h
> +++ b/include/linux/cgroup.h
> @@ -499,6 +499,11 @@ static inline struct cgroup *cgroup_parent(struct cgroup *cgrp)
> return NULL;
> }
>
> +static inline bool cgroup_is_root(struct cgroup *cgrp)
> +{
> + return cgroup_parent(cgrp) == NULL;
> +}
> +
> /**
> * cgroup_is_descendant - test ancestry
> * @cgrp: the cgroup to be tested
> diff --git a/kernel/cgroup/rstat.c b/kernel/cgroup/rstat.c
> index fb8b49437573..5aba95e92d31 100644
> --- a/kernel/cgroup/rstat.c
> +++ b/kernel/cgroup/rstat.c
> @@ -11,6 +11,7 @@
>
> static DEFINE_SPINLOCK(cgroup_rstat_lock);
> static DEFINE_PER_CPU(raw_spinlock_t, cgroup_rstat_cpu_lock);
> +static atomic_t root_rstat_flush_ongoing = ATOMIC_INIT(0);
>
> static void cgroup_base_stat_flush(struct cgroup *cgrp, int cpu);
>
> @@ -350,8 +351,25 @@ __bpf_kfunc void cgroup_rstat_flush(struct cgroup *cgrp)
> {
> might_sleep();
>
> + /*
> + * This avoids the thundering herd problem on the global rstat lock.
> + * If a flush of the entire tree is already in progress, skip this flush.
> + */
> + if (atomic_read(&root_rstat_flush_ongoing))
> + return;
> +
> + /* Grab the right to be the ongoing flusher; return if we lose the race */
> + if (cgroup_is_root(cgrp) &&
> + atomic_xchg(&root_rstat_flush_ongoing, 1))
> + return;
> +
> __cgroup_rstat_lock(cgrp, -1);
> +
> cgroup_rstat_flush_locked(cgrp);
> +
> + if (cgroup_is_root(cgrp))
> + atomic_set(&root_rstat_flush_ongoing, 0);
> +
> __cgroup_rstat_unlock(cgrp, -1);
> }
>
> @@ -362,13 +380,20 @@ __bpf_kfunc void cgroup_rstat_flush(struct cgroup *cgrp)
> * Flush stats in @cgrp's subtree and prevent further flushes. Must be
> * paired with cgroup_rstat_flush_release().
> *
> + * Current invariant: this is never called with the root cgrp.
> + *
> * This function may block.
> */
> void cgroup_rstat_flush_hold(struct cgroup *cgrp)
> __acquires(&cgroup_rstat_lock)
> {
> might_sleep();
> +
> __cgroup_rstat_lock(cgrp, -1);
> +
> + if (atomic_read(&root_rstat_flush_ongoing))
> + return;
> +
> cgroup_rstat_flush_locked(cgrp);
> }
>
> @@ -379,6 +404,9 @@ void cgroup_rstat_flush_hold(struct cgroup *cgrp)
> void cgroup_rstat_flush_release(struct cgroup *cgrp)
> __releases(&cgroup_rstat_lock)
> {
> + if (cgroup_is_root(cgrp))
> + atomic_set(&root_rstat_flush_ongoing, 0);
> +
> __cgroup_rstat_unlock(cgrp, -1);
> }
Since neither cgroup_rstat_flush_hold() nor cgroup_rstat_flush_release()
is ever called with the root cgroup, the cgroup_rstat_flush_hold() hunk
is essentially dead code.
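
For reference, the only caller of cgroup_rstat_flush_hold() I am aware
of is cgroup_base_stat_cputime_show(), which handles the root cgroup on
a separate branch, roughly like this (paraphrased from mainline
kernel/cgroup/rstat.c; details elided):

    /* cgroup_base_stat_cputime_show(), roughly */
    if (cgroup_parent(cgrp)) {
        cgroup_rstat_flush_hold(cgrp);
        /* ... read cgrp->bstat ... */
        cgroup_rstat_flush_release(cgrp);
    } else {
        /* root reads the aggregate directly, no rstat flush */
        root_cgroup_cputime(&bstat);
    }
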
Cheers,
Longman