Message-ID: <Z9r70jKJLPdHyihM@google.com>
Date: Wed, 19 Mar 2025 17:16:02 +0000
From: Yosry Ahmed <yosry.ahmed@...ux.dev>
To: Greg Thelen <gthelen@...gle.com>
Cc: Tejun Heo <tj@...nel.org>, Johannes Weiner <hannes@...xchg.org>,
Michal Koutný <mkoutny@...e.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Eric Dumazet <edumazet@...gle.com>, cgroups@...r.kernel.org,
linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] cgroup/rstat: avoid disabling irqs for O(num_cpu)

On Wed, Mar 19, 2025 at 12:13:30AM -0700, Greg Thelen wrote:
> From: Eric Dumazet <edumazet@...gle.com>
>
> cgroup_rstat_flush_locked() grabs the irq safe cgroup_rstat_lock while
> iterating all possible cpus. It only drops the lock if there is
> scheduler or spin lock contention. If neither, then interrupts can be
> disabled for a long time. On large machines this can disable interrupts
> for a long enough time to drop network packets. On 400+ CPU machines
> I've seen interrupts disabled for over 40 msec.
>
> Prevent rstat from disabling interrupts while processing all possible
> cpus. Instead drop and reacquire cgroup_rstat_lock for each cpu. This
> approach was previously discussed in
> https://lore.kernel.org/lkml/ZBz%2FV5a7%2F6PZeM7S@slm.duckdns.org/,
> though this was in the context of a non-irq rstat spin lock.
>
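To make the locking pattern concrete, here is a minimal userspace analogy
with a pthread mutex (illustrative only, not the kernel code): instead of
holding the lock across the whole loop and yielding only under contention,
the lock is dropped and reacquired on every iteration so that no single
hold spans O(num_cpu) work.

/* Not kernel code: a userspace sketch of the before/after locking shape. */
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

#define NR_ITEMS 400                    /* stand-in for the per-CPU loop */

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static long per_item_data[NR_ITEMS];
static long total;

/* Before: one lock hold covers every item, so the hold time is O(NR_ITEMS). */
static void flush_holding_lock(void)
{
        pthread_mutex_lock(&lock);
        for (int i = 0; i < NR_ITEMS; i++)
                total += per_item_data[i];
        pthread_mutex_unlock(&lock);
}

/* After: drop and retake the lock per item, bounding each hold to one item. */
static void flush_bounded_hold(void)
{
        for (int i = 0; i < NR_ITEMS; i++) {
                pthread_mutex_lock(&lock);
                total += per_item_data[i];
                pthread_mutex_unlock(&lock);
                sched_yield();          /* roughly cond_resched()/cpu_relax() */
        }
}

int main(void)
{
        flush_holding_lock();
        flush_bounded_hold();
        printf("total = %ld\n", total);
        return 0;
}
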
> Benchmark this change with:
> 1) a single stat_reader process with 400 threads, each reading a test
> memcg's memory.stat repeatedly for 10 seconds.
> 2) 400 memory hog processes running in the test memcg and repeatedly
> charging memory until oom killed. Then they repeat charging and oom
> killing.
>
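The benchmark source is not included in the patch; a rough sketch of what
the stat_reader side could look like, assuming a test memcg at the
hypothetical path /sys/fs/cgroup/test (not the actual tool used):

#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define NR_THREADS   400
#define DURATION_SEC 10
/* Hypothetical location of the test memcg's stat file. */
#define STAT_PATH    "/sys/fs/cgroup/test/memory.stat"

static volatile int stop;
static unsigned long reads[NR_THREADS];

static void *reader(void *arg)
{
        long idx = (long)arg;
        char buf[8192];

        while (!stop) {
                int fd = open(STAT_PATH, O_RDONLY);

                if (fd < 0)
                        break;
                while (read(fd, buf, sizeof(buf)) > 0)
                        ;
                close(fd);
                reads[idx]++;
        }
        return NULL;
}

int main(void)
{
        pthread_t threads[NR_THREADS];
        unsigned long total = 0;

        for (long i = 0; i < NR_THREADS; i++)
                pthread_create(&threads[i], NULL, reader, (void *)i);
        sleep(DURATION_SEC);
        stop = 1;
        for (long i = 0; i < NR_THREADS; i++) {
                pthread_join(threads[i], NULL);
                total += reads[i];
        }
        printf("%lu memory.stat reads in %d seconds\n", total, DURATION_SEC);
        return 0;
}
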
> On v6.14-rc6 with CONFIG_IRQSOFF_TRACER, running stat_reader and the
> memory hogs shows interrupts disabled by rstat for 45341 usec:
> # => started at: _raw_spin_lock_irq
> # => ended at: cgroup_rstat_flush
> #
> #
> # _------=> CPU#
> # / _-----=> irqs-off/BH-disabled
> # | / _----=> need-resched
> # || / _---=> hardirq/softirq
> # ||| / _--=> preempt-depth
> # |||| / _-=> migrate-disable
> # ||||| / delay
> # cmd pid |||||| time | caller
> # \ / |||||| \ | /
> stat_rea-96532 52d.... 0us*: _raw_spin_lock_irq
> stat_rea-96532 52d.... 45342us : cgroup_rstat_flush
> stat_rea-96532 52d.... 45342us : tracer_hardirqs_on <-cgroup_rstat_flush
> stat_rea-96532 52d.... 45343us : <stack trace>
> => memcg1_stat_format
> => memory_stat_format
> => memory_stat_show
> => seq_read_iter
> => vfs_read
> => ksys_read
> => do_syscall_64
> => entry_SYSCALL_64_after_hwframe
>
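For reference, the irqs-off latency above comes from the irqsoff tracer;
a minimal sketch of driving it from C, assuming tracefs is mounted at
/sys/kernel/tracing and the program runs as root:

#include <stdio.h>

/* Write a short string to a tracefs control file. */
static int write_str(const char *path, const char *val)
{
        FILE *f = fopen(path, "w");

        if (!f)
                return -1;
        fputs(val, f);
        fclose(f);
        return 0;
}

int main(void)
{
        char buf[64];
        FILE *f;

        /* Arm the irqsoff tracer and reset the recorded maximum. */
        write_str("/sys/kernel/tracing/current_tracer", "irqsoff");
        write_str("/sys/kernel/tracing/tracing_max_latency", "0");

        /* ... run stat_reader and the memory hogs here ... */

        /* Worst irqs-off section in usec; details appear in the trace file. */
        f = fopen("/sys/kernel/tracing/tracing_max_latency", "r");
        if (f) {
                if (fgets(buf, sizeof(buf), f))
                        printf("max irqs-off latency: %s", buf);
                fclose(f);
        }
        return 0;
}

The detailed report quoted above is what the tracefs trace file prints
while a latency tracer is active.
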
> With this patch the CONFIG_IRQSOFF_TRACER doesn't find rstat to be the
> longest holder. The longest irqs-off holder has irqs disabled for
> 4142 usec, a huge reduction from the 45341 usec previously attributed
> to rstat.
>
> Running the stat_reader memory.stat benchmark for 10 seconds:
> - without memory hogs: 9.84M accesses => 12.7M accesses
> - with memory hogs: 9.46M accesses => 11.1M accesses
> The throughput of memory.stat access improves.
>
> The mode of memory.stat access latency, after grouping samples into
> power-of-2 buckets:
> - without memory hogs: 64 usec => 16 usec
> - with memory hogs: 64 usec => 8 usec
> The memory.stat latency improves.
>
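The exact bucketing is not spelled out; assuming each latency sample is
rounded up to the nearest power of two before taking the mode, a
post-processing helper could look like:

#include <stdio.h>

/* Round a latency sample up to its power-of-2 bucket (assumed scheme). */
static unsigned long pow2_bucket(unsigned long usec)
{
        unsigned long bucket = 1;

        while (bucket < usec)
                bucket <<= 1;
        return bucket;
}

int main(void)
{
        unsigned long samples[] = { 5, 9, 13, 17, 60 };

        for (size_t i = 0; i < sizeof(samples) / sizeof(samples[0]); i++)
                printf("%lu usec -> %lu usec bucket\n",
                       samples[i], pow2_bucket(samples[i]));
        return 0;
}
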
> Signed-off-by: Eric Dumazet <edumazet@...gle.com>
> Signed-off-by: Greg Thelen <gthelen@...gle.com>
> Tested-by: Greg Thelen <gthelen@...gle.com>
> ---
> kernel/cgroup/rstat.c | 12 +++++-------
> 1 file changed, 5 insertions(+), 7 deletions(-)
>
> diff --git a/kernel/cgroup/rstat.c b/kernel/cgroup/rstat.c
> index aac91466279f..976c24b3671a 100644
> --- a/kernel/cgroup/rstat.c
> +++ b/kernel/cgroup/rstat.c
> @@ -323,13 +323,11 @@ static void cgroup_rstat_flush_locked(struct cgroup *cgrp)
> rcu_read_unlock();
> }
>
> - /* play nice and yield if necessary */
> - if (need_resched() || spin_needbreak(&cgroup_rstat_lock)) {
> - __cgroup_rstat_unlock(cgrp, cpu);
> - if (!cond_resched())
> - cpu_relax();
> - __cgroup_rstat_lock(cgrp, cpu);
> - }
> + /* play nice and avoid disabling interrupts for a long time */
> + __cgroup_rstat_unlock(cgrp, cpu);
> + if (!cond_resched())
> + cpu_relax();
> + __cgroup_rstat_lock(cgrp, cpu);

This patch looks good as-is, so feel free to add:

Reviewed-by: Yosry Ahmed <yosry.ahmed@...ux.dev>

That being said, I think we can do further cleanups here now. We should
probably inline cgroup_rstat_flush_locked() into cgroup_rstat_flush(),
and move the lock hold and release into the loop.

cgroup_rstat_flush_hold() can simply call cgroup_rstat_flush() and then
hold the lock.

Something like this on top:

diff --git a/kernel/cgroup/rstat.c b/kernel/cgroup/rstat.c
index 976c24b3671a7..4f4b8d22555d7 100644
--- a/kernel/cgroup/rstat.c
+++ b/kernel/cgroup/rstat.c
@@ -299,17 +299,29 @@ static inline void __cgroup_rstat_unlock(struct cgroup *cgrp, int cpu_in_loop)
spin_unlock_irq(&cgroup_rstat_lock);
}
-/* see cgroup_rstat_flush() */
-static void cgroup_rstat_flush_locked(struct cgroup *cgrp)
- __releases(&cgroup_rstat_lock) __acquires(&cgroup_rstat_lock)
+/**
+ * cgroup_rstat_flush - flush stats in @cgrp's subtree
+ * @cgrp: target cgroup
+ *
+ * Collect all per-cpu stats in @cgrp's subtree into the global counters
+ * and propagate them upwards. After this function returns, all cgroups in
+ * the subtree have up-to-date ->stat.
+ *
+ * This also gets all cgroups in the subtree including @cgrp off the
+ * ->updated_children lists.
+ *
+ * This function may block.
+ */
+__bpf_kfunc void cgroup_rstat_flush(struct cgroup *cgrp)
{
int cpu;
- lockdep_assert_held(&cgroup_rstat_lock);
-
+ might_sleep();
for_each_possible_cpu(cpu) {
struct cgroup *pos = cgroup_rstat_updated_list(cgrp, cpu);
+ /* Reacquire for every CPU to avoid disabling IRQs for too long */
+ __cgroup_rstat_lock(cgrp, cpu);
for (; pos; pos = pos->rstat_flush_next) {
struct cgroup_subsys_state *css;
@@ -322,37 +334,12 @@ static void cgroup_rstat_flush_locked(struct cgroup *cgrp)
css->ss->css_rstat_flush(css, cpu);
rcu_read_unlock();
}
-
- /* play nice and avoid disabling interrupts for a long time */
__cgroup_rstat_unlock(cgrp, cpu);
if (!cond_resched())
cpu_relax();
- __cgroup_rstat_lock(cgrp, cpu);
}
}
-/**
- * cgroup_rstat_flush - flush stats in @cgrp's subtree
- * @cgrp: target cgroup
- *
- * Collect all per-cpu stats in @cgrp's subtree into the global counters
- * and propagate them upwards. After this function returns, all cgroups in
- * the subtree have up-to-date ->stat.
- *
- * This also gets all cgroups in the subtree including @cgrp off the
- * ->updated_children lists.
- *
- * This function may block.
- */
-__bpf_kfunc void cgroup_rstat_flush(struct cgroup *cgrp)
-{
- might_sleep();
-
- __cgroup_rstat_lock(cgrp, -1);
- cgroup_rstat_flush_locked(cgrp);
- __cgroup_rstat_unlock(cgrp, -1);
-}
-
/**
* cgroup_rstat_flush_hold - flush stats in @cgrp's subtree and hold
* @cgrp: target cgroup
@@ -365,9 +352,8 @@ __bpf_kfunc void cgroup_rstat_flush(struct cgroup *cgrp)
void cgroup_rstat_flush_hold(struct cgroup *cgrp)
__acquires(&cgroup_rstat_lock)
{
- might_sleep();
+ cgroup_rstat_flush(cgrp);
__cgroup_rstat_lock(cgrp, -1);
- cgroup_rstat_flush_locked(cgrp);
}
/**