Message-ID: <20251029084725.GC988547@noisy.programming.kicks-ass.net>
Date: Wed, 29 Oct 2025 09:47:25 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Tim Chen <tim.c.chen@...ux.intel.com>
Cc: Ingo Molnar <mingo@...nel.org>, Chen Yu <yu.c.chen@...el.com>,
	Doug Nelson <doug.nelson@...el.com>,
	Mohini Narkhede <mohini.narkhede@...el.com>,
	linux-kernel@...r.kernel.org,
	Vincent Guittot <vincent.guittot@...aro.org>,
	Shrikanth Hegde <sshegde@...ux.ibm.com>,
	K Prateek Nayak <kprateek.nayak@....com>
Subject: Re: [PATCH v2] sched/fair: Skip sched_balance_running cmpxchg when
 balance is not due
On Tue, Oct 28, 2025 at 01:23:30PM -0700, Tim Chen wrote:
> The NUMA sched domain sets the SD_SERIALIZE flag by default, allowing
> only one NUMA load balancing operation to run system-wide at a time.
> 
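
(For context: that default comes from sd_init() in kernel/sched/topology.c,
which for NUMA levels roughly does the below -- a simplified excerpt, not
the full function:)

	} else if (sd->flags & SD_NUMA) {
		sd->cache_nice_tries = 2;
		sd->flags &= ~SD_PREFER_SIBLING;
		sd->flags |= SD_SERIALIZE;	/* one NUMA balance at a time */
	}
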
> Currently, each MC group leader in a NUMA domain attempts to acquire
> the global sched_balance_running flag via cmpxchg() before checking
> whether load balancing is due or whether it is the designated leader for
> that NUMA domain. On systems with a large number of cores, this causes
> significant cache contention on the shared sched_balance_running flag.
> 
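
(To illustrate the ordering in question: the serialized leg of
sched_balance_domains() currently looks roughly like the sketch below --
simplified, with the per-domain loop, cost accounting and interval
clamping elided, so not the literal kernel code:)

	int need_serialize = sd->flags & SD_SERIALIZE;

	if (need_serialize) {
		/* global flag is taken before we know balance is even due */
		if (atomic_cmpxchg_acquire(&sched_balance_running, 0, 1))
			goto out;
	}

	if (time_after_eq(jiffies, sd->last_balance + interval)) {
		sched_balance_rq(rq, cpu, sd, idle, &continue_balancing);
		sd->last_balance = jiffies;
	}

	if (need_serialize)
		atomic_set_release(&sched_balance_running, 0);

Every MC group leader that reaches this code bounces the
sched_balance_running cacheline with the lock'ed cmpxchg, whether or not
it subsequently finds any balancing to do.
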
> This patch reduces unnecessary cmpxchg() operations by first checking
> whether the balance interval has expired. If load balancing is not due,
> the attempt to acquire sched_balance_running is skipped entirely.
> 
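
(With the interval check hoisted, the flow becomes roughly the following --
again a simplified sketch, not the literal patch:)

	if (!time_after_eq(jiffies, sd->last_balance + interval))
		goto out;	/* not due: no cmpxchg, no cacheline bounce */

	if (need_serialize &&
	    atomic_cmpxchg_acquire(&sched_balance_running, 0, 1))
		goto out;	/* another CPU holds the serialize flag */

	sched_balance_rq(rq, cpu, sd, idle, &continue_balancing);
	sd->last_balance = jiffies;

	if (need_serialize)
		atomic_set_release(&sched_balance_running, 0);

The early-out reads only jiffies and the per-domain sd->last_balance, so
CPUs whose balance interval has not expired never perform a locked
read-modify-write on the shared flag.
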
> On a 2-socket Granite Rapids system with sub-NUMA clustering enabled,
> running an OLTP workload, 7.8% of total CPU cycles were spent in
> sched_balance_domains() contending on sched_balance_running before
> this change.
> 
>          : 104              static __always_inline int arch_atomic_cmpxchg(atomic_t *v, int old, int new)
>          : 105              {
>          : 106              return arch_cmpxchg(&v->counter, old, new);
>     0.00 :   ffffffff81326e6c:       xor    %eax,%eax
>     0.00 :   ffffffff81326e6e:       mov    $0x1,%ecx
>     0.00 :   ffffffff81326e73:       lock cmpxchg %ecx,0x2394195(%rip)        # ffffffff836bb010 <sched_balance_running>
>          : 110              sched_balance_domains():
>          : 12234            if (atomic_cmpxchg_acquire(&sched_balance_running, 0, 1))
>    99.39 :   ffffffff81326e7b:       test   %eax,%eax
>     0.00 :   ffffffff81326e7d:       jne    ffffffff81326e99 <sched_balance_domains+0x209>
>          : 12238            if (time_after_eq(jiffies, sd->last_balance + interval)) {
>     0.00 :   ffffffff81326e7f:       mov    0x14e2b3a(%rip),%rax        # ffffffff828099c0 <jiffies_64>
>     0.00 :   ffffffff81326e86:       sub    0x48(%r14),%rax
>     0.00 :   ffffffff81326e8a:       cmp    %rdx,%rax
> 
> After applying this fix, sched_balance_domains() is gone from
> the profile and there is an 8% throughput improvement.
> 
this..
> v2:
> 1. Rearrange the patch to get rid of an indent level per Peter's
>    suggestion.
> 2. Updated the data from new run by OLTP team.
> 
> link to v1: https://lore.kernel.org/lkml/e27d5dcb724fe46acc24ff44670bc4bb5be21d98.1759445926.git.tim.c.chen@linux.intel.com/
... stuff goes under the '---' sign.
Also, what happened to my other suggestion:
  https://lkml.kernel.org/r/20251014092436.GK4067720@noisy.programming.kicks-ass.net
? That seemed like a better place to put things.