Message-ID: <xhsmhbjj2onfp.mognet@vschneid-thinkpadt14sgen2i.remote.csb>
Date: Fri, 09 Jan 2026 15:44:42 +0100
From: Valentin Schneider <vschneid@...hat.com>
To: Shrikanth Hegde <sshegde@...ux.ibm.com>, mingo@...nel.org,
peterz@...radead.org, vincent.guittot@...aro.org,
linux-kernel@...r.kernel.org
Cc: sshegde@...ux.ibm.com, kprateek.nayak@....com, juri.lelli@...hat.com,
tglx@...utronix.de, dietmar.eggemann@....com, anna-maria@...utronix.de,
frederic@...nel.org, wangyang.guo@...el.com
Subject: Re: [PATCH v3 3/3] sched/fair: Remove nohz.nr_cpus and use weight
of cpumask instead
On 07/01/26 12:21, Shrikanth Hegde wrote:
> nohz.nr_cpus was observed as contended cacheline when running
> enterprise workload on large systems.
>
> Fundamental scalability challenge with nohz.idle_cpus_mask
> and nohz.nr_cpus is the following:
>
> (1) nohz_balancer_kick() observes (reads) nohz.nr_cpus
> (or nohz.idle_cpus_mask) and nohz.has_blocked to see whether there's
> any nohz balancing work to do, on every scheduler tick.
>
> (2) nohz_balance_enter_idle() and nohz_balance_exit_idle()
> (through nohz_balancer_kick() via sched_tick()) modify (write)
> nohz.nr_cpus (and/or nohz.idle_cpus_mask) and nohz.has_blocked.
>
My first reaction on reading the whole changelog was: "but .nr_cpus and
.idle_cpus_mask are in the same cacheline?!", which, as Ingo pointed out
further down the thread [1], isn't true under CONFIG_CPUMASK_OFFSTACK.
So this change effectively gets rid of the dirtying of one extra
cacheline during idle entry/exit.
[1]: http://lore.kernel.org/r/aS3za7X9BLS5rg65@gmail.com
I'd suggest adding something like so in this part of the changelog:
"""
Note that nohz.idle_cpus_mask and nohz.nr_cpus reside in the same
cacheline; however, under CONFIG_CPUMASK_OFFSTACK the backing storage
for nohz.idle_cpus_mask lives elsewhere. This implies two separate
cachelines being dirtied upon idle entry / exit.
"""