Message-ID: <aWU21GJBx5WC0Gwv@linux.ibm.com>
Date: Mon, 12 Jan 2026 23:30:52 +0530
From: Vishal Chourasia <vishalc@...ux.ibm.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: rcu@...r.kernel.org, linux-kernel@...r.kernel.org, paulmck@...nel.org,
        frederic@...nel.org, neeraj.upadhyay@...nel.org, joelagnelf@...dia.com,
        josh@...htriplett.org, boqun.feng@...il.com, urezki@...il.com,
        rostedt@...dmis.org, tglx@...utronix.de, sshegde@...ux.ibm.com,
        srikar@...ux.ibm.com
Subject: Re: [PATCH] cpuhp: Expedite synchronize_rcu during CPU hotplug
 operations

Hello Peter,

On Mon, Jan 12, 2026 at 03:24:40PM +0100, Peter Zijlstra wrote:
> On Mon, Jan 12, 2026 at 03:13:33PM +0530, Vishal Chourasia wrote:
> > Bulk CPU hotplug operations—such as switching SMT modes across all
> > cores—require hotplugging multiple CPUs in rapid succession. On large
> > systems, this process takes significant time, increasing as the number
> > of CPUs grows, leading to substantial delays on high-core-count
> > machines. Analysis [1] reveals that the majority of this time is spent
> > waiting for synchronize_rcu().
> > 
> > Expedite synchronize_rcu() during the hotplug path to accelerate the
> > operation. Since CPU hotplug is a user-initiated administrative task,
> > it should complete as quickly as possible.
> > 
> > Performance data on a PPC64 system with 400 CPUs:
> > 
> > + ppc64_cpu --smt=1 (SMT8 to SMT1)
> > Before: real 1m14.792s
> > After:  real 0m03.205s  # ~23x improvement
> > 
> > + ppc64_cpu --smt=8 (SMT1 to SMT8)
> > Before: real 2m27.695s
> > After:  real 0m02.510s  # ~58x improvement
> > 
> 
> But who cares? It's not like you'd *ever* do this, right?
Users dynamically adjust SMT modes to optimize performance for the
workload being run. And yes, it doesn't happen often, but when it does,
on machines with >= 1920 CPUs it takes more than 20 minutes to finish.
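
For reference, the change conceptually amounts to bracketing the hotplug
operation with the existing expedited-grace-period knobs, so that every
synchronize_rcu() issued while hotplug is in flight runs as
synchronize_rcu_expedited(). A rough sketch, not the exact patch (the
call site and the do_hotplug_op() helper are illustrative assumptions):

  /*
   * Sketch only: rcu_expedite_gp()/rcu_unexpedite_gp() are existing RCU
   * APIs that make subsequent synchronize_rcu() calls use expedited
   * grace periods. do_hotplug_op() is a hypothetical stand-in for the
   * real cpuhp state-machine body.
   */
  #include <linux/rcupdate.h>

  static int cpuhp_run_expedited(int (*do_hotplug_op)(unsigned int cpu),
                                 unsigned int cpu)
  {
          int ret;

          rcu_expedite_gp();      /* synchronize_rcu() now runs expedited */
          ret = do_hotplug_op(cpu);
          rcu_unexpedite_gp();    /* restore normal grace periods */

          return ret;
  }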

- vishal
