Message-ID: <20260119104739.439799-2-vishalc@linux.ibm.com>
Date: Mon, 19 Jan 2026 16:17:40 +0530
From: Vishal Chourasia <vishalc@...ux.ibm.com>
To: peterz@...radead.org
Cc: boqun.feng@...il.com, frederic@...nel.org, joelagnelf@...dia.com,
        josh@...htriplett.org, linux-kernel@...r.kernel.org,
        neeraj.upadhyay@...nel.org, paulmck@...nel.org, rcu@...r.kernel.org,
        rostedt@...dmis.org, srikar@...ux.ibm.com, sshegde@...ux.ibm.com,
        tglx@...utronix.de, urezki@...il.com, samir@...ux.ibm.com,
        vishalc@...ux.ibm.com
Subject: [PATCH] cpuhp: Expedite synchronize_rcu during SMT switch

Expedite synchronize_rcu() in the cpuhp_smt_[enable|disable] paths to
speed up bulk SMT mode switches.

Bulk CPU hotplug operations, such as switching SMT modes across all
cores, require hotplugging multiple CPUs in rapid succession. On large
systems this takes significant time, and the cost grows with the number
of CPUs that must be hotplugged during the SMT switch, leading to
substantial delays on high-core-count machines. Analysis [1] shows that
the majority of this time is spent waiting for synchronize_rcu().
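
For context, the expedite knob used below is the existing nesting
counter exported by the RCU core. A minimal sketch of the mechanism
(simplified from kernel/rcu/update.c; not the verbatim upstream code)
looks roughly like this:

	/* Sketch only: nesting count of outstanding expedite requests. */
	static atomic_t rcu_expedited_nesting;

	/* Make subsequent grace periods use the expedited path. */
	void rcu_expedite_gp(void)
	{
		atomic_inc(&rcu_expedited_nesting);
	}

	/* Undo a prior rcu_expedite_gp(); calls must be balanced. */
	void rcu_unexpedite_gp(void)
	{
		atomic_dec(&rcu_expedited_nesting);
	}

	/* Consulted by synchronize_rcu() to choose the fast path. */
	bool rcu_gp_is_expedited(void)
	{
		return rcu_expedited || atomic_read(&rcu_expedited_nesting);
	}

Bracketing the hotplug loops with rcu_expedite_gp() and
rcu_unexpedite_gp() therefore turns every synchronize_rcu() issued
while offlining or onlining CPUs into its much faster expedited
variant, without changing the global rcu_expedited default.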

SMT switch is a user-initiated administrative task, so it should
complete as quickly as possible.

Performance data on a PPC64 system with 2048 CPUs:

+ ppc64_cpu --smt=1 (SMT8 to SMT1)
Before: real 30m53.194s
After:  real 6m4.678s  # ~5x improvement

+ ppc64_cpu --smt=8 (SMT1 to SMT8)
Before: real 49m5.920s
After:  real 36m47.798s  # ~1.3x improvement

[1] https://lore.kernel.org/all/5f2ab8a44d685701fe36cdaa8042a1aef215d10d.camel@linux.vnet.ibm.com

Signed-off-by: Vishal Chourasia <vishalc@...ux.ibm.com>
Tested-by: Samir M <samir@...ux.ibm.com>

---
 include/linux/rcupdate.h | 3 +++
 kernel/cpu.c             | 4 ++++
 2 files changed, 7 insertions(+)

diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
index c5b30054cd01..03c06cfb2b6d 100644
--- a/include/linux/rcupdate.h
+++ b/include/linux/rcupdate.h
@@ -1192,6 +1192,9 @@ rcu_head_after_call_rcu(struct rcu_head *rhp, rcu_callback_t f)
 extern int rcu_expedited;
 extern int rcu_normal;
 
+extern void rcu_expedite_gp(void);
+extern void rcu_unexpedite_gp(void);
+
 DEFINE_LOCK_GUARD_0(rcu,
 	do {
 		rcu_read_lock();
diff --git a/kernel/cpu.c b/kernel/cpu.c
index 8df2d773fe3b..a264d7170842 100644
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -2669,6 +2669,7 @@ int cpuhp_smt_disable(enum cpuhp_smt_control ctrlval)
 	int cpu, ret = 0;
 
 	cpu_maps_update_begin();
+	rcu_expedite_gp();
 	for_each_online_cpu(cpu) {
 		if (topology_is_primary_thread(cpu))
 			continue;
@@ -2698,6 +2699,7 @@ int cpuhp_smt_disable(enum cpuhp_smt_control ctrlval)
 	}
 	if (!ret)
 		cpu_smt_control = ctrlval;
+	rcu_unexpedite_gp();
 	cpu_maps_update_done();
 	return ret;
 }
@@ -2716,6 +2718,7 @@ int cpuhp_smt_enable(void)
 
 	cpu_maps_update_begin();
 	cpu_smt_control = CPU_SMT_ENABLED;
+	rcu_expedite_gp();
 	for_each_present_cpu(cpu) {
 		/* Skip online CPUs and CPUs on offline nodes */
 		if (cpu_online(cpu) || !node_online(cpu_to_node(cpu)))
@@ -2728,6 +2731,7 @@ int cpuhp_smt_enable(void)
 		/* See comment in cpuhp_smt_disable() */
 		cpuhp_online_cpu_device(cpu);
 	}
+	rcu_unexpedite_gp();
 	cpu_maps_update_done();
 	return ret;
 }
-- 
2.52.0

