Message-Id: <20260203112401.3889029-3-zhouchuyi@bytedance.com>
Date: Tue, 3 Feb 2026 19:23:52 +0800
From: "Chuyi Zhou" <zhouchuyi@...edance.com>
To: <tglx@...utronix.de>, <mingo@...hat.com>, <luto@...nel.org>,
<peterz@...radead.org>, <paulmck@...nel.org>, <muchun.song@...ux.dev>,
<bp@...en8.de>, <dave.hansen@...ux.intel.com>
Cc: <linux-kernel@...r.kernel.org>, "Chuyi Zhou" <zhouchuyi@...edance.com>
Subject: [PATCH 02/11] smp: Enable preemption early in smp_call_function_single
Currently, smp_call_function_single() disables preemption mainly for the
following reasons (a simplified sketch of the pre-patch flow follows the
list):
- To protect the per-cpu csd_data from concurrent modification by other
  tasks on the current CPU in the !wait case. In the wait case,
  synchronization is not a concern, since an on-stack csd is used.
- To prevent the remote online CPU from being offlined. Specifically, we
want to ensure that no new IPIs are queued after smpcfd_dying_cpu() has
finished.
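For reference, here is a simplified sketch of the pre-patch flow
(illustrative only, not the exact kernel source; error handling and the
internals of csd_lock()/csd_lock_wait() are elided):

    static DEFINE_PER_CPU_SHARED_ALIGNED(call_single_data_t, csd_data);

    int smp_call_function_single(int cpu, smp_call_func_t func, void *info,
                                 int wait)
    {
            call_single_data_t *csd, csd_stack;
            int err;

            /* Pin this task to the CPU and hold off CPU hotplug. */
            get_cpu();

            csd = &csd_stack;                       /* wait: private, on-stack */
            if (!wait) {
                    csd = this_cpu_ptr(&csd_data);  /* !wait: shared per-cpu slot */
                    csd_lock(csd);                  /* serialize with earlier users */
            }

            csd->func = func;
            csd->info = info;

            err = generic_exec_single(cpu, csd);

            if (wait)
                    csd_lock_wait(csd);             /* may spin for a long time */

            put_cpu();
            return err;
    }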
Disabling preemption for the entire execution is unnecessary; in
particular, the csd_lock_wait() part does not require preemption
protection. This patch enables preemption before csd_lock_wait() to
shrink the preemption-disabled critical section.
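For illustration, a hypothetical caller of the wait case (remote_work()
and run_on_cpu() are made up for this example) could look like:

    #include <linux/smp.h>

    static void remote_work(void *info)
    {
            int *out = info;

            /* Runs in IPI context on the target CPU. */
            *out = smp_processor_id();
    }

    static int run_on_cpu(int cpu)
    {
            int out = -1;
            int err;

            /* wait == 1: an on-stack csd is used; we block in csd_lock_wait(). */
            err = smp_call_function_single(cpu, remote_work, &out, 1);

            return err ? err : out;
    }

With this patch, the wait in csd_lock_wait() no longer happens with
preemption disabled, so such callers stop inflating scheduling latency
on the calling CPU while they wait for the remote function to complete.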
Signed-off-by: Chuyi Zhou <zhouchuyi@...edance.com>
---
kernel/smp.c | 17 +++++++++++++++--
1 file changed, 15 insertions(+), 2 deletions(-)
diff --git a/kernel/smp.c b/kernel/smp.c
index fc1f7a964616..0858553f3666 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -685,11 +685,24 @@ int smp_call_function_single(int cpu, smp_call_func_t func, void *info,
err = generic_exec_single(cpu, csd);
+ /*
+ * We may block in csd_lock_wait() for a significant amount of time
+ * (e.g., if the remote CPU has interrupts disabled), so disabling
+ * preemption across the entire smp_call_function_single() hurts
+ * scheduling latency and is unnecessary:
+ *
+ * - Preemption only needs to be disabled until the IPI is sent, to
+ *   ensure no new IPIs are queued after smpcfd_dying_cpu() finishes.
+ *
+ * - @csd is stack-allocated when @wait is true and nothing but the
+ *   IPI completion path accesses it, so preemption can be re-enabled
+ *   before waiting to reduce latency.
+ */
+ put_cpu();
+
if (wait)
csd_lock_wait(csd);
- put_cpu();
-
return err;
}
EXPORT_SYMBOL(smp_call_function_single);
--
2.20.1