Message-ID: <20251003150251.520624-3-ulf.hansson@linaro.org>
Date: Fri, 3 Oct 2025 17:02:44 +0200
From: Ulf Hansson <ulf.hansson@...aro.org>
To: "Rafael J . Wysocki" <rafael@...nel.org>,
Catalin Marinas <catalin.marinas@....com>,
Will Deacon <will@...nel.org>,
Mark Rutland <mark.rutland@....com>,
Thomas Gleixner <tglx@...utronix.de>
Cc: Maulik Shah <quic_mkshah@...cinc.com>,
Sudeep Holla <sudeep.holla@....com>,
Daniel Lezcano <daniel.lezcano@...aro.org>,
Vincent Guittot <vincent.guittot@...aro.org>,
linux-pm@...r.kernel.org,
linux-arm-kernel@...ts.infradead.org,
linux-kernel@...r.kernel.org,
Ulf Hansson <ulf.hansson@...aro.org>
Subject: [PATCH 2/3] arm64: smp: Implement cpus_has_pending_ipi()
To add support for keeping track of whether there may be a pending IPI
scheduled for a CPU or a group of CPUs, let's implement
cpus_has_pending_ipi() for arm64.

Note that the implementation is intentionally lightweight and doesn't use
any additional locking, which is good enough for cpuidle-based decisions.
Signed-off-by: Ulf Hansson <ulf.hansson@...aro.org>
---
arch/arm64/kernel/smp.c | 20 ++++++++++++++++++++
1 file changed, 20 insertions(+)
diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index 68cea3a4a35c..dd1acfa91d44 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -55,6 +55,8 @@
#include <trace/events/ipi.h>
+static DEFINE_PER_CPU(bool, pending_ipi);
+
/*
* as from 2.5, kernels no longer have an init_tasks structure
* so we need some other way of telling a new secondary core
@@ -1012,6 +1014,8 @@ static void do_handle_IPI(int ipinr)
if ((unsigned)ipinr < NR_IPI)
trace_ipi_exit(ipi_types[ipinr]);
+
+ per_cpu(pending_ipi, cpu) = false;
}
static irqreturn_t ipi_handler(int irq, void *data)
@@ -1024,10 +1028,26 @@ static irqreturn_t ipi_handler(int irq, void *data)
static void smp_cross_call(const struct cpumask *target, unsigned int ipinr)
{
+ unsigned int cpu;
+
+ for_each_cpu(cpu, target)
+ per_cpu(pending_ipi, cpu) = true;
+
trace_ipi_raise(target, ipi_types[ipinr]);
arm64_send_ipi(target, ipinr);
}
+bool cpus_has_pending_ipi(const struct cpumask *mask)
+{
+ unsigned int cpu;
+
+ for_each_cpu(cpu, mask) {
+ if (per_cpu(pending_ipi, cpu))
+ return true;
+ }
+ return false;
+}
+
static bool ipi_should_be_nmi(enum ipi_msg_type ipi)
{
if (!system_uses_irq_prio_masking())
--
2.43.0