Message-ID: <176158267407.2601451.3943491086976905751.tip-bot2@tip-bot2>
Date: Mon, 27 Oct 2025 16:31:14 -0000
From: "tip-bot2 for Marc Zyngier" <tip-bot2@...utronix.de>
To: linux-tip-commits@...r.kernel.org
Cc: Marc Zyngier <maz@...nel.org>, Thomas Gleixner <tglx@...utronix.de>,
Will Deacon <will@...nel.org>, Jinjie Ruan <ruanjinjie@...wei.com>,
x86@...nel.org, linux-kernel@...r.kernel.org
Subject: [tip: irq/core] perf: arm_pmu: Convert to the new interrupt affinity
retrieval API
The following commit has been merged into the irq/core branch of tip:
Commit-ID: 663783e0013e97e18cc167139ab4319bbeaea399
Gitweb: https://git.kernel.org/tip/663783e0013e97e18cc167139ab4319bbeaea399
Author: Marc Zyngier <maz@...nel.org>
AuthorDate: Mon, 20 Oct 2025 13:29:25 +01:00
Committer: Thomas Gleixner <tglx@...utronix.de>
CommitterDate: Mon, 27 Oct 2025 17:16:33 +01:00
perf: arm_pmu: Convert to the new interrupt affinity retrieval API
Now that the relevant interrupt controllers are equipped with a callback
returning the affinity of per-CPU interrupts, switch the OF side of the ARM
PMU driver over to this new method.
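For reference, a minimal sketch of the resulting calling pattern, assuming the
platform_get_irq_affinity() prototype implied by the hunks below; the helper
name, error handling and header choices here are illustrative rather than part
of the patch:

#include <linux/cpumask.h>
#include <linux/errno.h>
#include <linux/irqdesc.h>
#include <linux/platform_device.h>

/* Illustrative helper, not part of the patch. */
static int example_parse_single_percpu_irq(struct platform_device *pdev,
					   struct cpumask *supported_cpus)
{
	const struct cpumask *affinity;
	int irq;

	/*
	 * The new helper hands back both the Linux interrupt number and
	 * the affinity mask reported by the interrupt controller, so no
	 * separate irq_get_percpu_devid_partition() call is needed.
	 */
	irq = platform_get_irq_affinity(pdev, 0, &affinity);
	if (irq <= 0 || !irq_is_percpu_devid(irq))
		return irq < 0 ? irq : -EINVAL;

	/* Take a private copy of the returned mask. */
	cpumask_copy(supported_cpus, affinity);
	return irq;
}

Copying the returned mask into a driver-owned cpumask mirrors what the patch
does with pmu->supported_cpus.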
Signed-off-by: Marc Zyngier <maz@...nel.org>
Signed-off-by: Thomas Gleixner <tglx@...utronix.de>
Tested-by: Will Deacon <will@...nel.org>
Reviewed-by: Jinjie Ruan <ruanjinjie@...wei.com>
Link: https://patch.msgid.link/20251020122944.3074811-9-maz@kernel.org
---
drivers/perf/arm_pmu_platform.c | 16 +++++++++-------
1 file changed, 9 insertions(+), 7 deletions(-)
diff --git a/drivers/perf/arm_pmu_platform.c b/drivers/perf/arm_pmu_platform.c
index 118170a..9c0494d 100644
--- a/drivers/perf/arm_pmu_platform.c
+++ b/drivers/perf/arm_pmu_platform.c
@@ -42,14 +42,13 @@ static int probe_current_pmu(struct arm_pmu *pmu,
 	return ret;
 }
 
-static int pmu_parse_percpu_irq(struct arm_pmu *pmu, int irq)
+static int pmu_parse_percpu_irq(struct arm_pmu *pmu, int irq,
+				const struct cpumask *affinity)
 {
-	int cpu, ret;
 	struct pmu_hw_events __percpu *hw_events = pmu->hw_events;
+	int cpu;
 
-	ret = irq_get_percpu_devid_partition(irq, &pmu->supported_cpus);
-	if (ret)
-		return ret;
+	cpumask_copy(&pmu->supported_cpus, affinity);
 
 	for_each_cpu(cpu, &pmu->supported_cpus)
 		per_cpu(hw_events->irq, cpu) = irq;
@@ -115,9 +114,12 @@ static int pmu_parse_irqs(struct arm_pmu *pmu)
 	}
 
 	if (num_irqs == 1) {
-		int irq = platform_get_irq(pdev, 0);
+		const struct cpumask *affinity;
+		int irq;
+
+		irq = platform_get_irq_affinity(pdev, 0, &affinity);
 		if ((irq > 0) && irq_is_percpu_devid(irq))
-			return pmu_parse_percpu_irq(pmu, irq);
+			return pmu_parse_percpu_irq(pmu, irq, affinity);
 	}
 
 	if (nr_cpu_ids != 1 && !pmu_has_irq_affinity(dev->of_node))