Message-ID: <20170907232542.20589-7-paul.burton@imgtec.com>
Date: Thu, 7 Sep 2017 16:25:39 -0700
From: Paul Burton <paul.burton@...tec.com>
To: Thomas Gleixner <tglx@...utronix.de>,
Ralf Baechle <ralf@...ux-mips.org>
CC: <dianders@...omium.org>, James Hogan <james.hogan@...tec.com>,
Brian Norris <briannorris@...omium.org>,
Jason Cooper <jason@...edaemon.net>,
<jeffy.chen@...k-chips.com>, Marc Zyngier <marc.zyngier@....com>,
<linux-kernel@...r.kernel.org>, <linux-mips@...ux-mips.org>,
<tfiga@...omium.org>, Paul Burton <paul.burton@...tec.com>
Subject: [RFC PATCH v1 6/9] MIPS: perf: percpu_devid interrupt support

The MIPS CPU performance counter overflow interrupt is really a percpu
interrupt, but up until now we have not used the percpu interrupt APIs
to configure & control it. In preparation for doing so, introduce
support for percpu_devid interrupts in the MIPS perf implementation.

We switch from request_irq() to either setup_irq() or setup_percpu_irq(),
sharing an explicit struct irqaction between the two paths so that the
flags, handler & name are set once in that struct irqaction rather than
duplicated across calls to request_irq() and request_percpu_irq().
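
As an illustration of the pattern (a minimal sketch with hypothetical
names, not part of this patch), a driver sharing one struct irqaction
between the two setup paths might look like the following, keying off
irq_is_percpu_devid() to pick the right API:

#include <linux/interrupt.h>
#include <linux/irq.h>
#include <linux/irqdesc.h>
#include <linux/percpu.h>

/* Hypothetical per-CPU state handed to the handler for percpu_devid IRQs. */
static DEFINE_PER_CPU(int, example_count);

static irqreturn_t example_handle_irq(int irq, void *dev)
{
	/* Acknowledge the hardware & do the real work here. */
	return IRQ_HANDLED;
}

static struct irqaction example_irqaction = {
	.handler	= example_handle_irq,
	.flags		= IRQF_PERCPU | IRQF_NOAUTOEN,
	.name		= "example_irq",
	.percpu_dev_id	= &example_count,
};

static int example_get_irq(unsigned int irq)
{
	/* One irqaction, two setup paths: percpu_devid vs. regular. */
	if (irq_is_percpu_devid(irq))
		return setup_percpu_irq(irq, &example_irqaction);

	return setup_irq(irq, &example_irqaction);
}
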
The IRQF_NOAUTOEN flag is passed because percpu_devid interrupts already
have IRQ_NOAUTOEN set for them by irq_set_percpu_devid_flags(); passing
it keeps the non-percpu case consistent with that behaviour. We accept
this & explicitly enable the interrupt in mipspmu_enable(), right after
configuring the local performance counters.
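
For reference, the enable/disable pairing that goes with IRQF_NOAUTOEN
looks roughly like the sketch below (hypothetical helper names; the
enable half is done inline in mipspmu_enable() in the hunk that
follows, and the disable half is only shown here for symmetry):

static void example_pmu_enable_irq(unsigned int irq)
{
	/* percpu_devid IRQs are enabled per-CPU, on the local CPU only. */
	if (irq_is_percpu_devid(irq))
		enable_percpu_irq(irq, IRQ_TYPE_NONE);
	else
		enable_irq(irq);	/* enables the whole interrupt line */
}

static void example_pmu_disable_irq(unsigned int irq)
{
	if (irq_is_percpu_devid(irq))
		disable_percpu_irq(irq);
	else
		disable_irq(irq);
}
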
Signed-off-by: Paul Burton <paul.burton@...tec.com>
Cc: James Hogan <james.hogan@...tec.com>
Cc: Jason Cooper <jason@...edaemon.net>
Cc: Marc Zyngier <marc.zyngier@....com>
Cc: Ralf Baechle <ralf@...ux-mips.org>
Cc: Thomas Gleixner <tglx@...utronix.de>
Cc: linux-kernel@...r.kernel.org
Cc: linux-mips@...ux-mips.org
---
arch/mips/kernel/perf_event_mipsxx.c | 30 +++++++++++++++++++-----------
1 file changed, 19 insertions(+), 11 deletions(-)

diff --git a/arch/mips/kernel/perf_event_mipsxx.c b/arch/mips/kernel/perf_event_mipsxx.c
index cae36ca400e9..af7bae79dc51 100644
--- a/arch/mips/kernel/perf_event_mipsxx.c
+++ b/arch/mips/kernel/perf_event_mipsxx.c
@@ -514,6 +514,11 @@ static void mipspmu_enable(struct pmu *pmu)
write_unlock(&pmuint_rwlock);
#endif
resume_local_counters();
+
+ if (irq_is_percpu_devid(mipspmu.irq))
+ enable_percpu_irq(mipspmu.irq, IRQ_TYPE_NONE);
+ else
+ enable_irq(mipspmu.irq);
}
/*
@@ -538,24 +543,27 @@ static void mipspmu_disable(struct pmu *pmu)
static atomic_t active_events = ATOMIC_INIT(0);
static DEFINE_MUTEX(pmu_reserve_mutex);
+static struct irqaction c0_perf_irqaction = {
+ .handler = mipsxx_pmu_handle_irq,
+ .flags = IRQF_PERCPU | IRQF_TIMER | IRQF_SHARED | IRQF_NOAUTOEN,
+ .name = "mips_perf_pmu",
+ .percpu_dev_id = &mipspmu,
+};
+
static int mipspmu_get_irq(void)
{
- int err;
+ if (irq_is_percpu_devid(mipspmu.irq))
+ return setup_percpu_irq(mipspmu.irq, &c0_perf_irqaction);
- err = request_irq(mipspmu.irq, mipsxx_pmu_handle_irq,
- IRQF_PERCPU | IRQF_NOBALANCING |
- IRQF_NO_THREAD | IRQF_NO_SUSPEND |
- IRQF_SHARED,
- "mips_perf_pmu", &mipspmu);
- if (err)
- pr_warn("Unable to request IRQ%d for MIPS performance counters!\n",
- mipspmu.irq);
- return err;
+ return setup_irq(mipspmu.irq, &c0_perf_irqaction);
}
static void mipspmu_free_irq(void)
{
- free_irq(mipspmu.irq, &mipspmu);
+ if (irq_is_percpu_devid(mipspmu.irq))
+ remove_percpu_irq(mipspmu.irq, &c0_perf_irqaction);
+ else
+ remove_irq(mipspmu.irq, &c0_perf_irqaction);
}
/*
--
2.14.1