Message-ID: <20230621092804.15120-6-yangyicong@huawei.com>
Date: Wed, 21 Jun 2023 17:28:04 +0800
From: Yicong Yang <yangyicong@...wei.com>
To: <mathieu.poirier@...aro.org>, <suzuki.poulose@....com>,
<jonathan.cameron@...wei.com>, <corbet@....net>,
<linux-kernel@...r.kernel.org>, <linux-doc@...r.kernel.org>
CC: <alexander.shishkin@...ux.intel.com>, <helgaas@...nel.org>,
<linux-pci@...r.kernel.org>, <prime.zeng@...wei.com>,
<linuxarm@...wei.com>, <yangyicong@...ilicon.com>,
<hejunhao3@...wei.com>
Subject: [PATCH v6 5/5] hwtracing: hisi_ptt: Fix potential sleep in atomic context
From: Yicong Yang <yangyicong@...ilicon.com>

We're using pci_irq_vector() to obtain the interrupt number and then
bind it to the CPU starting perf, under the protection of a spinlock,
in pmu::start(). pci_irq_vector() might sleep since [1] because it
calls msi_domain_get_virq() to get the MSI interrupt number, which
needs to acquire dev->msi.data->mutex. Acquiring a mutex can sleep on
contention, so using pci_irq_vector() in an atomic context is
problematic.

This patch caches the interrupt number in probe() and uses the cached
value instead to avoid the potential sleep.
[1] commit 82ff8e6b78fc ("PCI/MSI: Use msi_get_virq() in pci_get_vector()")
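
For illustration only (not part of this patch), a minimal sketch of the
pattern being fixed; the foo_* names are hypothetical:

	struct foo_dev {
		struct pci_dev *pdev;
		spinlock_t lock;
		int irq;	/* cached in probe(), safe to use under the lock */
	};

	/* Before: pci_irq_vector() may sleep, but we hold a spinlock */
	static void foo_pmu_start(struct foo_dev *foo, int cpu)
	{
		spin_lock(&foo->lock);
		/* msi_domain_get_virq() takes dev->msi.data->mutex here */
		irq_set_affinity(pci_irq_vector(foo->pdev, 0),
				 cpumask_of(cpu));
		spin_unlock(&foo->lock);
	}

	/* After: resolve the vector once in probe(), where sleeping is fine */
	static int foo_probe(struct foo_dev *foo)
	{
		foo->irq = pci_irq_vector(foo->pdev, 0);
		return foo->irq < 0 ? foo->irq : 0;
	}

	static void foo_pmu_start(struct foo_dev *foo, int cpu)
	{
		spin_lock(&foo->lock);
		irq_set_affinity(foo->irq, cpumask_of(cpu));
		spin_unlock(&foo->lock);
	}
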
Fixes: ff0de066b463 ("hwtracing: hisi_ptt: Add trace function support for HiSilicon PCIe Tune and Trace device")
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@...wei.com>
Signed-off-by: Yicong Yang <yangyicong@...ilicon.com>
---
drivers/hwtracing/ptt/hisi_ptt.c | 12 +++++-------
drivers/hwtracing/ptt/hisi_ptt.h | 2 ++
2 files changed, 7 insertions(+), 7 deletions(-)
diff --git a/drivers/hwtracing/ptt/hisi_ptt.c b/drivers/hwtracing/ptt/hisi_ptt.c
index 103fb6b9bffb..ba081b6d2435 100644
--- a/drivers/hwtracing/ptt/hisi_ptt.c
+++ b/drivers/hwtracing/ptt/hisi_ptt.c
@@ -341,13 +341,13 @@ static int hisi_ptt_register_irq(struct hisi_ptt *hisi_ptt)
if (ret < 0)
return ret;
- ret = devm_request_threaded_irq(&pdev->dev,
- pci_irq_vector(pdev, HISI_PTT_TRACE_DMA_IRQ),
+ hisi_ptt->trace_irq = pci_irq_vector(pdev, HISI_PTT_TRACE_DMA_IRQ);
+ ret = devm_request_threaded_irq(&pdev->dev, hisi_ptt->trace_irq,
NULL, hisi_ptt_isr, 0,
DRV_NAME, hisi_ptt);
if (ret) {
pci_err(pdev, "failed to request irq %d, ret = %d\n",
- pci_irq_vector(pdev, HISI_PTT_TRACE_DMA_IRQ), ret);
+ hisi_ptt->trace_irq, ret);
return ret;
}
@@ -1098,8 +1098,7 @@ static void hisi_ptt_pmu_start(struct perf_event *event, int flags)
* core in event_function_local(). If CPU passed is offline we'll fail
* here, just log it since we can do nothing here.
*/
- ret = irq_set_affinity(pci_irq_vector(hisi_ptt->pdev, HISI_PTT_TRACE_DMA_IRQ),
- cpumask_of(cpu));
+ ret = irq_set_affinity(hisi_ptt->trace_irq, cpumask_of(cpu));
if (ret)
dev_warn(dev, "failed to set the affinity of trace interrupt\n");
@@ -1394,8 +1393,7 @@ static int hisi_ptt_cpu_teardown(unsigned int cpu, struct hlist_node *node)
* Also make sure the interrupt bind to the migrated CPU as well. Warn
* the user on failure here.
*/
- if (irq_set_affinity(pci_irq_vector(hisi_ptt->pdev, HISI_PTT_TRACE_DMA_IRQ),
- cpumask_of(target)))
+ if (irq_set_affinity(hisi_ptt->trace_irq, cpumask_of(target)))
dev_warn(dev, "failed to set the affinity of trace interrupt\n");
hisi_ptt->trace_ctrl.on_cpu = target;
diff --git a/drivers/hwtracing/ptt/hisi_ptt.h b/drivers/hwtracing/ptt/hisi_ptt.h
index 164012dba4ec..e17f045d7e72 100644
--- a/drivers/hwtracing/ptt/hisi_ptt.h
+++ b/drivers/hwtracing/ptt/hisi_ptt.h
@@ -201,6 +201,7 @@ struct hisi_ptt_pmu_buf {
* @pdev: pci_dev of this PTT device
* @tune_lock: lock to serialize the tune process
* @pmu_lock: lock to serialize the perf process
+ * @trace_irq: interrupt number used by trace
* @upper_bdf: the upper BDF range of the PCI devices managed by this PTT device
* @lower_bdf: the lower BDF range of the PCI devices managed by this PTT device
* @port_filters: the filter list of root ports
@@ -221,6 +222,7 @@ struct hisi_ptt {
struct pci_dev *pdev;
struct mutex tune_lock;
spinlock_t pmu_lock;
+ int trace_irq;
u32 upper_bdf;
u32 lower_bdf;
--
2.24.0