Message-Id: <93b5d144bfc16e0c0f640d5f7cfaeda6bf08753f.1670005163.git.reinette.chatre@intel.com>
Date: Fri, 2 Dec 2022 10:25:06 -0800
From: Reinette Chatre <reinette.chatre@...el.com>
To: fenghua.yu@...el.com, dave.jiang@...el.com, vkoul@...nel.org,
dmaengine@...r.kernel.org
Cc: linux-kernel@...r.kernel.org
Subject: [PATCH 3/3] dmaengine: idxd: Do not call DMA TX callbacks during workqueue disable
On driver unload any pending descriptors are flushed and
explicitly completed:

idxd_dmaengine_drv_remove() ->
  drv_disable_wq() ->
    idxd_wq_free_irq() ->
      idxd_flush_pending_descs() ->
        idxd_dma_complete_txd()
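For context, idxd_dma_complete_txd() ends up making an indirect call
through the callback stored on the descriptor. A minimal sketch of
that common dmaengine completion pattern (assumed here, not the
verbatim idxd code), using the helpers from drivers/dma/dmaengine.h:

	/* Sketch only: the usual dmaengine completion pattern. */
	static void complete_txd_sketch(struct dma_async_tx_descriptor *tx,
					const struct dmaengine_result *res)
	{
		struct dmaengine_desc_callback cb;

		/* Snapshot tx->callback/tx->callback_result + param. */
		dmaengine_desc_get_callback(tx, &cb);
		dma_descriptor_unmap(tx);
		/*
		 * Indirect call into client code: this is what faults
		 * once that code has been unloaded.
		 */
		dmaengine_desc_callback_invoke(&cb, res);
	}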
With this done during driver unload any remaining descriptor is
likely stuck and can be dropped. Even so, the descriptor may still
have a callback set that is no longer accessible. An example of
such a problem is when dmatest fails and the dmatest module is
unloaded. The failure of dmatest leaves descriptors with
dma_async_tx_descriptor::callback pointing to code that no longer
exists. This causes a page fault like the one below when the idxd
driver is later unloaded and attempts to run the callback:
BUG: unable to handle page fault for address: ffffffffc0665190
#PF: supervisor instruction fetch in kernel mode
#PF: error_code(0x0010) - not-present page
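For illustration, a hedged sketch of how a client such as dmatest
typically arms that callback (client_done_callback and priv are
hypothetical names, error handling elided); the stored function
pointer lives in the client module's text and dangles once the
module is unloaded:

	struct dma_async_tx_descriptor *tx;

	tx = dmaengine_prep_dma_memcpy(chan, dst_dma, src_dma, len,
				       DMA_PREP_INTERRUPT);
	if (!tx)
		return -EIO;
	tx->callback = client_done_callback;	/* in module text */
	tx->callback_param = priv;
	dmaengine_submit(tx);
	dma_async_issue_pending(chan);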
Fix this by clearing the callback pointers on the transmit
descriptors, but only when the workqueue is being disabled.
Signed-off-by: Reinette Chatre <reinette.chatre@...el.com>
---
The history of refactoring made it hard for me to identify an
accurate Fixes: tag.
drivers/dma/idxd/device.c | 10 ++++++++++
1 file changed, 10 insertions(+)
diff --git a/drivers/dma/idxd/device.c b/drivers/dma/idxd/device.c
index b4d7bb923a40..2ac71a34fa34 100644
--- a/drivers/dma/idxd/device.c
+++ b/drivers/dma/idxd/device.c
@@ -1156,6 +1156,7 @@ int idxd_device_load_config(struct idxd_device *idxd)
 
 static void idxd_flush_pending_descs(struct idxd_irq_entry *ie)
 {
+	struct dma_async_tx_descriptor *tx;
 	struct idxd_desc *desc, *itr;
 	struct llist_node *head;
 	LIST_HEAD(flist);
@@ -1175,6 +1176,15 @@ static void idxd_flush_pending_descs(struct idxd_irq_entry *ie)
 	list_for_each_entry_safe(desc, itr, &flist, list) {
 		list_del(&desc->list);
 		ctype = desc->completion->status ? IDXD_COMPLETE_NORMAL : IDXD_COMPLETE_ABORT;
+		/*
+		 * wq is being disabled. Any remaining descriptors are
+		 * likely to be stuck and can be dropped. callback could
+		 * point to code that is no longer accessible, for example
+		 * if dmatest module has been unloaded.
+		 */
+		tx = &desc->txd;
+		tx->callback = NULL;
+		tx->callback_result = NULL;
 		idxd_dma_complete_txd(desc, ctype, true);
 	}
 }
--
2.34.1