Message-ID: <044897a1-e6e1-b80a-e4cb-6b87423680fe@intel.com>
Date: Fri, 2 Dec 2022 11:45:38 -0700
From: Dave Jiang <dave.jiang@...el.com>
To: Reinette Chatre <reinette.chatre@...el.com>, fenghua.yu@...el.com,
vkoul@...nel.org, dmaengine@...r.kernel.org
Cc: linux-kernel@...r.kernel.org
Subject: Re: [PATCH 3/3] dmaengine: idxd: Do not call DMA TX callbacks during
workqueue disable
On 12/2/2022 11:25 AM, Reinette Chatre wrote:
> On driver unload any pending descriptors are flushed and pending
> DMA descriptors are explicitly completed:
> idxd_dmaengine_drv_remove() ->
> drv_disable_wq() ->
> idxd_wq_free_irq() ->
> idxd_flush_pending_descs() ->
> idxd_dma_complete_txd()
>
> Since this flushing happens during driver unload, any remaining
> descriptor is likely stuck and can be dropped. Even so, a descriptor
> may still have a callback set that is no longer accessible. One
> example of such a problem is when dmatest fails and the dmatest
> module is unloaded. The failure of dmatest leaves descriptors with
> dma_async_tx_descriptor::callback pointing to code that no longer
> exists. This causes a page fault as below when the IDXD driver
> is unloaded and attempts to run the callback:
> BUG: unable to handle page fault for address: ffffffffc0665190
> #PF: supervisor instruction fetch in kernel mode
> #PF: error_code(0x0010) - not-present page
>
> Fix this by clearing the callback pointers on the transmit
> descriptors, but only while the workqueue is being disabled.
>
> Signed-off-by: Reinette Chatre <reinette.chatre@...el.com>
Reviewed-by: Dave Jiang <dave.jiang@...el.com>
> ---
>
> The history of refactoring made it hard for me to identify a Fixes: tag.
>
> drivers/dma/idxd/device.c | 10 ++++++++++
> 1 file changed, 10 insertions(+)
>
> diff --git a/drivers/dma/idxd/device.c b/drivers/dma/idxd/device.c
> index b4d7bb923a40..2ac71a34fa34 100644
> --- a/drivers/dma/idxd/device.c
> +++ b/drivers/dma/idxd/device.c
> @@ -1156,6 +1156,7 @@ int idxd_device_load_config(struct idxd_device *idxd)
>
> static void idxd_flush_pending_descs(struct idxd_irq_entry *ie)
> {
> + struct dma_async_tx_descriptor *tx;
> struct idxd_desc *desc, *itr;
> struct llist_node *head;
> LIST_HEAD(flist);
> @@ -1175,6 +1176,15 @@ static void idxd_flush_pending_descs(struct idxd_irq_entry *ie)
> list_for_each_entry_safe(desc, itr, &flist, list) {
> list_del(&desc->list);
> ctype = desc->completion->status ? IDXD_COMPLETE_NORMAL : IDXD_COMPLETE_ABORT;
> + /*
> + * wq is being disabled. Any remaining descriptors are
> + * likely to be stuck and can be dropped. callback could
> + * point to code that is no longer accessible, for example
> + * if dmatest module has been unloaded.
> + */
> + tx = &desc->txd;
> + tx->callback = NULL;
> + tx->callback_result = NULL;
> idxd_dma_complete_txd(desc, ctype, true);
> }
> }