Message-ID: <d9a6e626-96fb-0593-3410-3bed6c3fb3d0@intel.com>
Date: Fri, 2 Dec 2022 13:51:11 -0800
From: Reinette Chatre <reinette.chatre@...el.com>
To: "Yu, Fenghua" <fenghua.yu@...el.com>,
"Jiang, Dave" <dave.jiang@...el.com>,
"vkoul@...nel.org" <vkoul@...nel.org>,
"dmaengine@...r.kernel.org" <dmaengine@...r.kernel.org>
CC: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 3/3] dmaengine: idxd: Do not call DMA TX callbacks during
workqueue disable
Hi Fenghua,
On 12/2/2022 1:12 PM, Yu, Fenghua wrote:
...
>> diff --git a/drivers/dma/idxd/device.c b/drivers/dma/idxd/device.c
>> index b4d7bb923a40..2ac71a34fa34 100644
>> --- a/drivers/dma/idxd/device.c
>> +++ b/drivers/dma/idxd/device.c
>> @@ -1156,6 +1156,7 @@ int idxd_device_load_config(struct idxd_device *idxd)
>>
>> static void idxd_flush_pending_descs(struct idxd_irq_entry *ie)
>> {
>> + struct dma_async_tx_descriptor *tx;
>
> Nitpicking. It's better to move this declaration down into the loop body:
>
>> struct idxd_desc *desc, *itr;
>> struct llist_node *head;
>> LIST_HEAD(flist);
>> @@ -1175,6 +1176,15 @@ static void idxd_flush_pending_descs(struct idxd_irq_entry *ie)
>> list_for_each_entry_safe(desc, itr, &flist, list) {
>
> here?
> + struct dma_async_tx_descriptor *tx;
>
Will do.
>> list_del(&desc->list);
>> ctype = desc->completion->status ?
>> IDXD_COMPLETE_NORMAL : IDXD_COMPLETE_ABORT;
>> + /*
>> + * wq is being disabled. Any remaining descriptors are
>> + * likely to be stuck and can be dropped. The callback could
>> + * point to code that is no longer accessible, for example
>> + * if the dmatest module has been unloaded.
>> + */
>> + tx = &desc->txd;
>> + tx->callback = NULL;
>> + tx->callback_result = NULL;
>> idxd_dma_complete_txd(desc, ctype, true);
>> }
>> }
>> --
>> 2.34.1
>
> Reviewed-by: Fenghua Yu <fenghua.yu@...el.com>
Thank you very much.
Reinette