Message-ID: <20251027004331.562345-3-den@valinux.co.jp>
Date: Mon, 27 Oct 2025 09:43:29 +0900
From: Koichiro Den <den@...inux.co.jp>
To: ntb@...ts.linux.dev,
linux-kernel@...r.kernel.org
Cc: jdmason@...zu.us,
dave.jiang@...el.com,
allenbh@...il.com
Subject: [PATCH 2/4] NTB: ntb_transport: Ack DMA memcpy descriptors to avoid wait-list growth
ntb_transport prepares DMA memcpy transactions but never acks the
descriptors afterwards. On dmaengines that honor ACK semantics
(e.g. rcar-dmac), completed descriptors are moved to the 'wait' list
and only recycled once they are acked. Since ntb_transport does not
chain, inspect residue, or otherwise retain transaction descriptors
after completion, we can mark them as acked at prep time.
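
For reference, a minimal sketch of the dmaengine ACK contract as seen
from a client (illustrative only, not part of this patch; the helper
name and error handling below are made up):

#include <linux/dmaengine.h>

/* Hypothetical helper: issue a fire-and-forget memcpy. Passing
 * DMA_CTRL_ACK at prep time tells the engine that the client will
 * never reuse or inspect this descriptor, so it may be recycled as
 * soon as it completes; without the flag, the client would have to
 * ack the descriptor later via async_tx_ack(). */
static int example_issue_memcpy(struct dma_chan *chan, dma_addr_t dst,
				dma_addr_t src, size_t len)
{
	struct dma_device *device = chan->device;
	struct dma_async_tx_descriptor *txd;

	txd = device->device_prep_dma_memcpy(chan, dst, src, len,
					      DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
	if (!txd)
		return -ENOMEM;

	dmaengine_submit(txd);
	dma_async_issue_pending(chan);
	return 0;
}
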
Set DMA_CTRL_ACK when preparing RX/TX memcpy transfers so that the
engine can recycle descriptors immediately after completion. This
prevents unbounded growth of the wait list, which was observed on R-Car
S4 (rcar-dmac). Engines that ignore ACK or auto-recycle their
descriptors are unaffected.
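
To illustrate why unacked descriptors pile up, here is a simplified
sketch (not the actual rcar-dmac code; the struct and function names
are hypothetical) of how an ACK-honoring driver recycles completed
descriptors:

#include <linux/dmaengine.h>
#include <linux/list.h>

struct example_desc {			/* hypothetical driver descriptor */
	struct dma_async_tx_descriptor async_tx;
	struct list_head node;
};

struct example_desc_lists {		/* hypothetical per-channel lists */
	struct list_head wait;		/* completed, waiting for client ack */
	struct list_head free;		/* ready to be reused by prep */
};

/* Move every descriptor the client has acked from the wait list to
 * the free list. Descriptors that are never acked stay on the wait
 * list forever, which is the unbounded growth this patch avoids. */
static void example_recycle_acked(struct example_desc_lists *lists)
{
	struct example_desc *desc, *tmp;

	list_for_each_entry_safe(desc, tmp, &lists->wait, node) {
		if (!async_tx_test_ack(&desc->async_tx))
			continue;
		list_move_tail(&desc->node, &lists->free);
	}
}
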
Signed-off-by: Koichiro Den <den@...inux.co.jp>
---
drivers/ntb/ntb_transport.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/ntb/ntb_transport.c b/drivers/ntb/ntb_transport.c
index b9f9d2e0feb3..a447eca27d0f 100644
--- a/drivers/ntb/ntb_transport.c
+++ b/drivers/ntb/ntb_transport.c
@@ -1591,7 +1591,7 @@ static int ntb_async_rx_submit(struct ntb_queue_entry *entry, void *offset)
 
 	txd = device->device_prep_dma_memcpy(chan, unmap->addr[1],
 					     unmap->addr[0], len,
-					     DMA_PREP_INTERRUPT);
+					     DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
 	if (!txd)
 		goto err_get_unmap;
 
@@ -1864,7 +1864,7 @@ static int ntb_async_tx_submit(struct ntb_transport_qp *qp,
 	unmap->to_cnt = 1;
 
 	txd = device->device_prep_dma_memcpy(chan, dest, unmap->addr[0], len,
-					      DMA_PREP_INTERRUPT);
+					      DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
 	if (!txd)
 		goto err_get_unmap;
 
--
2.48.1