Message-Id: <20220411140348.30252-1-nick.graumann@gmail.com>
Date:   Mon, 11 Apr 2022 09:03:48 -0500
From:   Nicholas Graumann <nick.graumann@...il.com>
To:     Radhey Shyam Pandey <radhey.shyam.pandey@...inx.com>,
        Appana Durga Kedareswara rao <appana.durga.rao@...inx.com>,
        Harini Katakam <harini.katakam@...inx.com>,
        Vinod Koul <vkoul@...nel.org>,
        Michal Simek <michal.simek@...inx.com>,
        Kedareswara rao Appana <appanad@...inx.com>
Cc:     Nicholas Graumann <nick.graumann@...il.com>,
        dmaengine@...r.kernel.org, linux-arm-kernel@...ts.infradead.org,
        linux-kernel@...r.kernel.org
Subject: [PATCH v3] dmaengine: xilinx_dma: Free descriptor lists in order

If xilinx_dma_terminate_all is called while the AXI DMA is active, the
following error messages might be seen upon restarting the DMA:

[   72.556254] xilinx_dma_irq_handler: Channel d053cb5b has errors 10, cdr 2c049f80 tdr 2c04a000
[   72.557370] xilinx_dma_irq_handler: Channel d053cb5b has errors 100, cdr 2c049f80 tdr 2c049f80

From then on the AXI DMA won't process any more descriptors until the
DMA channel is released and requested again.

The following sequence of events causes this to happen (a client-side
sketch of the same sequence, in terms of the generic dmaengine API,
follows the list):

1. Some descriptors are prepared and submitted with xilinx_dma_tx_submit
   (so they get added to pending_list).
2. The DMA is kicked off via a call to xilinx_dma_issue_pending (the
   descriptors are moved to active_list).
3. While the transfer is active, another descriptor is prepared and
   submitted with xilinx_dma_tx_submit (so it goes onto pending_list).
4. Before the transfers complete, xilinx_dma_terminate_all is called.
   That function resets the channel and then calls
   xilinx_dma_free_descriptors to free the descriptors.
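
For reference, a minimal client-side sketch of the sequence above,
using only the generic dmaengine client API (the function name, buffer
handles and flags are illustrative, not taken from any particular
client driver; error handling is trimmed):

#include <linux/dmaengine.h>

/* Illustrative reproducer sketch; chan, buffers and lengths are assumed
 * to have been set up elsewhere by the client driver.
 */
static void reproduce_out_of_order_free(struct dma_chan *chan,
					dma_addr_t buf1, dma_addr_t buf2,
					dma_addr_t buf3, size_t len)
{
	struct dma_async_tx_descriptor *d1, *d2, *d3;

	/* 1. Prepare and submit descriptors: they land on pending_list. */
	d1 = dmaengine_prep_slave_single(chan, buf1, len, DMA_MEM_TO_DEV,
					 DMA_PREP_INTERRUPT);
	d2 = dmaengine_prep_slave_single(chan, buf2, len, DMA_MEM_TO_DEV,
					 DMA_PREP_INTERRUPT);
	if (!d1 || !d2)
		return;
	dmaengine_submit(d1);
	dmaengine_submit(d2);

	/* 2. Kick off the transfer: pending_list is spliced onto active_list. */
	dma_async_issue_pending(chan);

	/* 3. Submit another descriptor while the transfer is running: it is
	 *    queued on pending_list behind the ones now on active_list.
	 */
	d3 = dmaengine_prep_slave_single(chan, buf3, len, DMA_MEM_TO_DEV,
					 DMA_PREP_INTERRUPT);
	if (!d3)
		return;
	dmaengine_submit(d3);

	/* 4. Terminate before completion: the driver resets the channel and
	 *    frees pending_list, active_list and done_list.
	 */
	dmaengine_terminate_sync(chan);
}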

At that point, pending_list contains a descriptor that is newer (and
thus farther down the chain of descriptors) than the descriptors
prepared in (1). However, its segments get placed onto free_seg_list
before the segments of the older descriptors. From then on, the
hardware next pointers are no longer valid because the order of the
segments in free_seg_list no longer matches the order in which they
were allocated.
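
The ordering requirement comes from how the AXI DMA channel sets up its
segment pool: segments live in one DMA-coherent block, and each
segment's hardware next pointer is fixed at pool-setup time to the
segment that physically follows it. A simplified sketch of that kind of
initialization (field names and the NUM_SEGS constant are
approximations, not a verbatim quote of the driver):

/* Simplified model of the AXI DMA segment pool initialization. */
static void init_seg_pool_model(struct xilinx_dma_chan *chan)
{
	int i;

	for (i = 0; i < NUM_SEGS; i++) {
		/* The hardware chain is fixed here and never rewritten
		 * later: each segment points at the segment that
		 * physically follows it in the coherent block.
		 */
		chan->seg_v[i].hw.next_desc = lower_32_bits(chan->seg_p +
				sizeof(*chan->seg_v) * ((i + 1) % NUM_SEGS));
		chan->seg_v[i].phys = chan->seg_p + sizeof(*chan->seg_v) * i;

		/* The software free list starts out in the same order. */
		list_add_tail(&chan->seg_v[i].node, &chan->free_seg_list);
	}
}

Segment allocation pops from the head of free_seg_list and freeing
appends to its tail, so the software list only keeps agreeing with the
fixed hardware chain if segments are returned in the same relative
order they were handed out; freeing pending_list before active_list
breaks that invariant.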

To remedy this, the descriptor lists need to be freed in order from
oldest to newest; otherwise, segments can be returned to the free
segment list in a different order than they were allocated. This is not
an issue for VDMA or CDMA because the driver does not maintain a free
segment list for those DMA types.

Fixes: c0bba3a99f07 ("dmaengine: vdma: Add Support for Xilinx AXI Direct Memory Access Engine")
Signed-off-by: Nicholas Graumann <nick.graumann@...il.com>
Reviewed-by: Radhey Shyam Pandey <radhey.shyam.pandey@...inx.com>
---
 drivers/dma/xilinx/xilinx_dma.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/drivers/dma/xilinx/xilinx_dma.c b/drivers/dma/xilinx/xilinx_dma.c
index 3ff9fa3d8cd5..3b435449cd0c 100644
--- a/drivers/dma/xilinx/xilinx_dma.c
+++ b/drivers/dma/xilinx/xilinx_dma.c
@@ -884,9 +884,13 @@ static void xilinx_dma_free_descriptors(struct xilinx_dma_chan *chan)
 
 	spin_lock_irqsave(&chan->lock, flags);
 
-	xilinx_dma_free_desc_list(chan, &chan->pending_list);
+	/*
+	 * Descriptor lists must be freed from oldest to newest so that the
+	 * order of free_seg_list is maintained.
+	 */
 	xilinx_dma_free_desc_list(chan, &chan->done_list);
 	xilinx_dma_free_desc_list(chan, &chan->active_list);
+	xilinx_dma_free_desc_list(chan, &chan->pending_list);
 
 	spin_unlock_irqrestore(&chan->lock, flags);
 }
-- 
2.35.1
