Message-ID:
 <BL3PR12MB65714A495A20FE6C955D80E5C97BA@BL3PR12MB6571.namprd12.prod.outlook.com>
Date: Wed, 25 Jun 2025 06:14:29 +0000
From: "Gupta, Suraj" <Suraj.Gupta2@....com>
To: Folker Schwesinger <dev@...ker-schwesinger.de>,
	"dmaengine@...r.kernel.org" <dmaengine@...r.kernel.org>,
	"linux-arm-kernel@...ts.infradead.org"
	<linux-arm-kernel@...ts.infradead.org>, "linux-kernel@...r.kernel.org"
	<linux-kernel@...r.kernel.org>
CC: Vinod Koul <vkoul@...nel.org>, "Simek, Michal" <michal.simek@....com>,
	Jernej Skrabec <jernej.skrabec@...il.com>, Manivannan Sadhasivam
	<manivannan.sadhasivam@...aro.org>, Krzysztof Kozlowski
	<krzysztof.kozlowski@...aro.org>, Uwe Kleine-König
	<u.kleine-koenig@...libre.com>, Marek Vasut <marex@...x.de>, "Pandey, Radhey
 Shyam" <radhey.shyam.pandey@....com>
Subject: RE: [PATCH v2] dmaengine: xilinx_dma: Support descriptor setup from
 dma_vecs

> -----Original Message-----
> From: Folker Schwesinger <dev@...ker-schwesinger.de>
> Sent: Thursday, June 19, 2025 12:03 PM
> To: dmaengine@...r.kernel.org; linux-arm-kernel@...ts.infradead.org;
> linux-kernel@...r.kernel.org
> Cc: Vinod Koul <vkoul@...nel.org>; Simek, Michal <michal.simek@....com>;
> Jernej Skrabec <jernej.skrabec@...il.com>; Manivannan Sadhasivam
> <manivannan.sadhasivam@...aro.org>; Krzysztof Kozlowski
> <krzysztof.kozlowski@...aro.org>; Uwe Kleine-König <u.kleine-koenig@...libre.com>;
> Marek Vasut <marex@...x.de>; Pandey, Radhey Shyam <radhey.shyam.pandey@....com>
> Subject: [PATCH v2] dmaengine: xilinx_dma: Support descriptor setup from dma_vecs
>
> The DMAEngine provides an interface for obtaining DMA transaction descriptors
> from an array of scatter-gather buffers represented by struct dma_vec. This
> interface is used in the DMABUF API of the IIO framework [1].
> To enable DMABUF support through the IIO framework for the Xilinx DMA,
> implement the .device_prep_peripheral_dma_vec() callback of struct dma_device
> in the driver.
>
> [1]: https://elixir.bootlin.com/linux/v6.16-rc1/source/drivers/iio/buffer/industrialio-buffer-dmaengine.c#L104
>
> Signed-off-by: Folker Schwesinger <dev@...ker-schwesinger.de>

The implementation is the same as in xilinx_dma_prep_slave_sg(); looks fine to me.

Reviewed-by: Suraj Gupta <suraj.gupta2@....com>
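
For reference, the consumer side of this interface looks roughly as follows
(a sketch only, not part of the patch: "chan" stands for a channel obtained
via dma_request_chan() and "buf_dma_addr" for an already-mapped DMA address):

#include <linux/dmaengine.h>

/* Describe two 4 KiB chunks of a mapped buffer as an array of dma_vecs. */
struct dma_vec vecs[] = {
        { .addr = buf_dma_addr,        .len = 4096 },
        { .addr = buf_dma_addr + 4096, .len = 4096 },
};
struct dma_async_tx_descriptor *txd;

/* This wrapper dispatches to the driver's .device_prep_peripheral_dma_vec(). */
txd = dmaengine_prep_peripheral_dma_vec(chan, vecs, ARRAY_SIZE(vecs),
                                        DMA_MEM_TO_DEV, DMA_PREP_INTERRUPT);
if (txd) {
        dmaengine_submit(txd);
        dma_async_issue_pending(chan);
}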
>
> ---
> Changes in v2:
> - Improve commit message to include reasoning behind the change.
> - Rebase onto v6.16-rc1.
> - Link to v1:
> https://lore.kernel.org/dmaengine/D8TV2MP99NTE.1842MMA04VB9N@folker-schwesinger.de/
> ---
>  drivers/dma/xilinx/xilinx_dma.c | 94 +++++++++++++++++++++++++++++++++
>  1 file changed, 94 insertions(+)
>
> diff --git a/drivers/dma/xilinx/xilinx_dma.c b/drivers/dma/xilinx/xilinx_dma.c
> index a34d8f0ceed8..fabff602065f 100644
> --- a/drivers/dma/xilinx/xilinx_dma.c
> +++ b/drivers/dma/xilinx/xilinx_dma.c
> @@ -2172,6 +2172,99 @@ xilinx_cdma_prep_memcpy(struct dma_chan *dchan, dma_addr_t dma_dst,
>         return NULL;
>  }
>
> +/**
> + * xilinx_dma_prep_peripheral_dma_vec - prepare descriptors for a DMA_SLAVE
> + *     transaction from DMA vectors
> + * @dchan: DMA channel
> + * @vecs: Array of DMA vectors that should be transferred
> + * @nb: number of entries in @vecs
> + * @direction: DMA direction
> + * @flags: transfer ack flags
> + *
> + * Return: Async transaction descriptor on success and NULL on failure
> + */
> +static struct dma_async_tx_descriptor *xilinx_dma_prep_peripheral_dma_vec(
> +       struct dma_chan *dchan, const struct dma_vec *vecs, size_t nb,
> +       enum dma_transfer_direction direction, unsigned long flags)
> +{
> +       struct xilinx_dma_chan *chan = to_xilinx_chan(dchan);
> +       struct xilinx_dma_tx_descriptor *desc;
> +       struct xilinx_axidma_tx_segment *segment, *head, *prev = NULL;
> +       size_t copy;
> +       size_t sg_used;
> +       unsigned int i;
> +
> +       if (!is_slave_direction(direction) || direction != chan->direction)
> +               return NULL;
> +
> +       desc = xilinx_dma_alloc_tx_descriptor(chan);
> +       if (!desc)
> +               return NULL;
> +
> +       dma_async_tx_descriptor_init(&desc->async_tx, &chan->common);
> +       desc->async_tx.tx_submit = xilinx_dma_tx_submit;
> +
> +       /* Build transactions using information from DMA vectors */
> +       for (i = 0; i < nb; i++) {
> +               sg_used = 0;
> +
> +               /* Loop until the entire dma_vec entry is used */
> +               while (sg_used < vecs[i].len) {
> +                       struct xilinx_axidma_desc_hw *hw;
> +
> +                       /* Get a free segment */
> +                       segment = xilinx_axidma_alloc_tx_segment(chan);
> +                       if (!segment)
> +                               goto error;
> +
> +                       /*
> +                        * Calculate the maximum number of bytes to transfer,
> +                        * making sure it is less than the hw limit
> +                        */
> +                       copy = xilinx_dma_calc_copysize(chan, vecs[i].len,
> +                                       sg_used);
> +                       hw = &segment->hw;
> +
> +                       /* Fill in the descriptor */
> +                       xilinx_axidma_buf(chan, hw, vecs[i].addr, sg_used, 0);
> +                       hw->control = copy;
> +
> +                       if (prev)
> +                               prev->hw.next_desc = segment->phys;
> +
> +                       prev = segment;
> +                       sg_used += copy;
> +
> +                       /*
> +                        * Insert the segment into the descriptor segments
> +                        * list.
> +                        */
> +                       list_add_tail(&segment->node, &desc->segments);
> +               }
> +       }
> +
> +       head = list_first_entry(&desc->segments, struct xilinx_axidma_tx_segment, node);
> +       desc->async_tx.phys = head->phys;
> +
> +       /* Set SOP on the first and EOP on the last DMA_MEM_TO_DEV segment */
> +       if (chan->direction == DMA_MEM_TO_DEV) {
> +               head->hw.control |= XILINX_DMA_BD_SOP;
> +               segment = list_last_entry(&desc->segments,
> +                                         struct xilinx_axidma_tx_segment,
> +                                         node);
> +               segment->hw.control |= XILINX_DMA_BD_EOP;
> +       }
> +
> +       if (chan->xdev->has_axistream_connected)
> +               desc->async_tx.metadata_ops = &xilinx_dma_metadata_ops;
> +
> +       return &desc->async_tx;
> +
> +error:
> +       xilinx_dma_free_tx_descriptor(chan, desc);
> +       return NULL;
> +}
> +
>  /**
>   * xilinx_dma_prep_slave_sg - prepare descriptors for a DMA_SLAVE transaction
>   * @dchan: DMA channel
> @@ -3180,6 +3273,7 @@ static int xilinx_dma_probe(struct platform_device *pdev)
>         xdev->common.device_config = xilinx_dma_device_config;
>         if (xdev->dma_config->dmatype == XDMA_TYPE_AXIDMA) {
>                 dma_cap_set(DMA_CYCLIC, xdev->common.cap_mask);
> +               xdev->common.device_prep_peripheral_dma_vec =
> +                               xilinx_dma_prep_peripheral_dma_vec;
>                 xdev->common.device_prep_slave_sg = xilinx_dma_prep_slave_sg;
>                 xdev->common.device_prep_dma_cyclic =
>                                           xilinx_dma_prep_dma_cyclic;
>
> base-commit: 19272b37aa4f83ca52bdf9c16d5d81bdd1354494
> --
> 2.49.0
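
A note on the splitting logic for readers following along: each dma_vec entry
is chopped into hardware segments no larger than the per-segment limit
enforced by xilinx_dma_calc_copysize(). A standalone toy model of that inner
while loop (the 4096-byte cap is an illustrative assumption, not the driver's
actual limit):

#include <stdio.h>
#include <stddef.h>

int main(void)
{
        size_t len = 10000, max_copy = 4096, sg_used = 0;

        /* Mirrors "while (sg_used < vecs[i].len)" in the patch. */
        while (sg_used < len) {
                size_t copy = len - sg_used < max_copy ? len - sg_used
                                                       : max_copy;
                printf("segment at offset %zu, %zu bytes\n", sg_used, copy);
                sg_used += copy;        /* 10000 -> 4096 + 4096 + 1808 */
        }
        return 0;
}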

