Message-ID: <Zmh7cFgKSamZmT4c@matsya>
Date: Tue, 11 Jun 2024 21:59:36 +0530
From: Vinod Koul <vkoul@...nel.org>
To: Paul Cercueil <paul@...pouillou.net>
Cc: Jonathan Cameron <jic23@...nel.org>,
Lars-Peter Clausen <lars@...afoo.de>,
Sumit Semwal <sumit.semwal@...aro.org>,
Christian König <christian.koenig@....com>,
Jonathan Corbet <corbet@....net>, Nuno Sa <nuno.sa@...log.com>,
linux-iio@...r.kernel.org, linux-doc@...r.kernel.org,
linux-kernel@...r.kernel.org, dmaengine@...r.kernel.org,
linux-media@...r.kernel.org, dri-devel@...ts.freedesktop.org,
linaro-mm-sig@...ts.linaro.org
Subject: Re: [PATCH v10 1/6] dmaengine: Add API function
dmaengine_prep_peripheral_dma_vec()
On 05-06-24, 13:08, Paul Cercueil wrote:
> This function can be used to initiate a scatter-gather DMA transfer,
> where the address and size of each segment is located in one entry of
> the dma_vec array.
>
> The major difference from dmaengine_prep_slave_sg() is that it supports
> specifying the length of each DMA transfer individually; trying to
> override the transfer lengths with dmaengine_prep_slave_sg() is a very
> tedious process. The introduction of a new API function is also
> justified by the fact that scatterlists are on their way out.
>
> Note that dmaengine_prep_interleaved_dma() is not helpful either in this
> case, as it assumes that the address of each segment will be higher than
> that of the previous segment, which we just cannot guarantee for a
> scatter-gather transfer.
This looks good to me, but it is missing Documentation changes for this
API; please add that.
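
For illustration only, a minimal consumer-side sketch of how the new
helper could be called; the queue_two_blocks() function, its arguments
and the chosen flags are assumptions for the example, not part of the
patch:

#include <linux/dmaengine.h>

/* Queue two pre-mapped segments in a single descriptor (device -> memory). */
static int queue_two_blocks(struct dma_chan *chan,
			    dma_addr_t addr0, size_t len0,
			    dma_addr_t addr1, size_t len1)
{
	struct dma_vec vecs[] = {
		{ .addr = addr0, .len = len0 },
		{ .addr = addr1, .len = len1 },
	};
	struct dma_async_tx_descriptor *desc;
	dma_cookie_t cookie;

	/* One descriptor covering both vectors, each with its own length. */
	desc = dmaengine_prep_peripheral_dma_vec(chan, vecs, ARRAY_SIZE(vecs),
						 DMA_DEV_TO_MEM,
						 DMA_PREP_INTERRUPT);
	if (!desc)
		return -ENOMEM;

	cookie = dmaengine_submit(desc);
	if (dma_submit_error(cookie))
		return -EIO;

	dma_async_issue_pending(chan);
	return 0;
}
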
>
> Signed-off-by: Paul Cercueil <paul@...pouillou.net>
> Signed-off-by: Nuno Sa <nuno.sa@...log.com>
>
> ---
> v3: New patch
>
> v5: Replace with function dmaengine_prep_slave_dma_vec(), and struct
> 'dma_vec'.
> Note that at some point we will need to support cyclic transfers
> using dmaengine_prep_slave_dma_vec(). Maybe with a new "flags"
> parameter to the function?
>
> v7:
> - Renamed *device_prep_slave_dma_vec() -> device_prep_peripheral_dma_vec();
> - Added a new flag parameter to the function as agreed between Paul
> and Vinod. I renamed the first parameter to prep_flags as it's supposed to
> be used (I think) with enum dma_ctrl_flags. I'm not really sure how that API
> can grow, but I was thinking of just having a bool cyclic parameter (as the
> first intention of the flags is to support cyclic transfers), but ended up
> "respecting" the previously agreed approach.
>
> v10:
> - Add kernel doc to dmaengine_prep_peripheral_dma_vec()
> - Remove extra flags parameter
> ---
> include/linux/dmaengine.h | 33 +++++++++++++++++++++++++++++++++
> 1 file changed, 33 insertions(+)
>
> diff --git a/include/linux/dmaengine.h b/include/linux/dmaengine.h
> index 752dbde4cec1..9fc03068cabc 100644
> --- a/include/linux/dmaengine.h
> +++ b/include/linux/dmaengine.h
> @@ -160,6 +160,16 @@ struct dma_interleaved_template {
> 	struct data_chunk sgl[];
> };
>
> +/**
> + * struct dma_vec - DMA vector
> + * @addr: Bus address of the start of the vector
> + * @len: Length in bytes of the DMA vector
> + */
> +struct dma_vec {
> +	dma_addr_t addr;
> +	size_t len;
> +};
> +
> /**
> * enum dma_ctrl_flags - DMA flags to augment operation preparation,
> * control completion, and communicate status.
> @@ -910,6 +920,10 @@ struct dma_device {
> 	struct dma_async_tx_descriptor *(*device_prep_dma_interrupt)(
> 		struct dma_chan *chan, unsigned long flags);
> 
> +	struct dma_async_tx_descriptor *(*device_prep_peripheral_dma_vec)(
> +		struct dma_chan *chan, const struct dma_vec *vecs,
> +		size_t nents, enum dma_transfer_direction direction,
> +		unsigned long flags);
> 	struct dma_async_tx_descriptor *(*device_prep_slave_sg)(
> 		struct dma_chan *chan, struct scatterlist *sgl,
> 		unsigned int sg_len, enum dma_transfer_direction direction,
> @@ -973,6 +987,25 @@ static inline struct dma_async_tx_descriptor *dmaengine_prep_slave_single(
> 						  dir, flags, NULL);
> }
>
> +/**
> + * dmaengine_prep_peripheral_dma_vec() - Prepare a DMA scatter-gather descriptor
> + * @chan: The channel to be used for this descriptor
> + * @vecs: The array of DMA vectors that should be transferred
> + * @nents: The number of DMA vectors in the array
> + * @dir: Specifies the direction of the data transfer
> + * @flags: DMA engine flags
> + */
> +static inline struct dma_async_tx_descriptor *dmaengine_prep_peripheral_dma_vec(
> +	struct dma_chan *chan, const struct dma_vec *vecs, size_t nents,
> +	enum dma_transfer_direction dir, unsigned long flags)
> +{
> +	if (!chan || !chan->device || !chan->device->device_prep_peripheral_dma_vec)
> +		return NULL;
> +
> +	return chan->device->device_prep_peripheral_dma_vec(chan, vecs, nents,
> +							     dir, flags);
> +}
> +
> static inline struct dma_async_tx_descriptor *dmaengine_prep_slave_sg(
> 	struct dma_chan *chan, struct scatterlist *sgl, unsigned int sg_len,
> 	enum dma_transfer_direction dir, unsigned long flags)
> --
> 2.43.0
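
On the provider side, a rough sketch of how a controller driver might
wire up the new callback; the foo_* names, the descriptor types and
helpers, and the use of the drivers/dma/virt-dma.h helpers below are
hypothetical stand-ins, not taken from any existing driver:

/* Hypothetical controller-driver hook: one hardware segment per vector. */
static struct dma_async_tx_descriptor *
foo_prep_peripheral_dma_vec(struct dma_chan *chan,
			    const struct dma_vec *vecs, size_t nents,
			    enum dma_transfer_direction dir,
			    unsigned long flags)
{
	struct foo_chan *fchan = to_foo_chan(chan);	/* hypothetical */
	struct foo_desc *fdesc;
	size_t i;

	fdesc = foo_alloc_desc(fchan, nents);		/* hypothetical */
	if (!fdesc)
		return NULL;

	/* Each dma_vec keeps its own length, unlike a fixed period size. */
	for (i = 0; i < nents; i++)
		foo_fill_segment(fdesc, i, vecs[i].addr, vecs[i].len, dir);

	return vchan_tx_prep(&fchan->vchan, &fdesc->vdesc, flags);
}

/* Registered in probe() next to the other prep callbacks: */
dma_dev->device_prep_peripheral_dma_vec = foo_prep_peripheral_dma_vec;
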
--
~Vinod