Message-ID: <20231221161258.056f5ce4@jic23-huawei>
Date: Thu, 21 Dec 2023 16:12:58 +0000
From: Jonathan Cameron <jic23@...nel.org>
To: Paul Cercueil <paul@...pouillou.net>
Cc: Lars-Peter Clausen <lars@...afoo.de>, Sumit Semwal
<sumit.semwal@...aro.org>, Christian König
<christian.koenig@....com>, Vinod Koul <vkoul@...nel.org>, Jonathan Corbet
<corbet@....net>, linux-doc@...r.kernel.org, linux-kernel@...r.kernel.org,
dmaengine@...r.kernel.org, linux-iio@...r.kernel.org,
linux-media@...r.kernel.org, dri-devel@...ts.freedesktop.org,
linaro-mm-sig@...ts.linaro.org, Nuno Sá
<noname.nuno@...il.com>, Michael Hennerich <Michael.Hennerich@...log.com>
Subject: Re: [PATCH v5 7/8] iio: buffer-dmaengine: Support new DMABUF based
userspace API

On Tue, 19 Dec 2023 18:50:08 +0100
Paul Cercueil <paul@...pouillou.net> wrote:
> Use the functions provided by the buffer-dma core to implement the
> DMABUF userspace API in the buffer-dmaengine IIO buffer implementation.
>
> Since we want to be able to transfer an arbitrary number of bytes and
> not necessarily the full DMABUF, the associated scatterlist is converted
> to an array of DMA addresses + lengths, which is then passed to
> dmaengine_prep_slave_dma_array().
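
(Per the v5 changelog below, the helper is now dmaengine_prep_slave_dma_vec().
For context, it takes a flat array of address/length pairs rather than a
scatterlist. A minimal sketch of the shapes involved, inferred from the call
sites in this patch -- the authoritative definitions are introduced earlier in
the series, and exact parameter types may differ:

	/*
	 * Sketch inferred from this patch's usage; see the earlier
	 * patches in this series for the real definitions.
	 */
	struct dma_vec {
		dma_addr_t addr;	/* bus address of one contiguous segment */
		size_t len;		/* length of that segment in bytes */
	};

	struct dma_async_tx_descriptor *
	dmaengine_prep_slave_dma_vec(struct dma_chan *chan,
				     const struct dma_vec *vecs,
				     size_t nents,
				     enum dma_transfer_direction dir,
				     unsigned long flags);
)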
>
> Signed-off-by: Paul Cercueil <paul@...pouillou.net>
One question inline. Otherwise looks fine to me.
J
>
> ---
> v3: Use the new dmaengine_prep_slave_dma_array(), and adapt the code to
> work with the new functions introduced in industrialio-buffer-dma.c.
>
> v5: - Use the new dmaengine_prep_slave_dma_vec().
> - Restrict to input buffers, since output buffers are not yet
> supported by IIO buffers.
> ---
> .../buffer/industrialio-buffer-dmaengine.c | 52 ++++++++++++++++---
> 1 file changed, 46 insertions(+), 6 deletions(-)
>
> diff --git a/drivers/iio/buffer/industrialio-buffer-dmaengine.c b/drivers/iio/buffer/industrialio-buffer-dmaengine.c
> index 5f85ba38e6f6..825d76a24a67 100644
> --- a/drivers/iio/buffer/industrialio-buffer-dmaengine.c
> +++ b/drivers/iio/buffer/industrialio-buffer-dmaengine.c
> @@ -64,15 +64,51 @@ static int iio_dmaengine_buffer_submit_block(struct iio_dma_buffer_queue *queue,
> struct dmaengine_buffer *dmaengine_buffer =
> iio_buffer_to_dmaengine_buffer(&queue->buffer);
> struct dma_async_tx_descriptor *desc;
> + unsigned int i, nents;
> + struct scatterlist *sgl;
> + struct dma_vec *vecs;
> + size_t max_size;
> dma_cookie_t cookie;
> + size_t len_total;
>
> - block->bytes_used = min(block->size, dmaengine_buffer->max_size);
> - block->bytes_used = round_down(block->bytes_used,
> - dmaengine_buffer->align);
> + if (queue->buffer.direction != IIO_BUFFER_DIRECTION_IN) {
> + /* We do not yet support output buffers. */
> + return -EINVAL;
> + }
>
> - desc = dmaengine_prep_slave_single(dmaengine_buffer->chan,
> - block->phys_addr, block->bytes_used, DMA_DEV_TO_MEM,
> - DMA_PREP_INTERRUPT);
> + if (block->sg_table) {
> + sgl = block->sg_table->sgl;
> + nents = sg_nents_for_len(sgl, block->bytes_used);
Are we guaranteed the length in the sglist is enough? If not,
sg_nents_for_len() can return a negative error code, which isn't
checked here.
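
Something along these lines would propagate the failure (untested sketch;
note that nents is currently declared unsigned int, so a signed
intermediate is needed to see the error):

	int ret;

	/* sg_nents_for_len() returns -EINVAL if the sglist is shorter
	 * than the requested length. */
	ret = sg_nents_for_len(sgl, block->bytes_used);
	if (ret < 0)
		return ret;
	nents = ret;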
> +
> + vecs = kmalloc_array(nents, sizeof(*vecs), GFP_KERNEL);
> + if (!vecs)
> + return -ENOMEM;
> +
> + len_total = block->bytes_used;
> +
> + for (i = 0; i < nents; i++) {
> + vecs[i].addr = sg_dma_address(sgl);
> + vecs[i].len = min(sg_dma_len(sgl), len_total);
> + len_total -= vecs[i].len;
> +
> + sgl = sg_next(sgl);
> + }
> +
> + desc = dmaengine_prep_slave_dma_vec(dmaengine_buffer->chan,
> + vecs, nents, DMA_DEV_TO_MEM,
> + DMA_PREP_INTERRUPT);
> + kfree(vecs);
> + } else {
> + max_size = min(block->size, dmaengine_buffer->max_size);
> + max_size = round_down(max_size, dmaengine_buffer->align);
> + block->bytes_used = max_size;
> +
> + desc = dmaengine_prep_slave_single(dmaengine_buffer->chan,
> + block->phys_addr,
> + block->bytes_used,
> + DMA_DEV_TO_MEM,
> + DMA_PREP_INTERRUPT);
> + }
> if (!desc)
> return -ENOMEM;
>