Message-ID: <1f4156e8c6c4da09fc5d72661d1e002ae6ee4f31.camel@gmail.com>
Date: Fri, 25 Oct 2024 15:24:58 +0200
From: Nuno Sá <noname.nuno@...il.com>
To: David Lechner <dlechner@...libre.com>, Mark Brown <broonie@...nel.org>, 
 Jonathan Cameron <jic23@...nel.org>, Rob Herring <robh@...nel.org>,
 Krzysztof Kozlowski <krzk+dt@...nel.org>, Conor Dooley
 <conor+dt@...nel.org>, Nuno Sá <nuno.sa@...log.com>, Uwe
 Kleine-König <ukleinek@...nel.org>
Cc: Michael Hennerich <Michael.Hennerich@...log.com>, Lars-Peter Clausen
	 <lars@...afoo.de>, David Jander <david@...tonic.nl>, Martin Sperl
	 <kernel@...tin.sperl.org>, linux-spi@...r.kernel.org, 
	devicetree@...r.kernel.org, linux-kernel@...r.kernel.org, 
	linux-iio@...r.kernel.org, linux-pwm@...r.kernel.org
Subject: Re: [PATCH RFC v4 11/15] iio: buffer-dmaengine: add
 devm_iio_dmaengine_buffer_setup_ext2()

I still need to take a better look at this, but I do have one thought already :)

On Wed, 2024-10-23 at 15:59 -0500, David Lechner wrote:
> Add a new devm_iio_dmaengine_buffer_setup_ext2() function to handle
> cases where the DMA channel is managed by the caller rather than being
> requested and released by the iio_dmaengine module.
> 
> Signed-off-by: David Lechner <dlechner@...libre.com>
> ---
> 
> v4 changes:
> * This replaces "iio: buffer-dmaengine: generalize requesting DMA channel"
> ---
>  drivers/iio/buffer/industrialio-buffer-dmaengine.c | 107 +++++++++++++++------
>  include/linux/iio/buffer-dmaengine.h               |   5 +
>  2 files changed, 81 insertions(+), 31 deletions(-)
> 
> diff --git a/drivers/iio/buffer/industrialio-buffer-dmaengine.c b/drivers/iio/buffer/industrialio-buffer-dmaengine.c
> index 054af21dfa65..602cb2e147a6 100644
> --- a/drivers/iio/buffer/industrialio-buffer-dmaengine.c
> +++ b/drivers/iio/buffer/industrialio-buffer-dmaengine.c
> @@ -33,6 +33,7 @@ struct dmaengine_buffer {
>  	struct iio_dma_buffer_queue queue;
>  
>  	struct dma_chan *chan;
> +	bool owns_chan;
>  	struct list_head active;
>  
>  	size_t align;
> @@ -216,28 +217,23 @@ static const struct iio_dev_attr *iio_dmaengine_buffer_attrs[] = {
>   * Once done using the buffer iio_dmaengine_buffer_free() should be used to
>   * release it.
>   */
> -static struct iio_buffer *iio_dmaengine_buffer_alloc(struct device *dev,
> -	const char *channel)
> +static struct iio_buffer *iio_dmaengine_buffer_alloc(struct dma_chan *chan,
> +						     bool owns_chan)
>  {
>  	struct dmaengine_buffer *dmaengine_buffer;
>  	unsigned int width, src_width, dest_width;
>  	struct dma_slave_caps caps;
> -	struct dma_chan *chan;
>  	int ret;
>  
>  	dmaengine_buffer = kzalloc(sizeof(*dmaengine_buffer), GFP_KERNEL);
> -	if (!dmaengine_buffer)
> -		return ERR_PTR(-ENOMEM);
> -
> -	chan = dma_request_chan(dev, channel);
> -	if (IS_ERR(chan)) {
> -		ret = PTR_ERR(chan);
> -		goto err_free;
> +	if (!dmaengine_buffer) {
> +		ret = -ENOMEM;
> +		goto err_release;
>  	}
>  
>  	ret = dma_get_slave_caps(chan, &caps);
>  	if (ret < 0)
> -		goto err_release;
> +		goto err_free;
>  
>  	/* Needs to be aligned to the maximum of the minimums */
>  	if (caps.src_addr_widths)
> @@ -252,6 +248,7 @@ static struct iio_buffer *iio_dmaengine_buffer_alloc(struct device *dev,
>  
>  	INIT_LIST_HEAD(&dmaengine_buffer->active);
>  	dmaengine_buffer->chan = chan;
> +	dmaengine_buffer->owns_chan = owns_chan;
>  	dmaengine_buffer->align = width;
>  	dmaengine_buffer->max_size = dma_get_max_seg_size(chan->device->dev);
>  
> @@ -263,10 +260,12 @@ static struct iio_buffer *iio_dmaengine_buffer_alloc(struct device *dev,
>  
>  	return &dmaengine_buffer->queue.buffer;
>  
> -err_release:
> -	dma_release_channel(chan);
>  err_free:
>  	kfree(dmaengine_buffer);
> +err_release:
> +	if (owns_chan)
> +		dma_release_channel(chan);
> +
>  	return ERR_PTR(ret);
>  }
>  
> @@ -282,12 +281,38 @@ void iio_dmaengine_buffer_free(struct iio_buffer *buffer)
>  		iio_buffer_to_dmaengine_buffer(buffer);
>  
>  	iio_dma_buffer_exit(&dmaengine_buffer->queue);
> -	dma_release_channel(dmaengine_buffer->chan);
> -
>  	iio_buffer_put(buffer);
> +
> +	if (dmaengine_buffer->owns_chan)
> +		dma_release_channel(dmaengine_buffer->chan);

Not sure if I agree much with this owns_chan flag. The way I see it, we should always
hand over the lifetime of the DMA channel to the IIO DMA framework. Note that the
device you pass in, both for requesting the spi_offload channel and for setting up the
DMA buffer, is the same (and I suspect it always will be), so I would not go to the
trouble. And with that assumption we could simplify the SPI implementation a bit more.
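
Roughly what I have in mind (just an untested sketch against the quoted hunks, not the
actual v4 patch): if ownership always transfers to the IIO DMA framework, the flag goes
away and the free path stays unconditional:

void iio_dmaengine_buffer_free(struct iio_buffer *buffer)
{
	struct dmaengine_buffer *dmaengine_buffer =
		iio_buffer_to_dmaengine_buffer(buffer);

	iio_dma_buffer_exit(&dmaengine_buffer->queue);
	iio_buffer_put(buffer);

	/* the channel is always ours once the caller handed it over */
	dma_release_channel(dmaengine_buffer->chan);
}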

Not really related, but I also suspect the current implementation could be
problematic. Basically, I suspect the lifetime of the DMA channel should be tied to
the lifetime of the iio_buffer. IOW, we should only release the channel in
iio_dmaengine_buffer_release() - in which case the current implementation with the
spi_offload would also be buggy.

But bah, the second point is completely theoretical and likely very hard to reproduce
in real life (if reproducible at all - for now it's only a suspicion).
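
Still, to make it concrete, a rough and untested sketch (assuming the existing
.release callback, iio_dmaengine_buffer_release(), otherwise keeps its current body):
the channel would only be dropped once the last reference to the buffer is gone,
rather than in iio_dmaengine_buffer_free():

static void iio_dmaengine_buffer_release(struct iio_buffer *buf)
{
	struct dmaengine_buffer *dmaengine_buffer =
		iio_buffer_to_dmaengine_buffer(buf);

	iio_dma_buffer_release(&dmaengine_buffer->queue);
	/* last reference dropped, nothing can still be using the channel */
	dma_release_channel(dmaengine_buffer->chan);
	kfree(dmaengine_buffer);
}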

- Nuno Sá 


