Message-ID: <dc52cda0-47d9-4cbf-a68e-0af304edc32e@gmail.com>
Date: Fri, 25 Oct 2024 20:40:42 +0200 (GMT+02:00)
From: Nuno Sá <noname.nuno@...il.com>
To: David Lechner <dlechner@...libre.com>
Cc: Mark Brown <broonie@...nel.org>, Jonathan Cameron <jic23@...nel.org>,
	Rob Herring <robh@...nel.org>,
	Krzysztof Kozlowski <krzk+dt@...nel.org>,
	Conor Dooley <conor+dt@...nel.org>,
	Nuno Sá <nuno.sa@...log.com>,
	Uwe Kleine-König <ukleinek@...nel.org>,
	Michael Hennerich <Michael.Hennerich@...log.com>,
	Lars-Peter Clausen <lars@...afoo.de>,
	David Jander <david@...tonic.nl>,
	Martin Sperl <kernel@...tin.sperl.org>, linux-spi@...r.kernel.org,
	devicetree@...r.kernel.org, linux-kernel@...r.kernel.org,
	linux-iio@...r.kernel.org, linux-pwm@...r.kernel.org
Subject: Re: [PATCH RFC v4 11/15] iio: buffer-dmaengine: add
 devm_iio_dmaengine_buffer_setup_ext2()

Oct 25, 2024 18:42:02 David Lechner <dlechner@...libre.com>:

> On 10/25/24 8:24 AM, Nuno Sá wrote:
>> I still need to take a closer look at this, but I do have one thought
>> already :)
>>
>> On Wed, 2024-10-23 at 15:59 -0500, David Lechner wrote:
>>> Add a new devm_iio_dmaengine_buffer_setup_ext2() function to handle
>>> cases where the DMA channel is managed by the caller rather than 
>>> being
>>> requested and released by the iio_dmaengine module.
>>>
>>> Signed-off-by: David Lechner <dlechner@...libre.com>
>>> ---
>>>
>>> v4 changes:
>>> * This replaces "iio: buffer-dmaengine: generalize requesting DMA 
>>> channel"
>>> ---
>
> ...
>
>>> @@ -282,12 +281,38 @@ void iio_dmaengine_buffer_free(struct 
>>> iio_buffer *buffer)
>>>         iio_buffer_to_dmaengine_buffer(buffer);
>>>  
>>>     iio_dma_buffer_exit(&dmaengine_buffer->queue);
>>> -   dma_release_channel(dmaengine_buffer->chan);
>>> -
>>>     iio_buffer_put(buffer);
>>> +
>>> +   if (dmaengine_buffer->owns_chan)
>>> +       dma_release_channel(dmaengine_buffer->chan);
>>
>> Not sure if I agree much with this owns_chan flag. The way I see it,
>> we should always hand over the lifetime of the DMA channel to the IIO
>> DMA framework. Note that the device you pass in, both for requesting
>> the spi_offload channel and for setting up the DMA buffer, is the
>> same (and I suspect it always will be), so I would not go to the
>> trouble. And with this assumption we could simplify the spi
>> implementation a bit more.
>
> I tried something like this in v3 but Jonathan didn't seem to like it.
>
> https://lore.kernel.org/all/20240727144303.4a8604cb@jic23-huawei/
>
>>
>> Not really related, but I suspect the current implementation could
>> be problematic anyway. Basically, I suspect the lifetime of the DMA
>> channel should be tied to the lifetime of the iio_buffer. IOW, we
>> should only release the channel in iio_dmaengine_buffer_release() -
>> in which case the current implementation with the spi_offload would
>> also be buggy.
>
> The buffer can outlive the iio device driver that created the buffer?

Yes, it can, as can the IIO device itself. If a userspace app has an open 
FD for the buffer chardev, we hold a reference that is only released when 
the FD is closed (and so can outlive the device being bound to its 
driver). That is why we nullify indio_dev->info and check for it in the 
read() and write() fops.
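To illustrate the pattern I mean, here is a rough userspace model (all
names are made up, this is not the actual IIO code): the core nullifies
the info pointer at unbind, and the chardev fops bail out with -ENODEV
when they see it:

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

/* Hypothetical stand-ins for struct iio_dev and its driver callbacks. */
struct fake_info { int dummy; };
struct fake_iio_dev { struct fake_info *info; };

/* Models a buffer read() fop: refuse I/O once the driver is gone. */
static int fake_buffer_read(struct fake_iio_dev *indio_dev)
{
	if (!indio_dev->info)
		return -ENODEV;	/* device unbound, but FD still open */
	return 0;		/* would perform the actual read here */
}

/* Models device unbind: driver data is torn down, info nullified. */
static void fake_unbind(struct fake_iio_dev *indio_dev)
{
	indio_dev->info = NULL;
}
```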

FWIW, I raised concerns about this in the past (as we don't have any lock 
in those paths) but Jonathan rightfully wanted to see a real race. I was 
too lazy to try to reproduce one, but I'm still fairly sure we have at 
least theoretical races in those paths. One of them could be (I think) 
concurrently hitting a DMA submit block while the device is being 
unbound. In that case the DMA chan would already be released and we could 
still try to initiate a transfer. I did not check whether that would 
crash or something, but it should still not happen.
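A rough userspace model of what I'm arguing for (all names hypothetical):
release the channel only when the buffer's last reference drops, whether
that reference belongs to the driver or to an open FD, rather than at
device unbind:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical model: the DMA channel is released in the buffer's
 * release path, i.e. when the last reference (driver or open chardev
 * FD) drops, instead of unconditionally at device unbind. */
struct fake_buffer {
	int refcount;
	bool chan_released;
};

static void fake_buffer_get(struct fake_buffer *b)
{
	b->refcount++;
}

static void fake_buffer_put(struct fake_buffer *b)
{
	if (--b->refcount == 0)
		b->chan_released = true; /* dma_release_channel() here */
}
```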

- Nuno Sá
