Message-ID: <4d49ab26-3ee7-4409-aff6-c96ec1754e88@amd.com>
Date: Tue, 10 Feb 2026 10:26:04 +0100
From: Christian König <christian.koenig@....com>
To: Alice Ryhl <aliceryhl@...gle.com>
Cc: Boris Brezillon <boris.brezillon@...labora.com>,
 Philipp Stanner <phasta@...lbox.org>, phasta@...nel.org,
 Danilo Krummrich <dakr@...nel.org>, David Airlie <airlied@...il.com>,
 Simona Vetter <simona@...ll.ch>, Gary Guo <gary@...yguo.net>,
 Benno Lossin <lossin@...nel.org>,
 Daniel Almeida <daniel.almeida@...labora.com>,
 Joel Fernandes <joelagnelf@...dia.com>, linux-kernel@...r.kernel.org,
 dri-devel@...ts.freedesktop.org, rust-for-linux@...r.kernel.org
Subject: Re: [RFC PATCH 2/4] rust: sync: Add dma_fence abstractions

On 2/10/26 09:38, Alice Ryhl wrote:
> On Tue, Feb 10, 2026 at 09:16:34AM +0100, Christian König wrote:
>> On 2/9/26 15:58, Boris Brezillon wrote:
>>> On Mon, 09 Feb 2026 09:19:46 +0100
>>> Philipp Stanner <phasta@...lbox.org> wrote:
>>>
>>>> On Fri, 2026-02-06 at 11:23 +0100, Danilo Krummrich wrote:
>>>>> On Thu Feb 5, 2026 at 9:57 AM CET, Boris Brezillon wrote:  
>>>>>> On Tue,  3 Feb 2026 09:14:01 +0100
>>>>>> Philipp Stanner <phasta@...nel.org> wrote:
>>>>>> Unfortunately, I don't know how to translate that into rust, but we
>>>>>> need a way to check if any code path does a DmaFence.signal(),
>>>>>> go back to the entry point (for a WorkItem, that would be
>>>>>> WorkItem::run() for instance), and make it a DmaFenceSignallingPath.
>>>>>> Not only that, but we need to know all the deps that make it so
>>>>>> this path can be called (if I take the WorkItem example, that would
>>>>>> be the path that leads to the WorkItem being scheduled).  
>>>>>
>>>>> I think we need a guard object for this that is not Send, just like for any
>>>>> other lock.
>>>>>
>>>>> Internally, those markers rely on lockdep, i.e. they just acquire and release a
>>>>> "fake" lock.  
>>>>
>>>> The guard object would be created through fence.begin_signalling(), wouldn't it?
>>>
>>> It shouldn't be a (&self)-method, because at the start of a DMA
>>> signaling path, you don't necessarily know which fence you're going to
>>> signal (you might actually signal several of them).
>>>
>>>> And when it drops you call dma_fence_end_signalling()?
>>>
>>> Yep, dma_fence_end_signalling() should be called when the guard is
>>> dropped.
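
A minimal sketch of what such a guard could look like, assuming the
existing C helpers dma_fence_begin_signalling()/dma_fence_end_signalling()
are exposed through bindings; all the Rust-side names here are made up:

    pub struct DmaFenceSignallingGuard {
        // Cookie returned by dma_fence_begin_signalling().
        cookie: bool,
        // Raw-pointer marker makes the guard !Send, so the critical
        // section cannot migrate to another thread behind lockdep's back.
        _not_send: core::marker::PhantomData<*mut ()>,
    }

    impl DmaFenceSignallingGuard {
        pub fn new() -> Self {
            // SAFETY: only acquires lockdep's "fake" signalling lock.
            let cookie = unsafe { bindings::dma_fence_begin_signalling() };
            Self { cookie, _not_send: core::marker::PhantomData }
        }
    }

    impl Drop for DmaFenceSignallingGuard {
        fn drop(&mut self) {
            // SAFETY: pairs with the begin_signalling() call in new().
            unsafe { bindings::dma_fence_end_signalling(self.cookie) };
        }
    }

Note that new() is deliberately a free-standing constructor rather than a
method on a fence, matching the point above that you don't yet know which
fence(s) you will signal.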
>>>
>>>>
>>>> How would that ensure that the driver actually marks the signalling region correctly?
>>>
>>> Nothing, and that's a problem we have in C: you have no way of telling
>>> which code section is going to be a DMA-signaling path. I can't think
>>> of any way to make that safer in rust, unfortunately. The best I can
>>> think of would be to
>>>
>>> - Have a special DmaFenceSignalWorkItem (wrapping a WorkItem with extra
>>>   constraints) that's designed for DMA-fence signaling, and that takes
>>>   the DmaSignaling guard around the ::run() call.
>>> - We would then need to ensure that any code path scheduling this work
>>>   item is also in a DMA-signaling path by taking a ref to the
>>>   DmaSignalingGuard. This of course doesn't guarantee that the section
>>>   is wide enough to prevent any non-authorized operations in any path
>>>   leading to this WorkItem scheduling, but it would at least force the
>>>   caller to consider the problem.
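
A rough sketch of that first bullet, with a deliberately simplified
WorkItem trait (the real workqueue abstraction has a different signature;
every name here is hypothetical):

    // Simplified stand-in for the kernel workqueue trait.
    pub trait WorkItem {
        fn run(self);
    }

    // Wrapper that forces the whole run() into a signalling section.
    pub struct DmaFenceSignalWorkItem<T: WorkItem> {
        inner: T,
    }

    impl<T: WorkItem> WorkItem for DmaFenceSignalWorkItem<T> {
        fn run(self) {
            // Enter the DMA-fence signalling critical section for the
            // entire execution of the wrapped work item.
            let _guard = DmaFenceSignallingGuard::new();
            self.inner.run();
            // _guard drops here -> dma_fence_end_signalling().
        }
    }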
>>
>> On the C side I have a patch set which does something very similar.
>>
>> It's basically a WARN_ON_ONCE() which triggers as soon as you try to
>> signal a DMA fence from an IOCTL or, more specifically, from process context.
>>
>> Signaling a DMA fence from interrupt context, a work item or a kernel
>> thread is still allowed; the remaining hole is that you can schedule
>> a work item from process context as well.
>>
>> The major problem with that patch set is that we have tons of very
>> hacky signaling paths in drivers already because we initially didn't
>> know how much trouble getting this wrong causes.
>>
>> I'm strongly in favor of getting this right for the rust side from the
>> beginning and enforcing strict rules for any code trying to
>> implement a DMA fence.
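
Transposed to the Rust side, that check could sit right in the signalling
entry point. A sketch only: in_plain_process_context() is a made-up
predicate that would need real bindings for the context checks, and
checked_signal() is not part of the patch set:

    // Hypothetical predicate: true in plain process context (e.g. an
    // IOCTL handler), false in hard IRQ context, work items and
    // kernel threads. Shown only to illustrate the rule.
    fn in_plain_process_context() -> bool {
        todo!("needs bindings for the context checks")
    }

    fn checked_signal(fence: &DmaFence) {
        // Rust mirror of the C-side WARN_ON_ONCE().
        if in_plain_process_context() {
            pr_warn!("dma_fence signalled from process context\n");
        }
        fence.signal();
    }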
> 
> Hmm. Could you say a bit more about what the rules are? I just re-read
> the comments in dma-fence.c, but I have some questions.

Oh, that is tricky; we have tried to explain and document it numerous times.

For a good start, see here: https://kernel.org/doc/html/v5.9/driver-api/dma-buf.html#indefinite-dma-fences

There was also a really good talk at LPC 2021 by Faith on that topic.

Unfortunately, the understanding usually only comes when things start to fail at the customer's site and people realize that their design approach doesn't work; by then it's already UAPI...

> First, how does the signalling annotation work when the signalling path
> crosses thread boundaries?

It doesn't. The annotation Sima added uses lockdep in a rather unusual way, and we would need to extend lockdep a bit for that.

> For example, let's say I call an ioctl to
> perform an async VM_BIND, then the dma fence signalling critical path
> starts in the ioctl, but then it moves into a workqueue and finishes
> there, right?

Perfectly correct, yes.

And in all those code paths you can't allocate memory, take locks under which memory is allocated, or do anything else that in any way waits for the fence to signal.
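
To make that concrete, here is the kind of pattern the annotation exists
to catch (guard type as in the sketch above; KBox/GFP_KERNEL as in
current kernel Rust, the function itself is made up):

    fn vm_bind_worker() -> Result {
        let _guard = DmaFenceSignallingGuard::new();

        // Not allowed: a GFP_KERNEL allocation may block on memory
        // reclaim, and reclaim may wait on the very fence this path
        // has to signal -> deadlock. The lockdep annotation flags it.
        let _buf = KBox::new([0u8; 64], GFP_KERNEL)?;

        // Equally not allowed: taking any lock under which someone
        // else allocates memory or waits for fences.
        Ok(())
    }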

> Second, it looks like we have the same challenge as with irq locks where
> you must properly nest dma_fence_begin_signalling() regions, and can't
> e.g. do this:
> 
> c1 = dma_fence_begin_signalling()
> c2 = dma_fence_begin_signalling()
> dma_fence_end_signalling(c1)
> dma_fence_end_signalling(c2)

Oh, good point as well! Lockdep will indeed start to scream here, even though that pattern is perfectly valid.
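
For what it's worth, scope-based guards on the Rust side would dodge that
particular foot-gun, since drops happen in LIFO order (using the guard
sketch from above):

    {
        let c1 = DmaFenceSignallingGuard::new();
        let c2 = DmaFenceSignallingGuard::new();
        // End of scope: c2 drops before c1, so the end_signalling()
        // calls always nest properly and lockdep stays quiet.
    }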

Regards,
Christian.

> 
> Alice

