Message-ID: <20260206120442.51c5ca75@fedora>
Date: Fri, 6 Feb 2026 12:04:42 +0100
From: Boris Brezillon <boris.brezillon@...labora.com>
To: Philipp Stanner <phasta@...lbox.org>
Cc: phasta@...nel.org, Gary Guo <gary@...yguo.net>, David Airlie
 <airlied@...il.com>, Simona Vetter <simona@...ll.ch>, Danilo Krummrich
 <dakr@...nel.org>, Alice Ryhl <aliceryhl@...gle.com>, Benno Lossin
 <lossin@...nel.org>, Christian König
 <christian.koenig@....com>, Daniel Almeida <daniel.almeida@...labora.com>,
 Joel Fernandes <joelagnelf@...dia.com>, linux-kernel@...r.kernel.org,
 dri-devel@...ts.freedesktop.org, rust-for-linux@...r.kernel.org
Subject: Re: [RFC PATCH 2/4] rust: sync: Add dma_fence abstractions

On Fri, 06 Feb 2026 10:32:38 +0100
Philipp Stanner <phasta@...lbox.org> wrote:

> On Thu, 2026-02-05 at 13:16 +0000, Gary Guo wrote:
> > On Thu Feb 5, 2026 at 10:16 AM GMT, Boris Brezillon wrote:  
> > > On Tue,  3 Feb 2026 09:14:01 +0100
> > > Philipp Stanner <phasta@...nel.org> wrote:
> > >   
> > > >   
> 
> […]
> 
> > > > +#[pin_data]
> > > > +pub struct DmaFence<T> {
> > > > +    /// The actual dma_fence passed to C.
> > > > +    #[pin]
> > > > +    inner: Opaque<bindings::dma_fence>,
> > > > +    /// User data.
> > > > +    #[pin]
> > > > +    data: T,  
> > > 
> > > A DmaFence is a cross-device synchronization mechanism that can (and
> > > will)
> > >   
> 
> I'm not questioning the truth behind this statement; they are designed
> to do that. But is that actually being done currently? I recently
> found that the get_driver_name() callback, which is intended to inform
> the consumer of a fence about who actually issued it, is only ever
> used by i915.
> 
> Who actually uses that feature? Who needs fences from another driver?

Display controller (AKA KMS) drivers waiting on fences emitted by a GPU
driver, for instance.

> 
> Just out of curiosity.
> 
> 
> > >  cross the driver boundary (one driver can wait on a fence emitted
> > > by a different driver). As such, I don't think embedding a generic
> > > T in the DmaFence and treating it as the object being passed around
> > > is going to work, because how can one driver know the T chosen by
> > > the driver that created the fence? If you want to have some fence
> > > emitter data attached to the DmaFence allocation, you'll need two
> > > kinds of objects:
> > > 
> > > - one that's type-agnostic and on which you can do the callback
> > >   registration/unregistration, signalling checks, and generally all
> > >   type-agnostic operations. That's basically just a wrapper around a
> > >   bindings::dma_fence implementing AlwaysRefCounted.
> > > - one that has the extra data and fctx, with a way to transmute from
> > >   a generic fence to an implementer-specific one in case the driver
> > >   wants to do something special when waiting on its own fences (in C
> > >   that check is done with the fence ops; I don't know how that
> > >   translates to Rust)  
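To make that second point a bit more concrete, here's a rough
standalone sketch of the checked transmute; RawFence, RawFenceOps and
MyFence are made-up stand-ins for the real bindings, and none of this
is tested:

// Stand-ins for bindings::dma_fence and bindings::dma_fence_ops.
struct RawFenceOps { name: &'static str }
struct RawFence { ops: *const RawFenceOps }

// The ops table identifies the emitter, exactly like in C.
static MY_DRIVER_OPS: RawFenceOps = RawFenceOps { name: "mydrv" };

/// Type-agnostic, refcounted handle that any consumer can use.
struct Fence { raw: *mut RawFence }

/// Driver-specific fence: the raw fence plus emitter data.
#[repr(C)]
struct MyFence {
    raw: RawFence, // first field, so the cast below is container_of
    seqno: u64,
}

impl Fence {
    /// Only succeeds for fences this driver emitted, mirroring the
    /// `fence->ops == &my_driver_ops` check C drivers do.
    fn as_my_fence(&self) -> Option<&MyFence> {
        unsafe {
            if (*self.raw).ops == &MY_DRIVER_OPS as *const RawFenceOps {
                Some(&*(self.raw as *const MyFence))
            } else {
                None
            }
        }
    }
}

The ops comparison is what makes the downcast sound: only fences that
were created with MY_DRIVER_OPS can ever come back as a MyFence.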
> > 
> > If `data` is moved to the end of the struct and `DmaFence<T>` is
> > changed to `DmaFence<T: ?Sized>`, you would also gain the ability to
> > coerce `DmaFence<T>` to `DmaFence<dyn Trait>`, e.g. `DmaFence<dyn Any>`.
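If I read that right, the coercion would look something like this in
plain userspace Rust (standalone sketch, untested; the u32 merely
stands in for the wrapped dma_fence):

use std::any::Any;

struct DmaFence<T: ?Sized> {
    inner: u32, // stand-in for Opaque<bindings::dma_fence>
    data: T,    // last field, so DmaFence<T> unsizes to DmaFence<dyn Any>
}

fn main() {
    let f: Box<DmaFence<String>> =
        Box::new(DmaFence { inner: 0, data: String::from("driver data") });
    // Unsized coercion: the concrete T is erased behind dyn Any.
    let erased: Box<DmaFence<dyn Any>> = f;
    // Only code that knows the concrete type can get the data back.
    if let Some(s) = erased.data.downcast_ref::<String>() {
        println!("recovered: {s}");
    }
}

That doesn't solve the problem above, though: only the emitter knows
the concrete T, so every other driver is still stuck with dyn Any.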
> 
> 
> I think we should take a step back here and question the general
> design.
> 
> I only included data: T because it was among the early feedback that
> this is how you do it in Rust.
> 
> I was never convinced that it's a good idea. Jobqueue doesn't need the
> 'data' field. Can anyone think of a user that would need it?
> 
> What kind of data would be in there? It seems a driver would store its
> equivalent of C's
> 
> struct my_fence {
>    struct dma_fence f;
>    /* other driver data */
> };
> 
> which is then accessed in C with container_of.
> 
> But that data is only ever needed by that very driver.
> 
> 
> My main point here is:
> dma_fences are a synchronization primitive very similar to
> completions: they signal that something is done and execute every
> registrant's callbacks.
> 
> They are *not* a data transfer mechanism. It seems very wrong,
> design-wise, to transfer generic data T from one driver to another.
> That's not a fence's purpose; another primitive should be used for
> that.
> 
> If another driver could touch / consume / see / use the emitter's
> data: T, that would be a gross departure from the original dma_fence
> design. It would be akin to doing a container_of to consume foreign
> driver data.
> 
> Like Xe doing a
> 
> struct nouveau_fence *f = container_of(generic_fence, …);
> 
> Why would that ever be done? Seems totally broken.
> 
> So I strongly think that we either want to drop data: T, or we should
> think about ways to hide it from other drivers.
> 
> I currently have no idea how that could be addressed in Rust, though.

So, as Danilo explained in his reply, there are two kinds of users:

1. those that want to wait on fences (that'd be the JobQueue, for
   instance)
2. those that are emitting fences (AKA those implementing the fence_ops
   in C)

And each of them should be given different access to the underlying
dma_fence, hence the proposal to have different objects to back
those concepts.
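
Shape-wise, I'm imagining something along these lines (made-up names,
no-op bodies, untested):

/// What fence consumers (the JobQueue, a KMS driver, ...) get:
/// type-agnostic and refcounted, with no access to emitter data.
struct Fence(/* refcounted pointer to bindings::dma_fence */);

impl Fence {
    fn is_signalled(&self) -> bool {
        false // would call dma_fence_is_signaled()
    }
    fn register_callback(&self, _cb: impl FnOnce() + Send) {
        // would call dma_fence_add_callback()
    }
}

/// What the emitter keeps: it alone can signal the fence, and it
/// alone knows the concrete data behind it.
struct EmitterFence<T> {
    fence: Fence,
    data: T,
}

impl<T> EmitterFence<T> {
    fn signal(&self) {
        // would call dma_fence_signal()
    }
    fn data(&self) -> &T {
        &self.data
    }
    /// The only thing other drivers ever get to see.
    fn as_fence(&self) -> &Fence {
        &self.fence
    }
}

Since other drivers only ever hold a Fence, the emitter's data stays
invisible to them, which should also address Philipp's concern about
consuming foreign driver data.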
