Message-ID: <62b82ffdd40d568d822bda8cdea83cd030851f68.camel@mailbox.org>
Date: Fri, 06 Feb 2026 10:32:38 +0100
From: Philipp Stanner <phasta@...lbox.org>
To: Gary Guo <gary@...yguo.net>, Boris Brezillon
 <boris.brezillon@...labora.com>,  Philipp Stanner <phasta@...nel.org>
Cc: David Airlie <airlied@...il.com>, Simona Vetter <simona@...ll.ch>, 
 Danilo Krummrich <dakr@...nel.org>, Alice Ryhl <aliceryhl@...gle.com>,
 Benno Lossin <lossin@...nel.org>, Christian König
 <christian.koenig@....com>, Daniel Almeida <daniel.almeida@...labora.com>, 
 Joel Fernandes <joelagnelf@...dia.com>, linux-kernel@...r.kernel.org,
 dri-devel@...ts.freedesktop.org,  rust-for-linux@...r.kernel.org
Subject: Re: [RFC PATCH 2/4] rust: sync: Add dma_fence abstractions

On Thu, 2026-02-05 at 13:16 +0000, Gary Guo wrote:
> On Thu Feb 5, 2026 at 10:16 AM GMT, Boris Brezillon wrote:
> > On Tue,  3 Feb 2026 09:14:01 +0100
> > Philipp Stanner <phasta@...nel.org> wrote:
> > 
> > > 

[…]

> > > +#[pin_data]
> > > +pub struct DmaFence<T> {
> > > +    /// The actual dma_fence passed to C.
> > > +    #[pin]
> > > +    inner: Opaque<bindings::dma_fence>,
> > > +    /// User data.
> > > +    #[pin]
> > > +    data: T,
> > 
> > A DmaFence is a cross-device synchronization mechanism that can (and
> > will)
> > 

I'm not questioning the truth of this statement; fences are designed
to do that. But is that actually being done currently? I recently
found that the get_driver_name() callback, which is intended to inform
the consumer of a fence about who actually issued it, is only ever
used by i915.

Who actually uses that feature? Who needs fences from another driver?

Just out of curiosity.


> >  cross the driver boundary (one driver can wait on a fence emitted
> > by a different driver). As such, I don't think embedding a generic T in
> > the DmaFence and considering it's the object being passed around is
> > going to work, because, how can one driver know the T chosen by the
> > driver that created the fence? If you want to have some fence emitter
> > data attached to the DmaFence allocation, you'll need two kinds of
> > objects:
> > 
> > - one that's type agnostic and on which you can do the callback
> >   registration/unregistration, signalling checks, and generally all
> >   type-agnostic operations. That's basically just a wrapper around a
> >   bindings::dma_fence implementing AlwaysRefCounted.
> > - one that has the extra data and fctx, with a way to transmute from a
> >   generic fence to an implementer-specific one in case the driver wants
> >   to do something special when waiting on its own fences (check done
> >   with the fence ops in C, I don't know how that translates in rust)
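
For illustration only (not part of the series): roughly sketched, that
two-object split could look like the following. RawDmaFence is a made-up
name, DmaFenceCtx refers to the type from this patch, and it assumes that
dma_fence_get()/dma_fence_put() are reachable from Rust via
bindings/helpers, which the series would need anyway.

use core::ptr::NonNull;
use kernel::bindings;
use kernel::prelude::*;
use kernel::sync::Arc;
use kernel::types::{AlwaysRefCounted, Opaque};

/// Type-agnostic handle: the only thing that crosses the driver boundary.
/// It is nothing but a refcounted wrapper around the C struct dma_fence.
#[repr(transparent)]
pub struct RawDmaFence(Opaque<bindings::dma_fence>);

// SAFETY: dma_fence is refcounted in C via dma_fence_get()/dma_fence_put(),
// so a reference obtained through inc_ref() stays valid until dec_ref().
unsafe impl AlwaysRefCounted for RawDmaFence {
    fn inc_ref(&self) {
        // SAFETY: `self` points to a live fence for the duration of the call.
        unsafe { bindings::dma_fence_get(self.0.get()) };
    }

    unsafe fn dec_ref(obj: NonNull<Self>) {
        // SAFETY: the caller owns the reference being released, and the
        // cast is fine because RawDmaFence is #[repr(transparent)].
        unsafe { bindings::dma_fence_put(obj.cast().as_ptr()) };
    }
}

/// Emitter-side object: the fence plus the creating driver's private data
/// and context. Only the driver that created the fence ever names `T`.
#[pin_data]
pub struct DmaFence<T> {
    #[pin]
    raw: RawDmaFence,
    #[pin]
    data: T,
    fctx: Arc<DmaFenceCtx>,
}

impl<T> DmaFence<T> {
    /// Recover the typed fence from the type-agnostic handle; morally the
    /// same operation as C's container_of().
    ///
    /// # Safety
    /// The caller must have verified (e.g. by comparing the fence ops
    /// pointer, as C drivers do today) that `raw` really is embedded in a
    /// `DmaFence<T>` created by this driver.
    pub unsafe fn from_raw(raw: &RawDmaFence) -> &Self {
        let offset = core::mem::offset_of!(Self, raw);
        // SAFETY: per the safety requirements above, `raw` is the `raw`
        // field of a live `DmaFence<T>`, so stepping back by its offset
        // yields a pointer to the containing struct.
        unsafe { &*(raw as *const RawDmaFence).byte_sub(offset).cast::<Self>() }
    }
}

Only RawDmaFence would ever be handed to other drivers; DmaFence<T> stays
an implementation detail of its emitter.
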
> 
> If `data` is moved to the end of struct and `DmaFence<T>` changed to
> `DmaFence<T: ?Sized>`, you would also gain the ability to coerce `DmaFence<T>`
> to `DmaFence<dyn Trait>`, e.g. `DmaFence<dyn Any>`.
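
As a plain userspace illustration of that mechanism (Fence is just a
stand-in name here, not the patch's type): once the generic field is last
and the parameter is ?Sized, the usual unsized coercion through Arc
applies, and only code that can name the concrete type gets the payload
back.

use std::any::Any;
use std::sync::Arc;

struct Fence<T: ?Sized> {
    seqno: u64,
    // The ?Sized field has to come last for unsized coercion to apply.
    data: T,
}

fn main() {
    let concrete: Arc<Fence<String>> = Arc::new(Fence {
        seqno: 1,
        data: String::from("driver-private"),
    });

    // Coerce Arc<Fence<String>> into Arc<Fence<dyn Any + Send + Sync>>:
    // the payload type is erased for everybody else.
    let erased: Arc<Fence<dyn Any + Send + Sync>> = concrete;

    // Sized fields stay accessible; the payload only comes back through a
    // checked downcast, i.e. to code that knows the concrete type.
    assert_eq!(erased.seqno, 1);
    assert_eq!(
        erased.data.downcast_ref::<String>().map(String::as_str),
        Some("driver-private")
    );
}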


I think we should take a step back here and question the general
design.

I only included data: T because the early feedback suggested that this
is how you do it in Rust.

I was never convinced that it's a good idea. Jobqueue doesn't need the
'data' field. Can anyone think of a user who would need it?

What kind of data would be in there? It seems a driver would store its
equivalent of C's

struct my_fence {
   struct dma_fence f;
   /* other driver data */
};

which is then accessed in C with container_of.

But that data is only ever needed by that very driver.
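
For comparison, with the abstraction as proposed the same association
would presumably just be an instantiation of the generic (MyDriverData
is a made-up name):

/// Hypothetical driver-private payload; only the creating driver can
/// name this type.
struct MyDriverData {
    /* other driver data */
}

// Roughly the Rust counterpart of `struct my_fence` above: the payload
// lives next to the fence instead of being recovered via container_of().
type MyFence = DmaFence<MyDriverData>;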


My main point here is:
dma_fences are a synchronization primitive very similar to
completions: they inform that something is done and execute every
registrant's callbacks.

They are *not* a data transfer mechanism. It seems very wrong design-
wise to transfer generic data T from one driver to another. That's not
a fence's purpose. Another primitive should be used for that.
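
To make that concrete: a consumer on the other side of the driver
boundary only ever needs the type-agnostic operations. Purely as a
sketch, reusing the hypothetical RawDmaFence from above and assuming
is_signaled() / add_callback() wrappers that would mirror
dma_fence_is_signaled() / dma_fence_add_callback():

// No `T` appears here: the waiting driver only sees the type-erased
// handle and the signalling state, never the emitter's private data.
fn consume_foreign_fence(fence: &RawDmaFence) {
    if !fence.is_signaled() {
        // Register interest in the signal; again, nothing but the "done"
        // notification crosses the driver boundary.
        fence.add_callback(|| pr_info!("foreign fence signalled\n"));
    }
}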

If another driver could touch / consume / see / use the emitter's data:
T, that would be a gross departure from the original dma_fence design.
It would be akin to doing a container_of to consume foreign driver
data.

Like Xe doing a

struct nouveau_fence *f = container_of(generic_fence, …);

Why would that ever be done? Seems totally broken.

So I strongly think that we should either drop data: T, or think about
ways to hide it from other drivers.

I currently have no idea how that could be addressed in Rust, though.

:)
:(


P.

> 
> Best,
> Gary
> 
> > 
> > > +    /// Marks whether the fence is currently in the signalling critical section.
> > > +    signalling: bool,
> > > +    /// A boolean needed for the C backend's lockdep guard.
> > > +    signalling_cookie: bool,
> > > +    /// A reference to the associated [`DmaFenceCtx`] so that it cannot be dropped while there are
> > > +    /// still fences around.
> > > +    fctx: Arc<DmaFenceCtx>,
> > > +}
> 

