Message-ID: <20260211150738.049af4bb@fedora>
Date: Wed, 11 Feb 2026 15:07:38 +0100
From: Boris Brezillon <boris.brezillon@...labora.com>
To: "Gary Guo" <gary@...yguo.net>
Cc: "Alice Ryhl" <aliceryhl@...gle.com>, <phasta@...nel.org>, "David Airlie"
 <airlied@...il.com>, "Simona Vetter" <simona@...ll.ch>, "Danilo Krummrich"
 <dakr@...nel.org>, "Benno Lossin" <lossin@...nel.org>, Christian
 König <christian.koenig@....com>, "Daniel Almeida"
 <daniel.almeida@...labora.com>, "Joel Fernandes" <joelagnelf@...dia.com>,
 <linux-kernel@...r.kernel.org>, <dri-devel@...ts.freedesktop.org>,
 <rust-for-linux@...r.kernel.org>
Subject: Re: [RFC PATCH 3/4] rust/drm: Add DRM Jobqueue

On Wed, 11 Feb 2026 21:45:37 +0800
"Gary Guo" <gary@...yguo.net> wrote:

> On Wed Feb 11, 2026 at 8:22 PM CST, Alice Ryhl wrote:
> > On Wed, Feb 11, 2026 at 12:19:56PM +0100, Philipp Stanner wrote:  
> >> On Wed, 2026-02-11 at 12:07 +0100, Boris Brezillon wrote:  
> >> > On Wed, 11 Feb 2026 11:47:27 +0100
> >> > Philipp Stanner <phasta@...lbox.org> wrote:
> >> >   
> >> > > On Tue, 2026-02-10 at 15:57 +0100, Boris Brezillon wrote:  
> >> > > > On Tue,  3 Feb 2026 09:14:02 +0100
> >> > > > Philipp Stanner <phasta@...nel.org> wrote:
> >> > > >     
> >> > > > > +/// A jobqueue Job.
> >> > > > > +///
> >> > > > > +/// You can stuff your data in it. The job will be borrowed back to your driver
> >> > > > > +/// once the time has come to run it.
> >> > > > > +///
> >> > > > > +/// Jobs are consumed by [`Jobqueue::submit_job`] by value (ownership transfer).
> >> > > > > +/// You can set multiple [`DmaFence`] as dependencies for a job. It will only
> >> > > > > +/// get run once all dependency fences have been signaled.
> >> > > > > +///
> >> > > > > +/// Jobs cost credits. Jobs will only be run if there is enough capacity in
> >> > > > > +/// the jobqueue for the job's credits. It is legal to specify jobs costing 0
> >> > > > > +/// credits, effectively disabling that mechanism.
> >> > > > > +#[pin_data]
> >> > > > > +pub struct Job<T: 'static + Send> {
> >> > > > > +    cost: u32,
> >> > > > > +    #[pin]
> >> > > > > +    pub data: T,
> >> > > > > +    done_fence: Option<ARef<DmaFence<i32>>>,
> >> > > > > +    hardware_fence: Option<ARef<DmaFence<i32>>>,
> >> > > > > +    nr_of_deps: AtomicU32,
> >> > > > > +    dependencies: List<Dependency>,    
> >> > > > 
> >> > > > Given how tricky Lists are in Rust, I'd recommend going for an XArray,
> >> > > > like we have on the C side. There's a bit of overhead when the job only
> >> > > > has a few deps, but I think simplicity beats memory-usage-optimizations
> >> > > > in that case (especially since the overhead exists and is accepted in
> >> > > > C).    
> >> > > 
> >> > > I mean, the list is already implemented and works. Considering the
> >> > > XArray would have made sense back when the development difficulties arose.  
> >> > 
> >> > I'm sure it does, but that's still more code/tricks to maintain than
> >> > what you'd have with the XArray abstraction.  
> >> 
> >> The solution then would rather be to make the linked list implementation
> >> better.
> >> 
> >> A list is the correct data structure in a huge number of use cases in
> >> the kernel. We should not begin here to defer to other structures
> >> because of convenience.  
> >
> > Rust vs C aside, linked lists are often used in the kernel despite not
> > being the best choice. They are extremely cache unfriendly and
> > inefficient; most of the time a vector or xarray is far faster if you
> > can accept an ENOMEM failure path when adding elements. I have heard
> > several times from C maintainers that overuse of lists is making the
> > kernel slow, a death by a thousand cuts.  
> 
> I would rather argue the other way: outside of very hot paths where cache
> friendliness absolutely matters, if you do not require indexed access then a
> list is the correct data structure more often than not.
> 
> Vectors have the issue that resizing requires moving, so they cannot be used
> with pinned types. An XArray doesn't require moving because it adds an
> indirection and thus an extra allocation, but this means that if you're just
> iterating over all elements it does not benefit from cache locality either.

Back to this particular job-dependencies use case: we have to embed the
DmaFence pointer in a wrapper carrying the ListLinks element anyway,
because a DmaFence can be inserted in several of those lists in
parallel. This means the overhead is two pointers per DmaFence
pointer. Of course, it's not a big issue in practice: those elements are
short-lived, it's only 16 bytes, and if we end up with too many of those
deps, we'll have other challenging scaling issues anyway. But it also
means we pay the extra indirection you'd have with an array of pointers
or an XArray, with more per-item overhead, and none of the advantages a
list could provide (O(1) removal if you hold the list item, O(1) front
insertion, ...) would actually be used here, because we use the list as
a plain FIFO.

So overall, I'd still lean towards an XArray here, unless there are
strong objections. Just to make it super clear, I'm not making a case
against all List usage, just this particular one :-).
