Message-ID: <aYyhRThN3F76oiWt@google.com>
Date: Wed, 11 Feb 2026 15:33:25 +0000
From: Alice Ryhl <aliceryhl@...gle.com>
To: Gary Guo <gary@...yguo.net>
Cc: phasta@...nel.org, Boris Brezillon <boris.brezillon@...labora.com>,
David Airlie <airlied@...il.com>, Simona Vetter <simona@...ll.ch>, Danilo Krummrich <dakr@...nel.org>,
Benno Lossin <lossin@...nel.org>,
"Christian König" <christian.koenig@....com>, Daniel Almeida <daniel.almeida@...labora.com>,
Joel Fernandes <joelagnelf@...dia.com>, linux-kernel@...r.kernel.org,
dri-devel@...ts.freedesktop.org, rust-for-linux@...r.kernel.org
Subject: Re: [RFC PATCH 3/4] rust/drm: Add DRM Jobqueue
On Wed, Feb 11, 2026 at 09:45:37PM +0800, Gary Guo wrote:
> On Wed Feb 11, 2026 at 8:22 PM CST, Alice Ryhl wrote:
> > On Wed, Feb 11, 2026 at 12:19:56PM +0100, Philipp Stanner wrote:
> >> On Wed, 2026-02-11 at 12:07 +0100, Boris Brezillon wrote:
> >> > On Wed, 11 Feb 2026 11:47:27 +0100
> >> > Philipp Stanner <phasta@...lbox.org> wrote:
> >> >
> >> > > On Tue, 2026-02-10 at 15:57 +0100, Boris Brezillon wrote:
> >> > > > On Tue, 3 Feb 2026 09:14:02 +0100
> >> > > > Philipp Stanner <phasta@...nel.org> wrote:
> >> > > >
> >> > > > > +/// A jobqueue Job.
> >> > > > > +///
> >> > > > > +/// You can stuff your data in it. The job will be borrowed back to your driver
> >> > > > > +/// once the time has come to run it.
> >> > > > > +///
> >> > > > > +/// Jobs are consumed by [`Jobqueue::submit_job`] by value (ownership transfer).
> >> > > > > +/// You can set multiple [`DmaFence`] as dependencies for a job. It will only
> >> > > > > +/// get run once all dependency fences have been signaled.
> >> > > > > +///
> >> > > > > +/// Jobs cost credits. A job will only be run if there is enough capacity in
> >> > > > > +/// the jobqueue for the job's credits. It is legal to specify jobs costing 0
> >> > > > > +/// credits, effectively disabling that mechanism.
> >> > > > > +#[pin_data]
> >> > > > > +pub struct Job<T: 'static + Send> {
> >> > > > > + cost: u32,
> >> > > > > + #[pin]
> >> > > > > + pub data: T,
> >> > > > > + done_fence: Option<ARef<DmaFence<i32>>>,
> >> > > > > + hardware_fence: Option<ARef<DmaFence<i32>>>,
> >> > > > > + nr_of_deps: AtomicU32,
> >> > > > > + dependencies: List<Dependency>,
> >> > > >
> >> > > > Given how tricky Lists are in Rust, I'd recommend going for an XArray,
> >> > > > like we have on the C side. There's a bit of overhead when the job only
> >> > > > has a few deps, but I think simplicity beats memory-usage optimizations
> >> > > > in that case (especially since the overhead exists and is accepted in
> >> > > > C).
> >> > >
> >> > > I mean, the list is already implemented and works. Considering the
> >> > > XArray would have made sense back during the development difficulties.
> >> >
> >> > I'm sure it does, but that's still more code/tricks to maintain than
> >> > what you'd have with the XArray abstraction.
> >>
> >> The solution, then, will rather be to make the linked list
> >> implementation better.
> >>
> >> A list is the correct data structure in a huge number of use cases in
> >> the kernel. We should not start deferring to other structures here out
> >> of convenience.
> >
> > Rust vs C aside, linked lists are often used in the kernel despite not
> > being the best choice. They are extremely cache unfriendly and
> > inefficient; most of the time a vector or xarray is far faster if you
> > can accept an ENOMEM failure path when adding elements. I have heard
> > several times from C maintainers that overuse of lists is making the
> > kernel slow in a death-by-a-thousand-cuts fashion.
>
> I would rather argue the other way: outside of very hot paths where cache
> friendliness absolutely matters, if you do not require indexed access then
> the list is the correct data structure more often than not.
>
> Vectors have the issue that resizing requires moving elements, so they
> cannot be used with pinned types. XArray doesn't require moving because it
> adds an indirection and thus an extra allocation, but this means that if
> you're just iterating over all elements it also does not benefit from cache
> locality. Using vectors also requires careful management of capacity, which
> is a very common source of memory leaks in long-running user-space Rust
> programs.
XArray does benefit somewhat from cache locality compared to a linked
list because you know the address of element i+1 even if you have not
yet retrieved element i, which may enable prefetching.
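
To illustrate (a minimal userspace sketch under my own assumptions, not
the kernel XArray or the jobqueue code): with a linked list, the address
of the next node is only known after the current node has been loaded, so
iteration is a serial chain of dependent loads. With an index-based
container that keeps its element pointers in contiguous slots, the address
of slot i+1 is computable without touching slot i, so the hardware
prefetcher can run ahead.

struct Node {
    value: u64,
    next: Option<Box<Node>>,
}

fn sum_list(head: &Node) -> u64 {
    // Each step must load `next` before the address of the following
    // element is known at all -> a serial dependency chain.
    let mut sum = 0;
    let mut cur = Some(head);
    while let Some(node) = cur {
        sum += node.value;
        cur = node.next.as_deref();
    }
    sum
}

fn sum_slots(slots: &[Box<u64>]) -> u64 {
    // The slots themselves are contiguous, so the address of slots[i + 1]
    // is known without dereferencing slots[i]; the data still sits behind
    // one indirection, roughly like an XArray's element pointers.
    slots.iter().map(|b| **b).sum()
}

fn main() {
    let mut head = Node { value: 0, next: None };
    for v in 1..=4u64 {
        head = Node { value: v, next: Some(Box::new(head)) };
    }
    let slots: Vec<Box<u64>> = (0..=4u64).map(Box::new).collect();
    assert_eq!(sum_list(&head), sum_slots(&slots));
}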
Alice
> Re: the ENOMEM failure path, I'd argue that even if you *can* accept an
> ENOMEM failure path, it is better not to have a failure path that is
> unnecessary.
>
> Best,
> Gary
>
> >
> > This applies to the red/black tree too, by the way.
> >
> > Alice
>