Message-ID: <e4f3ff81338dd738e1c6d81e255c129c07e9c7fb.camel@mailbox.org>
Date: Wed, 11 Feb 2026 13:44:56 +0100
From: Philipp Stanner <phasta@...lbox.org>
To: Alice Ryhl <aliceryhl@...gle.com>, phasta@...nel.org
Cc: Boris Brezillon <boris.brezillon@...labora.com>, David Airlie
<airlied@...il.com>, Simona Vetter <simona@...ll.ch>, Danilo Krummrich
<dakr@...nel.org>, Gary Guo <gary@...yguo.net>, Benno Lossin
<lossin@...nel.org>, Christian König
<christian.koenig@....com>, Daniel Almeida <daniel.almeida@...labora.com>,
Joel Fernandes <joelagnelf@...dia.com>, linux-kernel@...r.kernel.org,
dri-devel@...ts.freedesktop.org, rust-for-linux@...r.kernel.org
Subject: Re: [RFC PATCH 3/4] rust/drm: Add DRM Jobqueue
On Wed, 2026-02-11 at 12:22 +0000, Alice Ryhl wrote:
> On Wed, Feb 11, 2026 at 12:19:56PM +0100, Philipp Stanner wrote:
> > On Wed, 2026-02-11 at 12:07 +0100, Boris Brezillon wrote:
> > > On Wed, 11 Feb 2026 11:47:27 +0100
> > > Philipp Stanner <phasta@...lbox.org> wrote:
> > >
> > > > On Tue, 2026-02-10 at 15:57 +0100, Boris Brezillon wrote:
> > > > > On Tue, 3 Feb 2026 09:14:02 +0100
> > > > > Philipp Stanner <phasta@...nel.org> wrote:
> > > > >
> > > > > > +/// A jobqueue Job.
> > > > > > +///
> > > > > > +/// You can stuff your data in it. The job will be borrowed back to your driver
> > > > > > +/// once the time has come to run it.
> > > > > > +///
> > > > > > +/// Jobs are consumed by [`Jobqueue::submit_job`] by value (ownership transfer).
> > > > > > +/// You can set multiple [`DmaFence`] as dependencies for a job. It will only
> > > > > > +/// get run once all dependency fences have been signaled.
> > > > > > +///
> > > > > > +/// Jobs cost credits. Jobs will only be run if there is enough capacity in
> > > > > > +/// the jobqueue for the job's credits. It is legal to specify jobs costing 0
> > > > > > +/// credits, effectively disabling that mechanism.
> > > > > > +#[pin_data]
> > > > > > +pub struct Job<T: 'static + Send> {
> > > > > > +    cost: u32,
> > > > > > +    #[pin]
> > > > > > +    pub data: T,
> > > > > > +    done_fence: Option<ARef<DmaFence<i32>>>,
> > > > > > +    hardware_fence: Option<ARef<DmaFence<i32>>>,
> > > > > > +    nr_of_deps: AtomicU32,
> > > > > > +    dependencies: List<Dependency>,
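As a rough driver-side sketch of the contract the doc comment describes;
Job::new(), add_dependency() and the generic Jobqueue<MyJobData> are
assumed names for illustration only, not taken from this patch:

// Hypothetical usage sketch; constructor and helper names are
// illustrative assumptions, not part of the RFC's API.
struct MyJobData {
    seqno: u64,
}

fn submit(queue: &Jobqueue<MyJobData>, dep: ARef<DmaFence<i32>>) -> Result {
    // A job costing 0 credits opts out of the capacity mechanism.
    let mut job = Job::new(0, MyJobData { seqno: 42 })?;
    // The job only runs once all dependency fences have signaled.
    job.add_dependency(dep);
    // submit_job() consumes the job by value (ownership transfer).
    queue.submit_job(job)?;
    Ok(())
}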
> > > > >
> > > > > Given how tricky Lists are in Rust, I'd recommend going for an XArray,
> > > > > like we have on the C side. There's a bit of overhead when the job only
> > > > > has a few deps, but I think simplicity beats memory-usage optimizations
> > > > > in that case (especially since the overhead exists and is accepted in
> > > > > C).
> > > >
> > > > I mean, the list is already implemented and works. Considering the
> > > > XArray would have made sense back when I was still running into the
> > > > development difficulties.
> > >
> > > I'm sure it does, but that's still more code/tricks to maintain than
> > > what you'd have with the XArray abstraction.
> >
> > The solution then will rather be to make the linked list
> > implementation better.
> >
> > A list is the correct data structure in a huge number of use cases in
> > the kernel. We should not start deferring to other data structures
> > here merely for convenience.
>
> Rust vs C aside, linked lists are often used in the kernel despite not
> being the best choice. They are extremely cache unfriendly and
> inefficient; most of the time a vector or xarray is far faster if you
> can accept an ENOMEM failure path when adding elements. I have heard
> several times from C maintainers that overuse of lists is making the
> kernel slow, death by a thousand cuts.
Interesting. Valid points.
It might be a self-accelerating thing: lists are on everyone's mind
because they are so common, while RB trees et al. are relatively rare,
so people instinctively reach for lists, making them more common still…
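To make the ENOMEM point concrete, here is a minimal sketch of the
fallible-growth alternative, assuming the current rust-for-linux KVec
API, where push() takes allocation flags and can fail:

// Contiguous storage keeps the elements cache-friendly; the price
// is that adding an element can fail with ENOMEM.
use kernel::prelude::*;

fn collect_deps(fences: impl Iterator<Item = u64>) -> Result<KVec<u64>> {
    let mut deps = KVec::new();
    for f in fences {
        // push() allocates, so it returns Result; the caller has to
        // accept this ENOMEM failure path at submission time.
        deps.push(f, GFP_KERNEL)?;
    }
    Ok(deps)
}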
>
> This applies to the red/black tree too, by the way.
I can't quite follow: do you mean that RB trees are supposedly overused,
too?
P.