Message-ID: <20250806085702.5bf600a3@nimda.home>
Date: Wed, 6 Aug 2025 08:57:02 +0300
From: Onur Özkan <work@...rozkan.dev>
To: Lyude Paul <thatslyude@...il.com>
Cc: linux-kernel@...r.kernel.org, rust-for-linux@...r.kernel.org,
ojeda@...nel.org, alex.gaynor@...il.com, boqun.feng@...il.com,
gary@...yguo.net, lossin@...nel.org, a.hindborg@...nel.org,
aliceryhl@...gle.com, tmgross@...ch.edu, dakr@...nel.org,
peterz@...radead.org, mingo@...hat.com, will@...nel.org,
longman@...hat.com, felipe_life@...e.com, daniel@...lak.dev,
bjorn3_gh@...tonmail.com, lyude@...hat.com
Subject: Re: [PATCH v5 0/3] rust: add `ww_mutex` support
On Tue, 05 Aug 2025 12:22:33 -0400
Lyude Paul <thatslyude@...il.com> wrote:
> Hey! Onur, if you could make sure that future emails get sent to
>
> lyude at redhat dot com
>
> That would be appreciated! I usually pay much closer attention
> to that email address. That being said, some comments down below:
Sure thing, added it to the cc list.
> On Thu, 2025-07-24 at 16:53 +0300, Onur Özkan wrote:
> > Hi again,
> >
> > Just finished going over the C-side use of `ww_mutex` today and I
> > wanted to share some notes and thoughts based on that.
> >
> > To get the full context, you might want to take a look at this
> > thread [1].
> >
> > - The first note I took is that we shouldn't allow locking without
> > `WwAcquireCtx` (which is currently possible in v5). As explained in
> > the ww_mutex documentation [2], this basically turns it into a regular
> > mutex and you don't get the benefits of `ww_mutex`.
>
> I disagree with this conclusion actually; occasionally you do just
> need to acquire a single mutex and not multiple. In fact, we even
> have a drm_modeset_lock_single_*() set of functions in KMS
> specifically for this purpose.
>
> Here's an example where we use it in nouveau for inspecting the
> atomic display state of a specific crtc:
>
> https://github.com/torvalds/linux/blob/master/drivers/gpu/drm/nouveau/dispnv50/crc.c#L682
>
> This isn't actually too unusual of a use case tbh, especially
> considering that the real reason we have ww_mutexes in KMS is to deal
> with the atomic transaction model that's used for modesetting in the
> kernel.
>
> A good example, which also doubles as a ww_mutex example you likely
> missed on your first skim since all of it is done through the
> drm_modeset_lock API and not ww_mutex directly:
>
> https://github.com/torvalds/linux/blob/master/drivers/gpu/drm/nouveau/dispnv50/crc.c#L544
>
> drm_modeset_acquire_init() is a wrapper around ww_mutex_init() which
> actually does pretty much exactly what Daniel is describing lower in
> the thread: keeping track of a list of each acquired lock so that
> they can be dropped once the context is released.
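
On the Rust side I picture that "context keeps the locks it has
acquired" idea roughly like this standalone sketch (plain std types and
made-up names, not the actual bindings or the drm API):

use std::sync::{Mutex, MutexGuard};

/// Stand-in for one ww_mutex-protected modesetting object.
struct ModesetLock {
    inner: Mutex<()>,
}

/// Stand-in for the acquire context: it keeps every guard it has taken,
/// so dropping the context releases all of the locks at once.
struct AcquireCtx<'a> {
    held: Vec<MutexGuard<'a, ()>>,
}

impl<'a> AcquireCtx<'a> {
    fn new() -> Self {
        Self { held: Vec::new() }
    }

    /// Locks `m` and records the guard, so callers never need to know up
    /// front how many locks the transaction will end up holding.
    fn lock(&mut self, m: &'a ModesetLock) {
        self.held.push(m.inner.lock().unwrap());
    }
}
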
>
> drm_atomic_get_crtc_state() grabs the CRTC state and ensures that
> the crtc's modeset lock (i.e. a ww_mutex) is actually acquired.
>
> drm_atomic_commit() performs the checking of the atomic modeset
> transaction, e.g. going through the requested display settings and
> ensuring that the display hardware is actually capable of supporting
> it before allowing the modeset to continue. Oftentimes for GPU
> drivers this process can involve not just checking limitations on the
> modesetting object in question, but potentially adding other
> modesetting objects into the transaction that the driver needs to
> also inspect the state of. Adding any of these modesetting objects
> potentially means having to acquire their modeset locks using the
> same context, and we can't and don't really want to force users to
> have an idea of exactly how many locks can ever be acquired. Display
> hardware is wonderful at coming up with very wacky limitations we
> can't really know ahead of time because they can even depend on the
> global display state.
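
The part I want to capture on the Rust side is the backoff-and-retry
shape of that flow. A standalone sketch of how I picture it (the trait,
the names and the error type are all made up here; the real bindings
would surface this as -EDEADLK from ww_mutex_lock()):

/// Placeholder for the "would deadlock, back off" result.
struct WouldDeadlock;

trait Transaction {
    /// Tries to lock every object the transaction currently needs; the
    /// set can grow while checking, exactly as described above.
    fn lock_and_check(&mut self) -> Result<(), WouldDeadlock>;
    /// Drops every lock taken so far so the whole thing can be retried.
    fn unlock_all(&mut self);
}

/// The classic ww_mutex acquire loop: back off and retry until the
/// whole (possibly growing) set of locks is held without deadlocking.
fn acquire_all<T: Transaction>(tx: &mut T) {
    loop {
        match tx.lock_and_check() {
            Ok(()) => break,
            Err(WouldDeadlock) => tx.unlock_all(), // roll back, try again
        }
    }
}
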
>
> So tracking locks is definitely the way to go, but we should keep in
> mind there's already infrastructure in the kernel doing this that we
> want to be able to handle with these APIs as well.
Thanks for the feedback! Supporting single locks is easy; I just
didn't think it was a good idea at first, but it looks like I missed
some cases.
I can implement two types of locking functions: one on `WwMutex`, where
`WwMutex::lock` handles a single lock without a context, and another on
`WwAcquireCtx`, where `WwAcquireCtx::lock` handles multiple locks under
a single context.
e.g.:

let mutex = WwMutex::new(...);
mutex.lock(); // without context, for single locks

let ctx = WwAcquireCtx::new(...);
ctx.lock(&mutex); // with context, for multiple locks
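
A bit more concretely, the signatures and usage I picture look roughly
like this (standalone sketch on top of std types, placeholder names,
not the final API):

use std::sync::{Mutex, MutexGuard};

pub struct WwMutex<T> {
    inner: Mutex<T>,
}

pub struct WwAcquireCtx;

impl<T> WwMutex<T> {
    pub fn new(value: T) -> Self {
        Self { inner: Mutex::new(value) }
    }

    /// Context-less path for the single-lock case.
    pub fn lock(&self) -> MutexGuard<'_, T> {
        self.inner.lock().unwrap()
    }
}

impl WwAcquireCtx {
    /// Context-tracked path; this is where the acquire context (and the
    /// -EDEADLK handling) would come into play in the real bindings.
    pub fn lock<'a, T>(&'a self, mutex: &'a WwMutex<T>) -> MutexGuard<'a, T> {
        mutex.inner.lock().unwrap()
    }
}

fn main() {
    let mutex = WwMutex::new(0u32);
    let single = mutex.lock();      // without a context, for single locks
    drop(single);

    let ctx = WwAcquireCtx;
    let tracked = ctx.lock(&mutex); // with a context, for multiple locks
    drop(tracked);
}
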
What do you think?
Regards,
Onur