lists.openwall.net   lists  /  announce  owl-users  owl-dev  john-users  john-dev  passwdqc-users  yescrypt  popa3d-users  /  oss-security  kernel-hardening  musl  sabotage  tlsify  passwords  /  crypt-dev  xvendor  /  Bugtraq  Full-Disclosure  linux-kernel  linux-netdev  linux-ext4  linux-hardening  linux-cve-announce  PHC 
Open Source and information security mailing list archives
 
Hash Suite: Windows password security audit tool. GUI, reports in PDF.
[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <20250901130542.32b051bc@nimda.home>
Date: Mon, 1 Sep 2025 13:05:42 +0300
From: Onur Özkan <work@...rozkan.dev>
To: Daniel Almeida <daniel.almeida@...labora.com>
Cc: Benno Lossin <lossin@...nel.org>, Lyude Paul <lyude@...hat.com>,
 linux-kernel@...r.kernel.org, rust-for-linux@...r.kernel.org,
 ojeda@...nel.org, alex.gaynor@...il.com, boqun.feng@...il.com,
 gary@...yguo.net, a.hindborg@...nel.org, aliceryhl@...gle.com,
 tmgross@...ch.edu, dakr@...nel.org, peterz@...radead.org, mingo@...hat.com,
 will@...nel.org, longman@...hat.com, felipe_life@...e.com,
 daniel@...lak.dev, bjorn3_gh@...tonmail.com
Subject: Re: [PATCH v5 0/3] rust: add `ww_mutex` support

On Mon, 18 Aug 2025 15:56:28 +0300
Onur Özkan <work@...rozkan.dev> wrote:

> On Thu, 14 Aug 2025 15:22:57 -0300
> Daniel Almeida <daniel.almeida@...labora.com> wrote:
> 
> > 
> > Hi Onur,
> > 
> > > On 14 Aug 2025, at 12:56, Onur <work@...rozkan.dev> wrote:
> > > 
> > > On Thu, 14 Aug 2025 09:38:38 -0300
> > > Daniel Almeida <daniel.almeida@...labora.com> wrote:
> > > 
> > >> Hi Onur,
> > >> 
> > >>> On 14 Aug 2025, at 08:13, Onur Özkan <work@...rozkan.dev> wrote:
> > >>> 
> > >>> Hi all,
> > >>> 
> > >>> I have been brainstorming on the auto-unlocking (on dynamic
> > >>> number of mutexes) idea we have been discussing for some time.
> > >>> 
> > >>> There is a challange with how we handle lock guards and my
> > >>> current thought is to remove direct data dereferencing from
> > >>> guards. Instead, data access would only be possible through a
> > >>> fallible method (e.g., `try_get`). If the guard is no longer
> > >>> valid, this method would fail to not allow data-accessing after
> > >>> auto-unlock.
> > >>> 
> > >>> In practice, it would work like this:
> > >>> 
> > >>> let a_guard = ctx.lock(mutex_a)?;
> > >>> let b_guard = ctx.lock(mutex_b)?;
> > >>> 
> > >>> // Suppose user tries to lock `mutex_c` without aborting the
> > >>> // entire function (for some reason). This means that even on
> > >>> // failure, `a_guard` and `b_guard` will still be accessible.
> > >>> if let Ok(c_guard) = ctx.lock(mutex_c) {
> > >>>    // ...some logic
> > >>> }
> > >>> 
> > >>> let a_data = a_guard.try_get()?;
> > >>> let b_data = b_guard.try_get()?;
> > >> 
> > >> Can you add more code here? How is this going to look like with
> > >> the two closures we’ve been discussing?
> > > 
> > > Didn't we said that tuple-based closures are not sufficient when
> > > dealing with a dynamic number of locks (ref [1]) and ww_mutex is
> > > mostly used with dynamic locks? I thought implementing that
> > > approach is not worth it (at least for now) because of that.
> > > 
> > > [1]:
> > > https://lore.kernel.org/all/DBS8REY5E82S.3937FAHS25ANA@kernel.org
> > > 
> > > Regards,
> > > Onur
> > 
> > 
> > 
> > I am referring to this [0]. See the discussion and itemized list at
> > the end.
> > 
> > To recap, I am proposing a separate type that is similar to
> > drm_exec, and that implements this:
> > 
> > ```
> > a) run a user closure where the user can indicate which ww_mutexes
> > they want to lock b) keep track of the objects above
> > c) keep track of whether a contention happened
> > d) rollback if a contention happened, releasing all locks
> > e) rerun the user closure from a clean slate after rolling back
> > f) run a separate user closure whenever we know that all objects
> > have been locked. ```
> > 
> > In other words, we need to run a closure to let the user implement a
> > given locking strategy, and then one closure that runs when the user
> > signals that there are no more locks to take.
> > 
> > What I said is different from what Benno suggested here:
> > 
> > >>>>>>    let (a, c, d) = ctx.begin()
> > >>>>>>        .lock(a)
> > >>>>>>        .lock(b)
> > >>>>>>        .lock(c)
> > >>>>>>        .custom(|(a, _, c)| (a, c))
> > >>>>>>        .lock(d)
> > >>>>>>        .finish();
> > 
> > i.e.: here is a brief example of how the API should be used by
> > clients:
> > 
> > ```
> > // The Context keeps track of which locks were successfully taken.
> > let locking_algorithm = |ctx: &Context| {
> >   // client-specific code, likely some loop trying to acquire
> > multiple locks: //
> >   // note that it does not _have_ to be a loop, though. It up to the
> > clients to // provide a suitable implementation here.
> >   for (..) {
> >     ctx.lock(foo); // If this succeeds, the context will add  "foo"
> > to the list of taken locks. }
> > 
> >   // if this closure returns EDEADLK, then our abstraction must
> > rollback, and // run it again.
> > };
> > 
> > // This runs when the closure above has indicated that there are no
> > more locks // to take.
> > let on_all_locks_taken = |ctx: &Context| {
> >   // everything is locked here, give access to the data in the
> > guards. };
> > 
> > ctx.lock_all(locking_algorithm, on_all_locks_taken)?;
> > ```
> > 
> > Yes, this will allocate but that is fine because drm_exec allocates
> > as well.
> > 
> > We might be able to give more control of when the allocation happens
> > if the number of locks is known in advance, e.g.:
> > 
> > ```
> > struct Context<T> {
> >   taken_locks: KVec<Guard<T>>,
> > }
> > 
> > impl<T> Context<T> {
> >   fn prealloc_slots(num_slots: usize, flags: ...) -> Result<Self> {
> >     let taken_locks = ... // pre-alloc a KVec here. 
> >     Self {
> >       taken_slots,
> >     } 
> >   }
> > }
> > ```
> > 
> > The main point is that this API is optional. It builds a lot of
> > convenience of top of the Rust WWMutex abstraction, but no one is
> > forced to use it.
> > 
> > IOW: What I said should be implementable with a dynamic number of
> > locks. Please let me know if I did not explain this very well. 
> > 
> > [0]:
> > https://lore.kernel.org/rust-for-linux/8B1FB608-7D43-4DD9-8737-DCE59ED74CCA@collabora.com/
> 
> Hi Daniel,
> 
> Thank you for repointing it, I must have missed your previour mail.
> 
> It seems crystal clear, I will review this mail in detail when I am
> working on this patch again.
> 
> Regards,
> Onur

Hi,

How should the modules be structured? I am thinking something like:

    rust/kernel/sync/lock/ww_mutex/mod.rs
    rust/kernel/sync/lock/ww_mutex/core.rs
    rust/kernel/sync/lock/ww_mutex/ww_exec.rs

In core, I would include only the essential parts (e.g., wrapper types
and associated functions) and in ww_exec, I would provide a higher-level
API similar to drm_exec (more idiomatic rusty version).

Does this make sense?


-Onur

Powered by blists - more mailing lists

Powered by Openwall GNU/*/Linux Powered by OpenVZ