Message-ID: <20250902195328.6293b5d4@nimda.home>
Date: Tue, 2 Sep 2025 19:53:28 +0300
From: Onur <work@...rozkan.dev>
To: Daniel Almeida <daniel.almeida@...labora.com>
Cc: Benno Lossin <lossin@...nel.org>, Lyude Paul <lyude@...hat.com>,
linux-kernel@...r.kernel.org, rust-for-linux@...r.kernel.org,
ojeda@...nel.org, alex.gaynor@...il.com, boqun.feng@...il.com,
gary@...yguo.net, a.hindborg@...nel.org, aliceryhl@...gle.com,
tmgross@...ch.edu, dakr@...nel.org, peterz@...radead.org, mingo@...hat.com,
will@...nel.org, longman@...hat.com, felipe_life@...e.com,
daniel@...lak.dev, bjorn3_gh@...tonmail.com
Subject: Re: [PATCH v5 0/3] rust: add `ww_mutex` support
On Thu, 14 Aug 2025 15:22:57 -0300
Daniel Almeida <daniel.almeida@...labora.com> wrote:
>
> Hi Onur,
>
> > On 14 Aug 2025, at 12:56, Onur <work@...rozkan.dev> wrote:
> >
> > On Thu, 14 Aug 2025 09:38:38 -0300
> > Daniel Almeida <daniel.almeida@...labora.com> wrote:
> >
> >> Hi Onur,
> >>
> >>> On 14 Aug 2025, at 08:13, Onur Özkan <work@...rozkan.dev> wrote:
> >>>
> >>> Hi all,
> >>>
> >>> I have been brainstorming on the auto-unlocking (on a dynamic
> >>> number of mutexes) idea we have been discussing for some time.
> >>>
> >>> There is a challenge with how we handle lock guards, and my current
> >>> thought is to remove direct data dereferencing from guards.
> >>> Instead, data access would only be possible through a fallible
> >>> method (e.g., `try_get`). If the guard is no longer valid, this
> >>> method would fail, preventing data access after auto-unlock.
> >>>
> >>> In practice, it would work like this:
> >>>
> >>> let a_guard = ctx.lock(mutex_a)?;
> >>> let b_guard = ctx.lock(mutex_b)?;
> >>>
> >>> // Suppose user tries to lock `mutex_c` without aborting the
> >>> // entire function (for some reason). This means that even on
> >>> // failure, `a_guard` and `b_guard` will still be accessible.
> >>> if let Ok(c_guard) = ctx.lock(mutex_c) {
> >>> // ...some logic
> >>> }
> >>>
> >>> let a_data = a_guard.try_get()?;
> >>> let b_data = b_guard.try_get()?;
> >>
> >> Can you add more code here? How is this going to look with the
> >> two closures we’ve been discussing?
> >
> > Didn't we say that tuple-based closures are not sufficient when
> > dealing with a dynamic number of locks (ref [1]) and that ww_mutex
> > is mostly used with dynamic locks? I thought implementing that
> > approach was not worth it (at least for now) because of that.
> >
> > [1]:
> > https://lore.kernel.org/all/DBS8REY5E82S.3937FAHS25ANA@kernel.org
> >
> > Regards,
> > Onur
>
>
>
> I am referring to this [0]. See the discussion and itemized list at
> the end.
>
> To recap, I am proposing a separate type that is similar to drm_exec,
> and that implements this:
>
> ```
> a) run a user closure where the user can indicate which ww_mutexes
>    they want to lock
> b) keep track of the objects above
> c) keep track of whether a contention happened
> d) rollback if a contention happened, releasing all locks
> e) rerun the user closure from a clean slate after rolling back
> f) run a separate user closure whenever we know that all objects have
>    been locked.
> ```
>
I was finally able to allocate some time this week to work on this. The
implementation covers all the items you listed above.

I am sharing some of the unit tests from my work below. My intention is
to demonstrate the user API, and I would like your feedback on whether
anything should be changed before I send the v6 patch.
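
To recap how the pieces fit together before the tests: `lock_all` reruns
the locking closure whenever it reports contention and only runs the
second closure once every requested mutex is held. Roughly, it is a loop
like the sketch below (simplified: the signatures are approximate and
`release_all_locks` is a stand-in name for the internal rollback step,
not the actual API):

// Rough sketch only, not the real implementation.
fn lock_all_sketch<T>(
    class: &WwClass,
    mut lock: impl FnMut(&mut ExecContext<'_>) -> Result,
    mut on_all_locked: impl FnMut(&mut ExecContext<'_>) -> Result<T>,
) -> Result<T> {
    let mut ctx = ExecContext::new(class)?;

    loop {
        match lock(&mut ctx) {
            // Every requested mutex is held: run the second closure.
            Ok(()) => return on_all_locked(&mut ctx),
            // Contention: release everything taken so far and rerun
            // the locking closure from a clean slate.
            Err(e) if e == EDEADLK => ctx.release_all_locks(),
            // Any other error aborts the whole operation.
            Err(e) => return Err(e),
        }
    }
}
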
#[test]
fn test_with_different_input_type() -> Result {
    stack_pin_init!(let class = WwClass::new_wound_wait(c_str!("lock_all_ok")));

    let mu1 = Arc::pin_init(WwMutex::new(1, &class), GFP_KERNEL)?;
    let mu2 = Arc::pin_init(WwMutex::new("hello", &class), GFP_KERNEL)?;

    lock_all(
        &class,
        |ctx| {
            ctx.lock(&mu1)?;
            ctx.lock(&mu2)?;
            Ok(())
        },
        |ctx| {
            ctx.with_locked(&mu1, |v| assert_eq!(*v, 1))?;
            ctx.with_locked(&mu2, |v| assert_eq!(*v, "hello"))?;
            Ok(())
        },
    )?;

    Ok(())
}

#[test]
fn test_lock_all_retries_on_deadlock() -> Result {
    stack_pin_init!(let class = WwClass::new_wound_wait(c_str!("lock_all_retry")));

    let mu = Arc::pin_init(WwMutex::new(99, &class), GFP_KERNEL)?;

    let mut first_try = true;
    let res = lock_all(
        &class,
        |ctx| {
            if first_try {
                first_try = false;
                // simulate deadlock on first attempt
                return Err(EDEADLK);
            }

            ctx.lock(&mu)
        },
        |ctx| {
            ctx.with_locked(&mu, |v| {
                *v += 1;
                *v
            })
        },
    )?;

    assert_eq!(res, 100);

    Ok(())
}

#[test]
fn test_with_locked_on_unlocked_mutex() -> Result {
    stack_pin_init!(let class = WwClass::new_wound_wait(c_str!("with_unlocked_mutex")));

    let mu = Arc::pin_init(WwMutex::new(5, &class), GFP_KERNEL)?;
    let mut ctx = ExecContext::new(&class)?;

    let ecode = ctx.with_locked(&mu, |_v| {}).unwrap_err();
    assert_eq!(EINVAL, ecode);

    Ok(())
}
Please let me know whether this looks fine in terms of the user API so
that I can make any necessary adjustments before sending v6.
Regards,
Onur