Message-Id: <FF481535-86EF-41EB-830A-1DA2434AAEA0@collabora.com>
Date: Fri, 1 Aug 2025 18:22:43 -0300
From: Daniel Almeida <daniel.almeida@...labora.com>
To: Benno Lossin <lossin@...nel.org>
Cc: Onur <work@...rozkan.dev>,
 Boqun Feng <boqun.feng@...il.com>,
 linux-kernel@...r.kernel.org,
 rust-for-linux@...r.kernel.org,
 ojeda@...nel.org,
 alex.gaynor@...il.com,
 gary@...yguo.net,
 a.hindborg@...nel.org,
 aliceryhl@...gle.com,
 tmgross@...ch.edu,
 dakr@...nel.org,
 peterz@...radead.org,
 mingo@...hat.com,
 will@...nel.org,
 longman@...hat.com,
 felipe_life@...e.com,
 daniel@...lak.dev,
 bjorn3_gh@...tonmail.com
Subject: Re: [PATCH v5 2/3] implement ww_mutex abstraction for the Rust tree

Hi Benno,

> On 7 Jul 2025, at 16:48, Benno Lossin <lossin@...nel.org> wrote:
> 
> On Mon Jul 7, 2025 at 8:06 PM CEST, Onur wrote:
>> On Mon, 07 Jul 2025 17:31:10 +0200
>> "Benno Lossin" <lossin@...nel.org> wrote:
>> 
>>> On Mon Jul 7, 2025 at 3:39 PM CEST, Onur wrote:
>>>> On Mon, 23 Jun 2025 17:14:37 +0200
>>>> "Benno Lossin" <lossin@...nel.org> wrote:
>>>> 
>>>>>> We also need to take into consideration that the user may want to
>>>>>> drop any lock in the sequence. E.g. the user acquires a, b and
>>>>>> c, then drops b, and then acquires d. Which I think is
>>>>>> possible for ww_mutex.
>>>>> 
>>>>> Hmm what about adding this to the above idea?:
>>>>> 
>>>>>    impl<'a, Locks> WwActiveCtx<'a, Locks>
>>>>>    where
>>>>>        Locks: Tuple
>>>>>    {
>>>>>        fn custom<L2>(self, action: impl FnOnce(Locks) -> L2)
>>>>>            -> WwActiveCtx<'a, L2>;
>>>>>    }
>>>>> 
>>>>> Then you can do:
>>>>> 
>>>>>    let (a, c, d) = ctx.begin()
>>>>>        .lock(a)
>>>>>        .lock(b)
>>>>>        .lock(c)
>>>>>        .custom(|(a, _, c)| (a, c))
>>>>>        .lock(d)
>>>>>        .finish();
>>>> 
>>>> 
>>>> Instead of `begin` and `custom`, why not something like this:
>>>> 
>>>> let (a, c, d) = ctx.init()
>>>>     .lock(a)
>>>>     .lock(b)
>>>>     .lock(c)
>>>>     .unlock(b)
>>>>     .lock(d)
>>>>     .finish();
>>>> 
>>>> Instead of `begin`, `init` would be a better name to mirror `fini`
>>>> on the C side, and `unlock` instead of `custom` would make the
>>>> intent clearer when dropping locks mid-chain.
> 
> Also, I'm not really fond of the name `init`, how about `enter`?
> 
>>> 
>>> I don't think that this `unlock` operation will work. How would you
>>> implement it?
>> 
>> 
>> We could link mutexes to locks using some unique value, so that we can
>> access locks by passing mutexes (though that sounds a bit odd).
>> 
>> Another option would be to unlock by the index, e.g.,:
>> 
>> let (a, c) = ctx.init()
>>     .lock(a)
>>     .lock(b)
>>     .unlock::<1>()

Why do we need this random unlock() here? We usually want to lock everything
and proceed, or otherwise back off completely so that someone else can proceed.

One thing I didn’t understand about your approach: is it amenable to loops?
I.e., are things like drm_exec() implementable?

/**
 * drm_exec_until_all_locked - loop until all GEM objects are locked
 * @exec: drm_exec object
 *
 * Core functionality of the drm_exec object. Loops until all GEM objects are
 * locked and no more contention exists. At the beginning of the loop it is
 * guaranteed that no GEM object is locked.
 *
 * Since labels can't be defined local to the loops body we use a jump pointer
 * to make sure that the retry is only used from within the loops body.
 */
#define drm_exec_until_all_locked(exec)					\
__PASTE(__drm_exec_, __LINE__):						\
	for (void *__drm_exec_retry_ptr; ({				\
		__drm_exec_retry_ptr = &&__PASTE(__drm_exec_, __LINE__);\
		(void)__drm_exec_retry_ptr;				\
		drm_exec_cleanup(exec);					\
	});)
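
In Rust we don't have the jump-pointer/computed-goto trick, but the same
"retry from a clean slate until everything is locked" shape could presumably
be a closure that is re-run on contention. A very rough sketch, with entirely
made-up names (nothing below is from your series):

// Rough sketch only; every name is invented for illustration and none of
// this is the actual ww_mutex abstraction from the series.
enum Backoff {
    /// Hit contention (-EDEADLK) on some lock: everything we held was
    /// released, so the whole sequence must be retried from scratch.
    Contended,
}

/// Rust-flavoured equivalent of drm_exec_until_all_locked(): keep re-running
/// the locking closure from a clean slate until it takes every lock.
fn until_all_locked<T>(mut attempt: impl FnMut() -> Result<T, Backoff>) -> T {
    loop {
        match attempt() {
            Ok(all_locked) => return all_locked,
            Err(Backoff::Contended) => continue,
        }
    }
}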

In fact, perhaps we can basically copy drm_exec, i.e.:

/**
 * struct drm_exec - Execution context
 */
struct drm_exec {
	/**
	 * @flags: Flags to control locking behavior
	 */
	u32                     flags;

	/**
	 * @ticket: WW ticket used for acquiring locks
	 */
	struct ww_acquire_ctx	ticket;

	/**
	 * @num_objects: number of objects locked
	 */
	unsigned int		num_objects;

	/**
	 * @max_objects: maximum objects in array
	 */
	unsigned int		max_objects;

	/**
	 * @objects: array of the locked objects
	 */
	struct drm_gem_object	**objects;

	/**
	 * @contended: contended GEM object we backed off for
	 */
	struct drm_gem_object	*contended;

	/**
	 * @prelocked: already locked GEM object due to contention
	 */
	struct drm_gem_object *prelocked;
};

This is GEM-specific, but we could perhaps implement the same idea by
tracking ww_mutexes instead of GEM objects.
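
Something along these lines, maybe? Again just a sketch: `Ticket` and `Mutex`
are stand-ins for whatever the acquire-context and ww_mutex bindings end up
being; none of this is real API.

/// Sketch of a drm_exec-like helper that tracks ww_mutexes instead of GEM
/// objects. Field names mirror struct drm_exec above.
struct WwExec<Ticket, Mutex> {
    /// Acquire context ("ticket") used for this locking run.
    ticket: Ticket,
    /// Everything currently held, so that rollback can be automated.
    locked: Vec<Mutex>,
    /// The lock we backed off for; it would be taken first (slow path)
    /// on the next pass around the loop.
    contended: Option<Mutex>,
}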

Also, I’d appreciate it if the rollback logic could be automated, which is
what you’re trying to do, so +1 to your idea.

>>     .lock(c)
>>     .finish();
> 
> Hmm yeah that's interesting, but probably not very readable...
> 
>    let (a, c, e) = ctx
>        .enter()
>        .lock(a)
>        .lock(b)
>        .lock_with(|(a, b)| b.foo())
>        .unlock::<1>()
>        .lock(c)
>        .lock(d)
>        .lock_with(|(.., d)| d.bar())
>        .unlock::<2>();
> 
>> The index approach would require us to write something very similar
>> to the `Tuple` (with a macro, obviously) that you proposed some time ago.
>> 
>> We could also just go back to your `custom` but find a better name
>> for it (such as `retain`).
> 
> Oh yeah the name was just a placeholder.
> 
> The advantage of custom is that the user can do anything in the closure.
> 
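
Right. FWIW, my reading is that `custom` itself can stay trivial. Roughly
something like this, where the fields of `WwActiveCtx` are my guess and not
what the patch actually does:

// Guessed shape, not the real abstraction: the active context just carries
// the acquire context plus the tuple of guards locked so far.
struct WwAcquireCtx; // stand-in for the real acquire-context binding

struct WwActiveCtx<'a, Locks> {
    ctx: &'a WwAcquireCtx,
    locks: Locks,
}

impl<'a, Locks> WwActiveCtx<'a, Locks> {
    // `custom` only maps the held guards through the user's closure: guards
    // dropped inside the closure are unlocked on the spot, and whatever the
    // closure returns becomes the tuple carried on to the next `lock()`.
    fn custom<L2>(self, action: impl FnOnce(Locks) -> L2) -> WwActiveCtx<'a, L2> {
        WwActiveCtx { ctx: self.ctx, locks: action(self.locks) }
    }
}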
> ---
> Cheers,
> Benno

— Daniel
