Message-Id: <DBRVNP4MM5KO.3IXLMXKGK4XTS@kernel.org>
Date: Sat, 02 Aug 2025 12:42:03 +0200
From: "Benno Lossin" <lossin@...nel.org>
To: "Daniel Almeida" <daniel.almeida@...labora.com>
Cc: "Onur" <work@...rozkan.dev>, "Boqun Feng" <boqun.feng@...il.com>,
 <linux-kernel@...r.kernel.org>, <rust-for-linux@...r.kernel.org>,
 <ojeda@...nel.org>, <alex.gaynor@...il.com>, <gary@...yguo.net>,
 <a.hindborg@...nel.org>, <aliceryhl@...gle.com>, <tmgross@...ch.edu>,
 <dakr@...nel.org>, <peterz@...radead.org>, <mingo@...hat.com>,
 <will@...nel.org>, <longman@...hat.com>, <felipe_life@...e.com>,
 <daniel@...lak.dev>, <bjorn3_gh@...tonmail.com>
Subject: Re: [PATCH v5 2/3] implement ww_mutex abstraction for the Rust tree

On Fri Aug 1, 2025 at 11:22 PM CEST, Daniel Almeida wrote:
> Hi Benno,
>
>> On 7 Jul 2025, at 16:48, Benno Lossin <lossin@...nel.org> wrote:
>> 
>> On Mon Jul 7, 2025 at 8:06 PM CEST, Onur wrote:
>>> On Mon, 07 Jul 2025 17:31:10 +0200
>>> "Benno Lossin" <lossin@...nel.org> wrote:
>>> 
>>>> On Mon Jul 7, 2025 at 3:39 PM CEST, Onur wrote:
>>>>> On Mon, 23 Jun 2025 17:14:37 +0200
>>>>> "Benno Lossin" <lossin@...nel.org> wrote:
>>>>> 
>>>>>>> We also need to take into consideration that the user may want to
>>>>>>> drop any lock in the sequence, e.g. the user acquires a, b and
>>>>>>> c, then drops b, and then acquires d. Which I think is
>>>>>>> possible for ww_mutex.
>>>>>> 
>>>>>> Hmm what about adding this to the above idea?:
>>>>>> 
>>>>>>    impl<'a, Locks> WwActiveCtx<'a, Locks>
>>>>>>    where
>>>>>>        Locks: Tuple
>>>>>>    {
>>>>>>        fn custom<L2>(self, action: impl FnOnce(Locks) -> L2)
>>>>>>            -> WwActiveCtx<'a, L2>;
>>>>>>    }
>>>>>> 
>>>>>> Then you can do:
>>>>>> 
>>>>>>    let (a, c, d) = ctx.begin()
>>>>>>        .lock(a)
>>>>>>        .lock(b)
>>>>>>        .lock(c)
>>>>>>        .custom(|(a, _, c)| (a, c))
>>>>>>        .lock(d)
>>>>>>        .finish();
>>>>> 
>>>>> 
>>>>> Instead of `begin` and `custom`, why not something like this:
>>>>> 
>>>>> let (a, c, d) = ctx.init()
>>>>>     .lock(a)
>>>>>     .lock(b)
>>>>>     .lock(c)
>>>>>     .unlock(b)
>>>>>     .lock(d)
>>>>>     .finish();
>>>>> 
>>>>> Instead of `begin`, `init` would be a better name to mirror `fini`
>>>>> on the C side, and `unlock` instead of `custom` would make the
>>>>> intent clearer when dropping locks mid-chain.
>> 
>> Also, I'm not really fond of the name `init`, how about `enter`?
>> 
>>>> 
>>>> I don't think that this `unlock` operation will work. How would you
>>>> implement it?
>>> 
>>> 
>>> We could link mutexes to their locks using some unique value, so that we
>>> can look up a lock by passing its mutex (though that sounds a bit odd).
>>> 
>>> Another option would be to unlock by the index, e.g.,:
>>> 
>>> let (a, c) = ctx.init()
>>>     .lock(a)
>>>     .lock(b)
>>>     .unlock::<1>()
>
> Why do we need this random unlock() here? We usually want to lock everything
> and proceed, or otherwise back off completely so that someone else can proceed.

No, the `unlock` was just there to show that we could interleave locking
and unlocking.
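
Roughly, the shape I have in mind is something like the toy model below
(plain Rust rather than kernel code, just to illustrate the mechanics:
the guard types, error handling and the actual ww_mutex calls are all
left out, and the real thing would use a `Tuple` trait to keep the
tuples flat instead of nesting pairs):

    struct WwActiveCtx<Locks> {
        locks: Locks,
    }

    impl WwActiveCtx<()> {
        fn begin() -> Self {
            WwActiveCtx { locks: () }
        }
    }

    impl<Locks> WwActiveCtx<Locks> {
        // Each `lock` extends the tuple of held locks (and thus the type).
        fn lock<L>(self, lock: L) -> WwActiveCtx<(Locks, L)> {
            WwActiveCtx { locks: (self.locks, lock) }
        }

        // `custom` maps the currently held locks to a new tuple, which is
        // how a lock gets dropped in the middle of the chain.
        fn custom<L2>(self, action: impl FnOnce(Locks) -> L2) -> WwActiveCtx<L2> {
            WwActiveCtx { locks: action(self.locks) }
        }

        fn finish(self) -> Locks {
            self.locks
        }
    }

    fn example() -> (&'static str, &'static str, &'static str) {
        WwActiveCtx::begin()
            .lock("a")
            .lock("b")
            .lock("c")
            // "b" is dropped here simply by not returning it from `custom`.
            .custom(|(((u, a), _b), c)| ((u, a), c))
            .lock("d")
            .custom(|(((_, a), c), d)| (a, c, d))
            .finish()
    }

    fn main() {
        let (a, c, d) = example();
        assert_eq!((a, c, d), ("a", "c", "d"));
    }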

> One thing I didn’t understand about your approach: is it amenable to loops?
> I.e., are things like drm_exec() implementable?

I don't think so, see also my reply here:

    https://lore.kernel.org/all/DBOPIJHY9NZ7.2CU5XP7UY7ES3@kernel.org

The type-based approach with tuples doesn't handle a dynamic number of
locks.
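
To make that concrete with the toy model above: every `lock` call
changes the type of the builder, so the number of locks is fixed at
compile time, which is exactly what a drm_exec-style loop doesn't have:

    fn static_only() {
        let ctx = WwActiveCtx::begin(); // WwActiveCtx<()>
        let ctx = ctx.lock("a");        // WwActiveCtx<((), &str)>
        let _ctx = ctx.lock("b");       // WwActiveCtx<(((), &str), &str)>
        // There is no way to write `for m in mutexes { ctx = ctx.lock(m); }`,
        // since each iteration would need `ctx` to have a different type.
    }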

> /**
>  * drm_exec_until_all_locked - loop until all GEM objects are locked
>  * @exec: drm_exec object
>  *
>  * Core functionality of the drm_exec object. Loops until all GEM objects are
>  * locked and no more contention exists. At the beginning of the loop it is
>  * guaranteed that no GEM object is locked.
>  *
>  * Since labels can't be defined local to the loops body we use a jump pointer
>  * to make sure that the retry is only used from within the loops body.
>  */
> #define drm_exec_until_all_locked(exec)					\
> __PASTE(__drm_exec_, __LINE__):						\
> 	for (void *__drm_exec_retry_ptr; ({				\
> 		__drm_exec_retry_ptr = &&__PASTE(__drm_exec_, __LINE__);\
> 		(void)__drm_exec_retry_ptr;				\
> 		drm_exec_cleanup(exec);					\
> 	});)

My understanding of C preprocessor macros is not good enough to parse or
understand this :( What is that `__PASTE` thing?

> In fact, perhaps we can copy drm_exec, basically? i.e.:
>
> /**
>  * struct drm_exec - Execution context
>  */
> struct drm_exec {
> 	/**
> 	 * @flags: Flags to control locking behavior
> 	 */
> 	u32                     flags;
>
> 	/**
> 	 * @ticket: WW ticket used for acquiring locks
> 	 */
> 	struct ww_acquire_ctx	ticket;
>
> 	/**
> 	 * @num_objects: number of objects locked
> 	 */
> 	unsigned int		num_objects;
>
> 	/**
> 	 * @max_objects: maximum objects in array
> 	 */
> 	unsigned int		max_objects;
>
> 	/**
> 	 * @objects: array of the locked objects
> 	 */
> 	struct drm_gem_object	**objects;
>
> 	/**
> 	 * @contended: contended GEM object we backed off for
> 	 */
> 	struct drm_gem_object	*contended;
>
> 	/**
> 	 * @prelocked: already locked GEM object due to contention
> 	 */
> 	struct drm_gem_object *prelocked;
> };
>
> This is GEM-specific, but we could perhaps implement the same idea by
> tracking ww_mutexes instead of GEM objects.

But this would only work for `Vec<WwMutex<T>>`, right?
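
I.e. something along these lines? (Again only a standalone sketch, this
time with std types, just to show the shape: the real version would
hold ww_mutex guards tied to a `ww_acquire_ctx` and would back off on
EDEADLK instead of on a failed `try_lock`, and all the names here are
made up.)

    use std::sync::{Mutex, MutexGuard};

    // Note that all mutexes have to guard the same `T` here, which is the
    // `Vec<WwMutex<T>>` limitation I mean above.
    struct Exec<'a, T> {
        locked: Vec<MutexGuard<'a, T>>,
    }

    impl<'a, T> Exec<'a, T> {
        fn new() -> Self {
            Exec { locked: Vec::new() }
        }

        // Keep going until every mutex is held; on contention, drop all the
        // guards and start over, like drm_exec_until_all_locked() does.
        fn lock_all(&mut self, mutexes: &'a [Mutex<T>]) {
            'retry: loop {
                for m in mutexes {
                    match m.try_lock() {
                        Ok(guard) => self.locked.push(guard),
                        Err(_) => {
                            // Back off completely so someone else can proceed.
                            // (ww_mutex's wound/wait protocol guarantees forward
                            // progress; this toy loop does not.)
                            self.locked.clear();
                            continue 'retry;
                        }
                    }
                }
                return;
            }
        }
    }

    fn main() {
        let mutexes = vec![Mutex::new(0u32), Mutex::new(1), Mutex::new(2)];
        let mut exec = Exec::new();
        exec.lock_all(&mutexes);
        assert_eq!(exec.locked.len(), 3);
    }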

> Also, I’d appreciate it if the rollback logic could be automated, which is
> what you’re trying to do, so +1 to your idea.

Good to see that it seems useful to you :)

---
Cheers,
Benno
