Message-ID: <1406733032.3544.2.camel@j-VirtualBox>
Date: Wed, 30 Jul 2014 08:10:32 -0700
From: Jason Low <jason.low2@...com>
To: Davidlohr Bueso <davidlohr@...com>
Cc: peterz@...radead.org, mingo@...nel.org, aswin@...com,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH -tip/master 2/7] locking/mutex: Document quick lock
release when unlocking
On Sun, 2014-07-27 at 22:18 -0700, Davidlohr Bueso wrote:
> When unlocking, we always want to reach the slowpath with the lock's counter
> indicating it is unlocked, either as returned by the asm fastpath call or by
> explicitly setting it. While doing so, at least in theory, we can optimize
> and allow faster lock stealing.
>
> This is not immediately obvious and deserves to be documented.
>
> Signed-off-by: Davidlohr Bueso <davidlohr@...com>
> ---
> kernel/locking/mutex.c | 14 +++++++++++---
> 1 file changed, 11 insertions(+), 3 deletions(-)
>
> diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
> index ad0e333..7a9be39 100644
> --- a/kernel/locking/mutex.c
> +++ b/kernel/locking/mutex.c
> @@ -676,7 +676,8 @@ EXPORT_SYMBOL_GPL(__ww_mutex_lock_interruptible);
> #endif
>
> /*
> - * Release the lock, slowpath:
> + * Release the lock, slowpath.
> + * At this point, the lock counter is 0 or negative.
Hmm, so in the !__mutex_slowpath_needs_to_unlock() case, we could enter
this function with the lock count == 1, right?
> */
> static inline void
> __mutex_unlock_common_slowpath(struct mutex *lock, int nested)
> @@ -684,9 +685,16 @@ __mutex_unlock_common_slowpath(struct mutex *lock, int nested)
> unsigned long flags;
>
> /*
> - * some architectures leave the lock unlocked in the fastpath failure
> +	 * As a performance measure, release the lock before doing the
> +	 * wakeup-related duties that follow. This allows other tasks to
> +	 * acquire the lock sooner, while this path still handles the
> +	 * remaining cleanup. This can be done because we do not enforce
> +	 * strict equivalence between the mutex counter and the wait_list.
> +	 *
> + * Some architectures leave the lock unlocked in the fastpath failure
> 	 * case, others need to leave it locked. In the latter case we have to
> -	 * unlock it here
> +	 * unlock it here.
> */
> if (__mutex_slowpath_needs_to_unlock())
> atomic_set(&lock->count, 1);
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/