Date:   Thu, 13 Oct 2016 16:28:01 +0100
From:   Will Deacon <will.deacon@....com>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     Linus Torvalds <torvalds@...ux-foundation.org>,
        Waiman Long <waiman.long@....com>,
        Jason Low <jason.low2@....com>,
        Ding Tianhong <dingtianhong@...wei.com>,
        Thomas Gleixner <tglx@...utronix.de>,
        Ingo Molnar <mingo@...hat.com>,
        Imre Deak <imre.deak@...el.com>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        Davidlohr Bueso <dave@...olabs.net>,
        Tim Chen <tim.c.chen@...ux.intel.com>,
        Terry Rudd <terry.rudd@....com>,
        "Paul E. McKenney" <paulmck@...ibm.com>,
        Jason Low <jason.low2@...com>,
        Chris Wilson <chris@...is-wilson.co.uk>,
        Daniel Vetter <daniel.vetter@...ll.ch>
Subject: Re: [PATCH -v4 8/8] locking/mutex: Enable optimistic spinning of
 woken waiter

On Fri, Oct 07, 2016 at 04:52:51PM +0200, Peter Zijlstra wrote:
> From: Waiman Long <Waiman.Long@....com>
> 
> This patch makes the waiter that sets the HANDOFF flag start spinning,
> instead of sleeping, until the handoff is complete or the owner sleeps.
> Without this, the handoff would cause the optimistic spinners to abort
> spinning, since the handed-off owner may not be running.
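
[ For background: the optimistic spinners bail out as soon as the lock
  owner is not actually running on a CPU, which is why handing the lock
  to a still-sleeping waiter would abort everybody's spin. A condensed,
  paraphrased sketch of mutex_spin_on_owner() -- not the exact source --
  showing that bail-out: ]

	static noinline
	bool mutex_spin_on_owner(struct mutex *lock, struct task_struct *owner)
	{
		bool ret = true;

		rcu_read_lock();
		while (__mutex_owner(lock) == owner) {
			/* Re-read the owner/lock state on every pass. */
			barrier();

			/*
			 * Quit spinning once the owner is scheduled out,
			 * or once we ought to reschedule ourselves;
			 * spinning on a sleeping owner only burns cycles.
			 */
			if (!owner->on_cpu || need_resched()) {
				ret = false;
				break;
			}

			cpu_relax();
		}
		rcu_read_unlock();

		return ret;
	}
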
> 
> Cc: Ingo Molnar <mingo@...hat.com>
> Cc: Linus Torvalds <torvalds@...ux-foundation.org>
> Cc: Tim Chen <tim.c.chen@...ux.intel.com>
> Cc: Thomas Gleixner <tglx@...utronix.de>
> Cc: Imre Deak <imre.deak@...el.com>
> Cc: Jason Low <jason.low2@....com>
> Cc: "Paul E. McKenney" <paulmck@...ibm.com>
> Cc: Ding Tianhong <dingtianhong@...wei.com>
> Cc: Davidlohr Bueso <dave@...olabs.net>
> Cc: Will Deacon <Will.Deacon@....com>
> Signed-off-by: Waiman Long <Waiman.Long@....com>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
> ---
>  kernel/locking/mutex.c |   77 ++++++++++++++++++++++++++++++++++---------------
>  1 file changed, 54 insertions(+), 23 deletions(-)
> 
> --- a/kernel/locking/mutex.c
> +++ b/kernel/locking/mutex.c
> @@ -416,24 +416,39 @@ static inline int mutex_can_spin_on_owne
>   *
>   * Returns true when the lock was taken, otherwise false, indicating
>   * that we need to jump to the slowpath and sleep.
> + *
> + * The waiter flag is set to true if the spinner is a waiter in the wait
> + * queue. The waiter-spinner will spin on the lock directly and concurrently
> + * with the spinner at the head of the OSQ, if present, until the owner is
> + * changed to itself.
>   */
>  static bool mutex_optimistic_spin(struct mutex *lock,
> -				  struct ww_acquire_ctx *ww_ctx, const bool use_ww_ctx)
> +				  struct ww_acquire_ctx *ww_ctx,
> +				  const bool use_ww_ctx, const bool waiter)
>  {
>  	struct task_struct *task = current;
>  
> -	if (!mutex_can_spin_on_owner(lock))
> -		goto done;
> +	if (!waiter) {
> +		/*
> +		 * The purpose of the mutex_can_spin_on_owner() function is
> +		 * to eliminate the overhead of osq_lock() and osq_unlock()
> +		 * in case spinning isn't possible. As a waiter-spinner
> +		 * is not going to take OSQ lock anyway, there is no need
> +		 * to call mutex_can_spin_on_owner().
> +		 */
> +		if (!mutex_can_spin_on_owner(lock))
> +			goto fail;
>  
> -	/*
> -	 * In order to avoid a stampede of mutex spinners trying to
> -	 * acquire the mutex all at once, the spinners need to take a
> -	 * MCS (queued) lock first before spinning on the owner field.
> -	 */
> -	if (!osq_lock(&lock->osq))
> -		goto done;
> +		/*
> +		 * In order to avoid a stampede of mutex spinners trying to
> +		 * acquire the mutex all at once, the spinners need to take a
> +		 * MCS (queued) lock first before spinning on the owner field.
> +		 */
> +		if (!osq_lock(&lock->osq))
> +			goto fail;
> +	}
>  
> -	while (true) {
> +	for (;;) {
>  		struct task_struct *owner;
>  
>  		if (use_ww_ctx && ww_ctx->acquired > 0) {
> @@ -449,7 +464,7 @@ static bool mutex_optimistic_spin(struct
>  			 * performed the optimistic spinning cannot be done.
>  			 */
>  			if (READ_ONCE(ww->ctx))
> -				break;
> +				goto fail_unlock;
>  		}
>  
>  		/*
> @@ -457,15 +472,20 @@ static bool mutex_optimistic_spin(struct
>  		 * release the lock or go to sleep.
>  		 */
>  		owner = __mutex_owner(lock);
> -		if (owner && !mutex_spin_on_owner(lock, owner))
> -			break;
> +		if (owner) {
> +			if (waiter && owner == task) {
> +				smp_mb(); /* ACQUIRE */

Hmm, is this barrier actually needed? This only happens on the handoff path,
and we take the wait_lock immediately after this succeeds anyway. That
control dependency, coupled with the acquire semantics of the spin_lock,
should be sufficient, no?
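
Concretely, the two orderings compare roughly like this (condensed sketch;
the wait_lock step lives in the caller and is paraphrased here, not quoted
from the patch):

	/* As posted: full barrier on the handoff path. */
	if (waiter && owner == task) {
		smp_mb();	/* ACQUIRE: order against the handoff */
		break;		/* lock was handed to us; stop spinning */
	}

	/* ... back in the caller, immediately afterwards: */
	spin_lock(&lock->wait_lock);	/* an ACQUIRE in its own right */

	/*
	 * Alternative: the owner == task load plus the dependent branch
	 * form a control dependency, and the subsequent spin_lock() is an
	 * ACQUIRE, so accesses made after taking wait_lock should already
	 * be ordered against the handoff without the extra smp_mb().
	 */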

Will
