Date:   Mon, 17 Oct 2016 14:45:50 -0400
From:   Waiman Long <waiman.long@....com>
To:     Peter Zijlstra <peterz@...radead.org>
CC:     Linus Torvalds <torvalds@...ux-foundation.org>,
        Jason Low <jason.low2@....com>,
        Ding Tianhong <dingtianhong@...wei.com>,
        Thomas Gleixner <tglx@...utronix.de>,
        Will Deacon <Will.Deacon@....com>,
        Ingo Molnar <mingo@...hat.com>,
        Imre Deak <imre.deak@...el.com>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        Davidlohr Bueso <dave@...olabs.net>,
        Tim Chen <tim.c.chen@...ux.intel.com>,
        Terry Rudd <terry.rudd@....com>,
        "Paul E. McKenney" <paulmck@...ibm.com>,
        Jason Low <jason.low2@...com>,
        Chris Wilson <chris@...is-wilson.co.uk>,
        Daniel Vetter <daniel.vetter@...ll.ch>
Subject: Re: [PATCH -v4 5/8] locking/mutex: Add lock handoff to avoid starvation

On 10/07/2016 10:52 AM, Peter Zijlstra wrote:
>   /*
>    * Actual trylock that will work on any unlocked state.
> + *
> + * When setting the owner field, we must preserve the low flag bits.
> + *
> + * Be careful with @handoff, only set that in a wait-loop (where you set
> + * HANDOFF) to avoid recursive lock attempts.
>    */
> -static inline bool __mutex_trylock(struct mutex *lock)
> +static inline bool __mutex_trylock(struct mutex *lock, const bool handoff)
>   {
>   	unsigned long owner, curr = (unsigned long)current;
>
>   	owner = atomic_long_read(&lock->owner);
>   	for (;;) { /* must loop, can race against a flag */
> -		unsigned long old;
> +		unsigned long old, flags = __owner_flags(owner);
> +
> +		if (__owner_task(owner)) {
> +			if (handoff && unlikely(__owner_task(owner) == current)) {
> +				/*
> +				 * Provide ACQUIRE semantics for the lock-handoff.
> +				 *
> +				 * We cannot easily use load-acquire here, since
> +				 * the actual load is a failed cmpxchg, which
> +				 * doesn't imply any barriers.
> +				 *
> +				 * Also, this is a fairly unlikely scenario, and
> +				 * this contains the cost.
> +				 */

I am not so sure about your comment here. I guess you are referring to
the atomic_long_cmpxchg_acquire() below as the failed cmpxchg. However,
this path can also be reached on the first iteration of the loop,
before any cmpxchg has been attempted. Maybe we could re-read the owner
with a load-acquire to satisfy the ACQUIRE requirement without a full
smp_mb().
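
Something like the following (just an untested sketch, assuming
atomic_long_read_acquire() can be used on lock->owner here) is what I
had in mind:

		if (handoff && unlikely(__owner_task(owner) == current)) {
			/*
			 * The lock was handed off to us; re-read the owner
			 * with ACQUIRE semantics so our critical section is
			 * ordered after the handoff store, without needing
			 * a full smp_mb(). Only the ordering matters, the
			 * value itself is discarded.
			 */
			(void)atomic_long_read_acquire(&lock->owner);
			return true;
		}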

> +				smp_mb(); /* ACQUIRE */
> +				return true;
> +			}
>
> -		if (__owner_task(owner))
>   			return false;
> +		}
>
> -		old = atomic_long_cmpxchg_acquire(&lock->owner, owner,
> -						  curr | __owner_flags(owner));
> +		/*
> +		 * We set the HANDOFF bit, we must make sure it doesn't live
> +		 * past the point where we acquire it. This would be possible
> +		 * if we (accidentally) set the bit on an unlocked mutex.
> +		 */
> +		if (handoff)
> +			flags &= ~MUTEX_FLAG_HANDOFF;
> +
> +		old = atomic_long_cmpxchg_acquire(&lock->owner, owner, curr | flags);
>   		if (old == owner)
>   			return true;
>
>

Other than that, the code is fine.

Reviewed-by: Waiman Long <Waiman.Long@....com>
