Date:   Tue, 6 Dec 2016 17:55:44 +0100
From:   Peter Zijlstra <peterz@...radead.org>
To:     Nicolai Hähnle <nhaehnle@...il.com>
Cc:     linux-kernel@...r.kernel.org,
        Nicolai Hähnle <Nicolai.Haehnle@....com>,
        Ingo Molnar <mingo@...hat.com>,
        Maarten Lankhorst <dev@...ankhorst.nl>,
        Daniel Vetter <daniel@...ll.ch>,
        Chris Wilson <chris@...is-wilson.co.uk>,
        dri-devel@...ts.freedesktop.org
Subject: Re: [PATCH v2 05/11] locking/ww_mutex: Add waiters in stamp order

On Thu, Dec 01, 2016 at 03:06:48PM +0100, Nicolai Hähnle wrote:
> @@ -693,8 +748,12 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
>  		 * mutex_unlock() handing the lock off to us, do a trylock
>  		 * before testing the error conditions to make sure we pick up
>  		 * the handoff.
> +		 *
> +		 * For w/w locks, we always need to do this even if we're not
> +		 * currently the first waiter, because we may have been the
> +		 * first waiter during the unlock.
>  		 */
> -		if (__mutex_trylock(lock, first))
> +		if (__mutex_trylock(lock, use_ww_ctx || first))
>  			goto acquired;

So I'm somewhat uncomfortable with this. The point is that with the
handoff logic it is very easy to accidentally allow:

	mutex_lock(&a);
	mutex_lock(&a);

And I'm not sure this change doesn't make exactly that happen for
ww_mutexes: we get to this __mutex_trylock() without first having
blocked, so we may take the handoff path even though we were never
actually handed the lock.
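
To make the worry concrete, here is a userspace model of it (just a
sketch, not the kernel code; the owner word and the handoff rule are
boiled down to the one case that matters here):

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct task { const char *name; };

struct fake_mutex {
	_Atomic(struct task *) owner;	/* NULL == unlocked */
};

static bool fake_trylock(struct fake_mutex *lock, struct task *self, bool handoff)
{
	struct task *expected = NULL;

	/* Uncontended acquire: NULL -> self. */
	if (atomic_compare_exchange_strong(&lock->owner, &expected, self))
		return true;

	/*
	 * Handoff case: unlock hands the lock off by writing the first
	 * waiter's task into ->owner, so that waiter's trylock finds
	 * itself as owner.  The trouble: any caller that passes
	 * handoff=true and already owns the lock also "finds itself as
	 * owner".
	 */
	if (handoff && expected == self)
		return true;

	return false;
}

int main(void)
{
	struct task a = { .name = "A" };
	struct fake_mutex m = { .owner = NULL };

	printf("first lock:  %d\n", fake_trylock(&m, &a, true));	/* 1 */

	/*
	 * Recursive acquire: with handoff unconditionally true, as
	 * "use_ww_ctx || first" makes it for w/w locks before ever
	 * blocking, this also reports success.
	 */
	printf("second lock: %d\n", fake_trylock(&m, &a, true));	/* 1 */
	return 0;
}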


>  		/*
> @@ -716,7 +775,20 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
>  		spin_unlock_mutex(&lock->wait_lock, flags);
>  		schedule_preempt_disabled();
>  
> -		if (!first && __mutex_waiter_is_first(lock, &waiter)) {
> +		if (use_ww_ctx && ww_ctx) {
> +			/*
> +			 * Always re-check whether we're in first position. We
> +			 * don't want to spin if another task with a lower
> +			 * stamp has taken our position.
> +			 *
> +			 * We also may have to set the handoff flag again, if
> +			 * our position at the head was temporarily taken away.
> +			 */
> +			first = __mutex_waiter_is_first(lock, &waiter);
> +
> +			if (first)
> +				__mutex_set_flag(lock, MUTEX_FLAG_HANDOFF);
> +		} else if (!first && __mutex_waiter_is_first(lock, &waiter)) {
>  			first = true;
>  			__mutex_set_flag(lock, MUTEX_FLAG_HANDOFF);
>  		}

So the point is that !ww_ctx entries are 'skipped' during the insertion
and therefore, if one becomes first, it must stay first?
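
That is, something along these lines (userspace sketch of my reading,
not the actual list insertion; an array stands in for the wait list and
stamp 0 stands in for a waiter without a context):

#include <stdio.h>
#include <string.h>

#define NO_CTX 0UL	/* waiter without a ww_ctx */

static unsigned long waiters[16];
static int nr_waiters;

/*
 * Insert before the first ww waiter with a newer (larger) stamp,
 * scanning from the tail and skipping !ww_ctx waiters.
 */
static void add_waiter(unsigned long stamp)
{
	int pos = nr_waiters;	/* default: append at the tail */

	if (stamp != NO_CTX) {
		for (int i = nr_waiters - 1; i >= 0; i--) {
			if (waiters[i] == NO_CTX)
				continue;	/* skipped, never displaced */
			if (stamp > waiters[i])
				break;		/* we are newer, stay behind */
			pos = i;		/* we are older, move ahead */
		}
	}

	memmove(&waiters[pos + 1], &waiters[pos],
		(nr_waiters - pos) * sizeof(waiters[0]));
	waiters[pos] = stamp;
	nr_waiters++;
}

int main(void)
{
	add_waiter(NO_CTX);	/* plain mutex_lock() waiter, ends up first */
	add_waiter(30);
	add_waiter(10);		/* older stamp, jumps the ww waiter ... */

	for (int i = 0; i < nr_waiters; i++)	/* ... but not the !ww_ctx one */
		printf("%lu ", waiters[i]);
	printf("\n");		/* prints: 0 10 30 */
	return 0;
}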

> @@ -728,7 +800,7 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
>  		 * or we must see its unlock and acquire.
>  		 */
>  		if ((first && mutex_optimistic_spin(lock, ww_ctx, use_ww_ctx, true)) ||
> -		     __mutex_trylock(lock, first))
> +		     __mutex_trylock(lock, use_ww_ctx || first))
>  			break;
>  
>  		spin_lock_mutex(&lock->wait_lock, flags);
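
As an aside, for anyone following along: the stamps being ordered on
here come from the acquire context.  The usual w/w locking pattern
looks roughly like this (kernel-style sketch, not part of this series;
error paths trimmed, and assume obj_a/obj_b have been ww_mutex_init()'ed
against example_ww_class):

#include <linux/ww_mutex.h>

static DEFINE_WW_CLASS(example_ww_class);
static struct ww_mutex obj_a, obj_b;

static int lock_both(void)
{
	struct ww_acquire_ctx ctx;
	int ret;

	ww_acquire_init(&ctx, &example_ww_class);	/* the stamp is assigned here */
retry:
	ret = ww_mutex_lock(&obj_a, &ctx);	/* first lock under a ctx: no -EDEADLK */

	ret = ww_mutex_lock(&obj_b, &ctx);
	if (ret == -EDEADLK) {
		/*
		 * An older context (lower stamp) wins obj_b: drop what we
		 * hold, sleep until obj_b would be ours, and start over.
		 * (Simplest, not the most efficient, form of the backoff.)
		 */
		ww_mutex_unlock(&obj_a);
		ww_mutex_lock_slow(&obj_b, &ctx);
		ww_mutex_unlock(&obj_b);
		goto retry;
	}

	ww_acquire_done(&ctx);
	/* ... both objects are held ... */
	ww_mutex_unlock(&obj_b);
	ww_mutex_unlock(&obj_a);
	ww_acquire_fini(&ctx);
	return 0;
}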

