Date:	Fri, 30 Jan 2015 17:51:38 -0800
From:	Tim Chen <tim.c.chen@...ux.intel.com>
To:	Davidlohr Bueso <dave@...olabs.net>
Cc:	Peter Zijlstra <peterz@...radead.org>,
	Ingo Molnar <mingo@...nel.org>,
	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
	Jason Low <jason.low2@...com>,
	Michel Lespinasse <walken@...gle.com>,
	linux-kernel@...r.kernel.org, Davidlohr Bueso <dbueso@...e.de>
Subject: Re: [PATCH 4/5] locking/rwsem: Avoid deceiving lock spinners

On Fri, 2015-01-30 at 01:14 -0800, Davidlohr Bueso wrote:
> When readers hold the semaphore, ->owner is NULL. As such,
> and unlike mutexes, '!owner' does not necessarily imply that
> the lock is free. This can cause writers to spin excessively,
> as they are misled into thinking they have a chance of
> acquiring the lock, when they should instead block.
> 
> This patch therefore enhances the counter check when the owner
> is not set by the time we've broken out of the loop: if a new
> owner has taken the lock, return true so that we continue
> spinning. While at it, make rwsem_spin_on_owner() less
> ambiguous and return right away under need_resched conditions.
> 
> Signed-off-by: Davidlohr Bueso <dbueso@...e.de>
> ---
>  kernel/locking/rwsem-xadd.c | 21 +++++++++++++++------
>  1 file changed, 15 insertions(+), 6 deletions(-)
> 
> diff --git a/kernel/locking/rwsem-xadd.c b/kernel/locking/rwsem-xadd.c
> index 07713e5..1c0d11e 100644
> --- a/kernel/locking/rwsem-xadd.c
> +++ b/kernel/locking/rwsem-xadd.c
> @@ -337,21 +337,30 @@ static inline bool owner_running(struct rw_semaphore *sem,
>  static noinline
>  bool rwsem_spin_on_owner(struct rw_semaphore *sem, struct task_struct *owner)
>  {
> +	long count;
> +
>  	rcu_read_lock();
>  	while (owner_running(sem, owner)) {
> -		if (need_resched())
> -			break;
> +		/* abort spinning when need_resched */
> +		if (need_resched()) {
> +			rcu_read_unlock();
> +			return false;
> +		}
>  
>  		cpu_relax_lowlatency();
>  	}
>  	rcu_read_unlock();
>  
> +	if (READ_ONCE(sem->owner))
> +		return true; /* new owner, continue spinning */
> +

Do you have comparison data on whether it is more advantageous
to continue spinning when the owner changes?  After the above change,
rwsem will behave more like a spin lock for the write lock and
will keep spinning when the lock changes ownership.  During heavy
lock contention, if we don't continue spinning but sleep instead,
those clock cycles can be used to actually run other threads.  That
was the assumption in the older code.  The trade-off may or may not
be worth it, depending on how big the thread-switching overhead is
and how long the lock is held.

It would be good to have a few data points to make sure
that this change is beneficial.

>  	/*
> -	 * We break out the loop above on need_resched() or when the
> -	 * owner changed, which is a sign for heavy contention. Return
> -	 * success only when sem->owner is NULL.
> +	 * When the owner is not set, the lock could be free or
> +	 * held by readers. Check the counter to verify the
> +	 * state.
>  	 */
> -	return sem->owner == NULL;
> +	count = READ_ONCE(sem->count);
> +	return (count == 0 || count == RWSEM_WAITING_BIAS);
>  }
>  
>  static bool rwsem_optimistic_spin(struct rw_semaphore *sem)

Thanks.

Tim
