Date:	Tue, 28 Apr 2015 19:17:34 +0200
From:	Peter Zijlstra <peterz@...radead.org>
To:	Waiman Long <Waiman.Long@...com>
Cc:	Ingo Molnar <mingo@...nel.org>, linux-kernel@...r.kernel.org,
	Jason Low <jason.low2@...com>,
	Davidlohr Bueso <dave@...olabs.net>,
	Scott J Norton <scott.norton@...com>,
	Douglas Hatch <doug.hatch@...com>
Subject: Re: [PATCH v3] locking/rwsem: reduce spinlock contention in wakeup
 after up_read/up_write

On Fri, Apr 24, 2015 at 01:54:29PM -0400, Waiman Long wrote:
> @@ -478,7 +515,40 @@ struct rw_semaphore *rwsem_wake(struct rw_semaphore *sem)
>  {
>  	unsigned long flags;
>  
> -	raw_spin_lock_irqsave(&sem->wait_lock, flags);
> +	/*
> +	 * If a spinner is present, it is not necessary to do the wakeup.
> +	 * Try to do wakeup only if the trylock succeeds to minimize
> +	 * spinlock contention which may introduce too much delay in the
> +	 * unlock operation.
> +	 *
> +	 *    spinning writer		up_write/up_read caller
> +	 *    ---------------		-----------------------
> +	 * [S]   osq_unlock()		[L]   osq
> +	 *	 MB			      MB
> +	 * [RmW] rwsem_try_write_lock() [RmW] spin_trylock(wait_lock)
> +	 *
> +	 * Here, it is important to make sure that there won't be a missed
> +	 * wakeup while the rwsem is free and the only spinning writer goes
> +	 * to sleep without taking the rwsem. In case the spinning writer is
> +	 * just going to break out of the waiting loop, it will still do a
> +	 * trylock in rwsem_down_write_failed() before sleeping. IOW, if
> +	 * rwsem_has_spinner() is true, it will guarantee at least one
> +	 * trylock attempt on the rwsem.
> +	 */
> +	if (!rwsem_has_spinner(sem)) {
> +		raw_spin_lock_irqsave(&sem->wait_lock, flags);
> +	} else {
> +		/*
> +		 * rwsem_has_spinner() is just an atomic read, and
> +		 * spin_trylock does not provide a full memory barrier.
> +		 * Insert a memory barrier here to make sure that wait_lock
> +		 * isn't read until after osq.
> +		 * Note: smp_rmb__after_atomic() should be used if available.
> +		 */
> +		smp_mb__after_atomic();
> +		if (!raw_spin_trylock_irqsave(&sem->wait_lock, flags))
> +			return sem;
> +	}
>  
>  	/* do nothing if list empty */
>  	if (!list_empty(&sem->wait_list))

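As an aside, rwsem_has_spinner() isn't visible in this hunk; presumably
it is just a thin wrapper around osq_is_locked(), something like:

	/*
	 * Sketch, not from the patch: a spinner is present iff the
	 * rwsem's optimistic spin queue (osq) is occupied.
	 */
	static inline bool rwsem_has_spinner(struct rw_semaphore *sem)
	{
		return osq_is_locked(&sem->osq);
	}
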
To me it makes more sense to reverse these two branches (identical
code-wise, of course) and put the special case first.
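
Spelled out, that would be (a sketch; identical code, just with the
spinner case first):

	if (rwsem_has_spinner(sem)) {
		/*
		 * comment as in the patch ...
		 */
		smp_mb__after_atomic();
		if (!raw_spin_trylock_irqsave(&sem->wait_lock, flags))
			return sem;
	} else {
		raw_spin_lock_irqsave(&sem->wait_lock, flags);
	}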

Alternatively, we could do something like the below, which to my eyes
looks a little better still, but I don't care too much.

	if (rwsem_has_spinner(sem)) {
		/*
		 * comment ...
		 */
		smp_rmb();
		if (!raw_spin_trylock_irqsave(&sem->wait_lock, flags))
			return sem;
		goto locked;
	}

	raw_spin_lock_irqsave(&sem->wait_lock, flags);
locked:
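
For completeness, the whole function with that structure would look
roughly like this (a sketch; the wakeup body is assumed unchanged from
the current rwsem_wake(), of which the hunk above only quotes the
start):

struct rw_semaphore *rwsem_wake(struct rw_semaphore *sem)
{
	unsigned long flags;

	if (rwsem_has_spinner(sem)) {
		/*
		 * Pairs with the osq_unlock() in the spinning writer;
		 * see the ordering diagram in the patch comment above.
		 */
		smp_rmb();
		if (!raw_spin_trylock_irqsave(&sem->wait_lock, flags))
			return sem;
		goto locked;
	}

	raw_spin_lock_irqsave(&sem->wait_lock, flags);
locked:
	/* do nothing if list empty */
	if (!list_empty(&sem->wait_list))
		sem = __rwsem_do_wake(sem, RWSEM_WAKE_ANY);

	raw_spin_unlock_irqrestore(&sem->wait_lock, flags);

	return sem;
}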