Date:	Wed, 11 May 2016 11:26:02 -0700
From:	Jason Low <jason.low2@...com>
To:	Peter Zijlstra <peterz@...radead.org>
Cc:	Ingo Molnar <mingo@...hat.com>, linux-kernel@...r.kernel.org,
	Davidlohr Bueso <dave@...olabs.net>,
	Scott J Norton <scott.norton@....com>,
	Waiman Long <Waiman.Long@....com>, peter@...leysoftware.com,
	jason.low2@....com
Subject: Re: [PATCH] locking/rwsem: Optimize write lock slowpath

On Wed, 2016-05-11 at 13:49 +0200, Peter Zijlstra wrote:
> On Mon, May 09, 2016 at 12:16:37PM -0700, Jason Low wrote:
> > When acquiring the rwsem write lock in the slowpath, we first try
> > to cmpxchg count from RWSEM_WAITING_BIAS to RWSEM_ACTIVE_WRITE_BIAS.
> > When that succeeds, we then atomically add RWSEM_WAITING_BIAS back
> > in cases where there are other tasks still on the wait list. This
> > causes write lock operations to often issue multiple atomic
> > operations.
> > 
> > We can instead check list_is_singular() first and then set count
> > accordingly, so that we issue at most one atomic operation when
> > acquiring the write lock and reduce unnecessary cacheline
> > contention.
> > 
> > Signed-off-by: Jason Low <jason.low2@...com>
> > ---
> >  kernel/locking/rwsem-xadd.c | 20 +++++++++++++-------
> >  1 file changed, 13 insertions(+), 7 deletions(-)
> > 
> > diff --git a/kernel/locking/rwsem-xadd.c b/kernel/locking/rwsem-xadd.c
> > index df4dcb8..23c33e6 100644
> > --- a/kernel/locking/rwsem-xadd.c
> > +++ b/kernel/locking/rwsem-xadd.c
> > @@ -258,14 +258,20 @@ EXPORT_SYMBOL(rwsem_down_read_failed);
> >  static inline bool rwsem_try_write_lock(long count, struct rw_semaphore *sem)
> >  {
> >  	/*
> > +	 * Avoid trying to acquire write lock if count isn't RWSEM_WAITING_BIAS.
> >  	 */
> > +	if (count != RWSEM_WAITING_BIAS)
> > +		return false;
> > +
> > +	/*
> > +	 * Acquire the lock by trying to set it to ACTIVE_WRITE_BIAS. If there
> > +	 * are other tasks on the wait list, we need to add on WAITING_BIAS.
> > +	 */
> > +	count = list_is_singular(&sem->wait_list) ?
> > +			RWSEM_ACTIVE_WRITE_BIAS :
> > +			RWSEM_ACTIVE_WRITE_BIAS + RWSEM_WAITING_BIAS;
> > +
> > +	if (cmpxchg_acquire(&sem->count, RWSEM_WAITING_BIAS, count) == RWSEM_WAITING_BIAS) {
> >  		rwsem_set_owner(sem);
> >  		return true;
> >  	}
> 
> Right; so that whole thing works because we're holding sem->wait_lock.
> Should we clarify that someplace?

Yup, we can mention that the rwsem_try_write_lock() function must be
called with the wait_lock held.
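
Something along these lines, for instance (just a sketch; the
lockdep assertion would also enforce the requirement at runtime when
lockdep is enabled, though a comment alone would do):

	/*
	 * Must be called with sem->wait_lock held, since it relies on
	 * the wait list not changing underneath it.
	 */
	static inline bool rwsem_try_write_lock(long count,
						struct rw_semaphore *sem)
	{
		lockdep_assert_held(&sem->wait_lock);
		/* ... as above ... */
	}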

> Also; should we not make rw_semaphore::count an atomic_long_t and kill
> rwsem_atomic_{update,add}() ?

Right, it's better to just make the variable an atomic and remove the
unnecessary rwsem_atomic_update() "abstraction". I'll send out a
separate patch for this.
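
Roughly, the conversion would look like this (untested sketch; the
actual patch may differ):

	struct rw_semaphore {
		atomic_long_t count;	/* was: long count */
		...
	};

	/* rwsem_try_write_lock() then does: */
	if (atomic_long_cmpxchg_acquire(&sem->count, RWSEM_WAITING_BIAS,
					count) == RWSEM_WAITING_BIAS) {
		rwsem_set_owner(sem);
		return true;
	}

	/* and rwsem_atomic_update(delta, sem) callers become: */
	atomic_long_add_return(delta, &sem->count);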
