Message-ID: <1401990873.13877.34.camel@buesod1.americas.hpqcorp.net>
Date:	Thu, 05 Jun 2014 10:54:33 -0700
From:	Davidlohr Bueso <davidlohr@...com>
To:	Peter Zijlstra <peterz@...radead.org>
Cc:	Andev <debiandev@...il.com>, Pranith Kumar <pranith@...ech.edu>,
	LKML <linux-kernel@...r.kernel.org>
Subject: Re: [RFC PATCH 1/1] remove redundant compare, cmpxchg already does it

On Thu, 2014-06-05 at 09:22 +0200, Peter Zijlstra wrote:
> On Wed, Jun 04, 2014 at 04:56:50PM -0400, Andev wrote:
> > On Wed, Jun 4, 2014 at 4:38 PM, Pranith Kumar <pranith@...ech.edu> wrote:
> > > remove a redundant comparison
> > >
> > > Signed-off-by: Pranith Kumar <bobby.prani@...il.com>
> > > ---
> > >  kernel/locking/rwsem-xadd.c | 3 +--
> > >  1 file changed, 1 insertion(+), 2 deletions(-)
> > >
> > > diff --git a/kernel/locking/rwsem-xadd.c b/kernel/locking/rwsem-xadd.c
> > > index 1f99664b..6f8bd3c 100644
> > > --- a/kernel/locking/rwsem-xadd.c
> > > +++ b/kernel/locking/rwsem-xadd.c
> > > @@ -249,8 +249,7 @@ static inline bool rwsem_try_write_lock(long count, struct rw_semaphore *sem)
> > >  {
> > >      if (!(count & RWSEM_ACTIVE_MASK)) {
> > >          /* try acquiring the write lock */
> > > -        if (sem->count == RWSEM_WAITING_BIAS &&
> > > -            cmpxchg(&sem->count, RWSEM_WAITING_BIAS,
> > > +        if (cmpxchg(&sem->count, RWSEM_WAITING_BIAS,
> > >                  RWSEM_ACTIVE_WRITE_BIAS) == RWSEM_WAITING_BIAS) {
> > 
> > This was mainly done to avoid the cost of a cmpxchg in the case where
> > they are not equal. Not sure if it really makes a difference though.
> 
> It does; a cache-hot cmpxchg instruction is 24 cycles (as is pretty much
> any other LOCKed instruction, as measured on my WSM-EP), not to mention
> that cmpxchg is a RMW, so it needs to grab the cacheline in exclusive
> mode.
> 
> A read, which allows the cacheline to remain in the shared state, and
> other non-LOCKed ops are way faster.

Yep, and we also do it in mutexes. The numbers and the benefits on larger
systems speak for themselves. It would, perhaps, be worth adding a
comment, as the check does look redundant if you're not thinking about
the cacheline when reading the code.


