Message-ID: <20140605072248.GE3213@twins.programming.kicks-ass.net>
Date: Thu, 5 Jun 2014 09:22:48 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Andev <debiandev@...il.com>
Cc: Pranith Kumar <pranith@...ech.edu>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [RFC PATCH 1/1] remove redundant compare, cmpxchg already does it
On Wed, Jun 04, 2014 at 04:56:50PM -0400, Andev wrote:
> On Wed, Jun 4, 2014 at 4:38 PM, Pranith Kumar <pranith@...ech.edu> wrote:
> > remove a redundant comparison
> >
> > Signed-off-by: Pranith Kumar <bobby.prani@...il.com>
> > ---
> > kernel/locking/rwsem-xadd.c | 3 +--
> > 1 file changed, 1 insertion(+), 2 deletions(-)
> >
> > diff --git a/kernel/locking/rwsem-xadd.c b/kernel/locking/rwsem-xadd.c
> > index 1f99664b..6f8bd3c 100644
> > --- a/kernel/locking/rwsem-xadd.c
> > +++ b/kernel/locking/rwsem-xadd.c
> > @@ -249,8 +249,7 @@ static inline bool rwsem_try_write_lock(long count, struct rw_semaphore *sem)
> > {
> > if (!(count & RWSEM_ACTIVE_MASK)) {
> > /* try acquiring the write lock */
> > - if (sem->count == RWSEM_WAITING_BIAS &&
> > - cmpxchg(&sem->count, RWSEM_WAITING_BIAS,
> > + if (cmpxchg(&sem->count, RWSEM_WAITING_BIAS,
> > RWSEM_ACTIVE_WRITE_BIAS) == RWSEM_WAITING_BIAS) {
>
> This was mainly done to avoid the cost of a cmpxchg in the case where
> the values are not equal. Not sure if it really makes a difference,
> though.
It does: a cache-hot cmpxchg instruction is 24 cycles (as is pretty much
any other LOCKed instruction, as measured on my WSM-EP). Not to mention
that cmpxchg is a RMW, so it needs to grab the cacheline in exclusive
mode. A plain read allows the cacheline to remain in shared state, and
non-LOCKed ops are way faster.
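
For what it's worth, a minimal user-space sketch of that
check-before-cmpxchg pattern using C11 atomics; the names here
(try_write_lock, WAITING_BIAS, WRITE_BIAS) are made up for illustration
and this is not the actual rwsem code:

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define WAITING_BIAS	1L
#define WRITE_BIAS	2L

static bool try_write_lock(atomic_long *count)
{
	long expected = WAITING_BIAS;

	/*
	 * Plain load first: if the value cannot possibly match, skip the
	 * LOCKed cmpxchg entirely and let the cacheline stay in shared
	 * state.
	 */
	if (atomic_load_explicit(count, memory_order_relaxed) != WAITING_BIAS)
		return false;

	/* Only now pay for the atomic read-modify-write. */
	return atomic_compare_exchange_strong(count, &expected, WRITE_BIAS);
}

int main(void)
{
	atomic_long count = WAITING_BIAS;

	printf("acquired: %d\n", try_write_lock(&count));
	return 0;
}

The early-return path costs only a read on a possibly shared cacheline;
the cmpxchg is attempted only when the compare has a chance of
succeeding.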