Date:	Thu, 5 Jun 2014 23:09:29 -0400
From:	Pranith Kumar <pranith@...ech.edu>
To:	Peter Zijlstra <peterz@...radead.org>
Cc:	Andev <debiandev@...il.com>, LKML <linux-kernel@...r.kernel.org>,
	davidlohr@...com, jason.low2@...com
Subject: Re: [RFC PATCH 1/1] remove redundant compare, cmpxchg already does it

On Thu, Jun 5, 2014 at 3:22 AM, Peter Zijlstra <peterz@...radead.org> wrote:
> On Wed, Jun 04, 2014 at 04:56:50PM -0400, Andev wrote:
>> On Wed, Jun 4, 2014 at 4:38 PM, Pranith Kumar <pranith@...ech.edu> wrote:
>> > remove a redundant comparison
>> >
>> > Signed-off-by: Pranith Kumar <bobby.prani@...il.com>
>> > ---
>> >  kernel/locking/rwsem-xadd.c | 3 +--
>> >  1 file changed, 1 insertion(+), 2 deletions(-)
>> >
>> > diff --git a/kernel/locking/rwsem-xadd.c b/kernel/locking/rwsem-xadd.c
>> > index 1f99664b..6f8bd3c 100644
>> > --- a/kernel/locking/rwsem-xadd.c
>> > +++ b/kernel/locking/rwsem-xadd.c
>> > @@ -249,8 +249,7 @@ static inline bool rwsem_try_write_lock(long count, struct rw_semaphore *sem)
>> >  {
>> >      if (!(count & RWSEM_ACTIVE_MASK)) {
>> >          /* try acquiring the write lock */
>> > -        if (sem->count == RWSEM_WAITING_BIAS &&
>> > -            cmpxchg(&sem->count, RWSEM_WAITING_BIAS,
>> > +        if (cmpxchg(&sem->count, RWSEM_WAITING_BIAS,
>> >                  RWSEM_ACTIVE_WRITE_BIAS) == RWSEM_WAITING_BIAS) {
>>
>> This was mainly done to avoid the cost of a cmpxchg in cases where the
>> values are not equal. Not sure if it really makes a difference, though.
>
> It does: a cache-hot cmpxchg instruction is 24 cycles (as is pretty much
> any other LOCKed instruction, as measured on my WSM-EP), not to mention
> that cmpxchg is a read-modify-write, so it needs to grab the cacheline in
> exclusive mode.
>
> A plain read, which allows the cacheline to remain in the shared state,
> and non-LOCKed ops are way faster.


OK, this means we should use this check-before-cmpxchg pattern more widely
on highly contended paths. As Davidlohr suggested later on, I think it
would be a good idea to document this and add an API for it.

-- 
Pranith
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
