Date: Tue, 16 Sep 2014 13:51:32 -0700
From: Tim Chen <tim.c.chen@...ux.intel.com>
To: Peter Hurley <peter@...leysoftware.com>
Cc: Jason Low <jason.low2@...com>, Peter Zijlstra <peterz@...radead.org>,
	Ingo Molnar <mingo@...nel.org>, Davidlohr Bueso <dbueso@...e.de>,
	linux-kernel@...r.kernel.org,
	Aswin Chandramouleeswaran <aswin@...com>,
	Chegu Vinod <chegu_vinod@...com>
Subject: Re: [PATCH v2] locking/rwsem: Avoid double checking before try acquiring write lock

On Tue, 2014-09-16 at 16:08 -0400, Peter Hurley wrote:
> Hi Jason,
>
> On 09/16/2014 03:01 PM, Jason Low wrote:
> > Commit 9b0fc9c09f1b checks whether there are known active lockers in
> > order to avoid write trylocking using an expensive cmpxchg() when it
> > likely wouldn't get the lock.
> >
> > However, a subsequent patch was added such that we directly check for
> > sem->count == RWSEM_WAITING_BIAS right before trying that cmpxchg().
> > Thus, commit 9b0fc9c09f1b now just adds extra overhead. This patch
> > deletes it.
>
> It would be better to just not reload sem->count, and check the parameter
> count == RWSEM_WAITING_BIAS instead. The count parameter is a very recent
> load of sem->count (one of which is the latest exclusive read from an
> atomic operation), so it is likely to be just as accurate as a reload of
> sem->count without causing more cache line contention.
>

Agree with Peter. I think the extra check in the original code was there
to avoid reloading sem->count. So checking the parameter directly here
(count == RWSEM_WAITING_BIAS) will accomplish that end.

You'll need to modify your comment slightly to say:

	Try acquiring the write lock. Check count first ...

Thanks.

Tim

> Regards,
> Peter Hurley
>
> > Also, add a comment on why we do an "extra check" of sem->count before
> > the cmpxchg().
> >
> > Signed-off-by: Jason Low <jason.low2@...com>
> > ---
> >  kernel/locking/rwsem-xadd.c |   24 +++++++++++++-----------
> >  1 files changed, 13 insertions(+), 11 deletions(-)
> >
> > diff --git a/kernel/locking/rwsem-xadd.c b/kernel/locking/rwsem-xadd.c
> > index d6203fa..63d3ef2 100644
> > --- a/kernel/locking/rwsem-xadd.c
> > +++ b/kernel/locking/rwsem-xadd.c
> > @@ -247,18 +247,20 @@ struct rw_semaphore __sched *rwsem_down_read_failed(struct rw_semaphore *sem)
> >  	return sem;
> >  }
> >
> > -static inline bool rwsem_try_write_lock(long count, struct rw_semaphore *sem)
> > +static inline bool rwsem_try_write_lock(struct rw_semaphore *sem)
> >  {
> > -	if (!(count & RWSEM_ACTIVE_MASK)) {
> > -		/* try acquiring the write lock */
> > -		if (sem->count == RWSEM_WAITING_BIAS &&
> > -		    cmpxchg(&sem->count, RWSEM_WAITING_BIAS,
> > -			    RWSEM_ACTIVE_WRITE_BIAS) == RWSEM_WAITING_BIAS) {
> > -			if (!list_is_singular(&sem->wait_list))
> > -				rwsem_atomic_update(RWSEM_WAITING_BIAS, sem);
> > -			return true;
> > -		}
> > +	/*
> > +	 * Try acquiring the write lock. Check sem->count first
> > +	 * in order to reduce unnecessary expensive cmpxchg() operations.
> > +	 */
> > +	if (sem->count == RWSEM_WAITING_BIAS &&
> > +	    cmpxchg(&sem->count, RWSEM_WAITING_BIAS,
> > +		    RWSEM_ACTIVE_WRITE_BIAS) == RWSEM_WAITING_BIAS) {
> > +		if (!list_is_singular(&sem->wait_list))
> > +			rwsem_atomic_update(RWSEM_WAITING_BIAS, sem);
> > +		return true;
> > +	}
> > +
> >  	return false;
> >  }
> >
> > @@ -446,7 +448,7 @@ struct rw_semaphore __sched *rwsem_down_write_failed(struct rw_semaphore *sem)
> >  	/* wait until we successfully acquire the lock */
> >  	set_current_state(TASK_UNINTERRUPTIBLE);
> >  	while (true) {
> > -		if (rwsem_try_write_lock(count, sem))
> > +		if (rwsem_try_write_lock(sem))
> >  			break;
> >  		raw_spin_unlock_irq(&sem->wait_lock);
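For reference, below is a sketch of what rwsem_try_write_lock() might look
like with Peter's suggestion folded into the v2 patch quoted above: the
function keeps its long count parameter and tests it instead of re-reading
sem->count, so the contended cache line is not touched again before the
cmpxchg(). This is an illustrative sketch, not necessarily the version that
was ultimately merged.

static inline bool rwsem_try_write_lock(long count, struct rw_semaphore *sem)
{
	/*
	 * Try acquiring the write lock. Check count first in order
	 * to reduce unnecessary expensive cmpxchg() operations.
	 * 'count' is the caller's recent load of sem->count, so it is
	 * about as fresh as a reload without extra cache line traffic.
	 */
	if (count == RWSEM_WAITING_BIAS &&
	    cmpxchg(&sem->count, RWSEM_WAITING_BIAS,
		    RWSEM_ACTIVE_WRITE_BIAS) == RWSEM_WAITING_BIAS) {
		/* More waiters remain queued; keep the waiting bias accounted for. */
		if (!list_is_singular(&sem->wait_list))
			rwsem_atomic_update(RWSEM_WAITING_BIAS, sem);
		return true;
	}

	return false;
}

The call site in rwsem_down_write_failed() would then stay as
if (rwsem_try_write_lock(count, sem)), with count refreshed from sem->count
in the sleep loop, matching Peter's observation that the parameter is
already a very recent load.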