Message-ID: <53504C4E.8060800@hp.com>
Date: Thu, 17 Apr 2014 17:49:02 -0400
From: Waiman Long <waiman.long@...com>
To: Peter Zijlstra <peterz@...radead.org>
CC: Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>,
"H. Peter Anvin" <hpa@...or.com>, linux-arch@...r.kernel.org,
x86@...nel.org, linux-kernel@...r.kernel.org,
virtualization@...ts.linux-foundation.org,
xen-devel@...ts.xenproject.org, kvm@...r.kernel.org,
Paolo Bonzini <paolo.bonzini@...il.com>,
Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
Rik van Riel <riel@...hat.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Raghavendra K T <raghavendra.kt@...ux.vnet.ibm.com>,
David Vrabel <david.vrabel@...rix.com>,
Oleg Nesterov <oleg@...hat.com>,
Gleb Natapov <gleb@...hat.com>,
Scott J Norton <scott.norton@...com>,
Chegu Vinod <chegu_vinod@...com>
Subject: Re: [PATCH v9 05/19] qspinlock: Optimize for smaller NR_CPUS
On 04/17/2014 11:58 AM, Peter Zijlstra wrote:
> On Thu, Apr 17, 2014 at 11:03:57AM -0400, Waiman Long wrote:
>> +static __always_inline void
>> +clear_pending_set_locked(struct qspinlock *lock, u32 val)
>> +{
>> +	struct __qspinlock *l = (void *)lock;
>> +
>> +	ACCESS_ONCE(l->locked_pending) = 1;
>> +}
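
The plain halfword store above is only safe because, with the smaller-NR_CPUS
encoding, the locked byte and the pending byte share a 16-bit field that the
tail bits never overlap. Roughly the layout below (little-endian shown, names
as used in this series; the authoritative definition is in the patch itself):

/*
 * Sketch of the lock word layout assumed above -- see the patch for
 * the real definition.
 *
 *    31           16 15            8 7             0
 *   +---------------+---------------+---------------+
 *   |     tail      |    pending    |    locked     |
 *   +---------------+---------------+---------------+
 *                    \____ locked_pending (u16) ____/
 */
struct __qspinlock {
	union {
		atomic_t val;			/* whole 32-bit lock word    */
		struct {
			u8	locked;		/* lock holder byte          */
			u8	pending;	/* pending-waiter byte       */
		};
		struct {
			u16	locked_pending;	/* locked + pending halfword */
			u16	tail;		/* MCS queue tail encoding   */
		};
	};
};

When NR_CPUS is too large for an 8-bit pending field, the tail needs those
bits and the update has to fall back to an atomic cmpxchg on the whole word,
which is why this helper only exists on the small-NR_CPUS path.
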
>> @@ -157,8 +251,13 @@ static inline int trylock_pending(struct qspinlock *lock, u32 *pval)
>> * we're pending, wait for the owner to go away.
>> *
>> * *,1,1 -> *,1,0
>> + *
>> + * this wait loop must be a load-acquire such that we match the
>> + * store-release that clears the locked bit and create lock
>> + * sequentiality; this because not all try_clear_pending_set_locked()
>> + * implementations imply full barriers.
> You renamed the function referred to in the above comment.
>
Sorry, will fix the comments.
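
The intent of that comment, roughly sketched below (illustrative only: the
helper names are made up here, and smp_load_acquire()/smp_store_release()
stand in for whatever the per-arch code actually uses):

/* Sketch only -- not the actual patch code. */

/* Pending waiter: *,1,1 -> *,1,0, spin until the owner goes away. */
static __always_inline void wait_for_locked_clear(struct qspinlock *lock)
{
	struct __qspinlock *l = (void *)lock;

	/*
	 * Load-acquire: pairs with the store-release in the unlock
	 * path below, so that once the waiter sees the locked byte
	 * clear it also sees every store the previous owner made in
	 * its critical section (lock sequentiality).  This matters
	 * because the byte-store variant of clear_pending_set_locked()
	 * does not imply a full barrier the way a cmpxchg-based
	 * variant does.
	 */
	while (smp_load_acquire(&l->locked))
		cpu_relax();
}

/* Unlock path: release the lock byte with store-release semantics. */
static __always_inline void release_locked_byte(struct qspinlock *lock)
{
	struct __qspinlock *l = (void *)lock;

	smp_store_release(&l->locked, 0);
}
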
-Longman