Message-ID: <55A7BABE.2070507@hp.com>
Date: Thu, 16 Jul 2015 10:07:58 -0400
From: Waiman Long <waiman.long@...com>
To: Peter Zijlstra <peterz@...radead.org>
CC: Ingo Molnar <mingo@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>,
"H. Peter Anvin" <hpa@...or.com>, x86@...nel.org,
linux-kernel@...r.kernel.org, Scott J Norton <scott.norton@...com>,
Douglas Hatch <doug.hatch@...com>,
Davidlohr Bueso <dave@...olabs.net>, will.deacon@....com
Subject: Re: [PATCH v2 1/6] locking/pvqspinlock: Unconditional PV kick with
_Q_SLOW_VAL
On 07/16/2015 01:42 AM, Peter Zijlstra wrote:
> On Wed, Jul 15, 2015 at 08:18:23PM -0400, Waiman Long wrote:
>> On 07/15/2015 05:10 AM, Peter Zijlstra wrote:
>>> /*
>>> + * A failed cmpxchg doesn't provide any memory-ordering guarantees,
>>> + * so we need a barrier to order the read of the node data in
>>> + * pv_unhash *after* we've read the lock being _Q_SLOW_VAL.
>>> + *
>>> + * Matches the cmpxchg() in pv_wait_head() setting _Q_SLOW_VAL.
>>> + */
>>> + smp_rmb();
>> According to memory-barriers.txt, cmpxchg() is a full memory barrier. It
>> doesn't say that a failed cmpxchg loses its memory-ordering guarantee. So
>> is the documentation right?
> The documentation is not entirely clear on this; but there are hints
> that this is so.
>
>> Or is that true for some architectures? I think it is
>> not true for x86.
> On x86 LOCK CMPXCHG is always a sync point, but yes there are archs for
> which a failed cmpxchg does _NOT_ provide any barrier semantics.
>
> The reason I started looking was that Will made Argh64 one of those.
That is what I suspected. In that case, I am fine with the patch, as
smp_rmb() is a no-op on x86 anyway.
Acked-by: Waiman Long <Waiman.Long@...com>
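For reference, here is a minimal sketch of the unlock-side pattern the
patch comment describes (not a verbatim copy of the kernel code; the field
and helper names are approximations of the pv-qspinlock code under
discussion):

	void pv_unlock_slowpath_sketch(struct qspinlock *lock)
	{
		struct __qspinlock *l = (void *)lock;
		struct pv_node *node;

		/*
		 * Fast path: nobody set _Q_SLOW_VAL, so the lock byte is
		 * still _Q_LOCKED_VAL and the cmpxchg() releases the lock.
		 */
		if (likely(cmpxchg(&l->locked, _Q_LOCKED_VAL, 0) == _Q_LOCKED_VAL))
			return;

		/*
		 * The cmpxchg() above failed because the lock byte is
		 * _Q_SLOW_VAL.  A failed cmpxchg() may provide no ordering,
		 * so the reads done by pv_unhash() below could otherwise be
		 * satisfied before the read of the lock byte.  Matches the
		 * cmpxchg() in pv_wait_head() setting _Q_SLOW_VAL.
		 */
		smp_rmb();

		node = pv_unhash(lock);

		/* Release the lock, then kick the waiting vCPU. */
		smp_store_release(&l->locked, 0);
		pv_kick(node->cpu);
	}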
BTW, I think we also need to update the documentation to make it clear
that a failed cmpxchg() or atomic_cmpxchg() may not act as a full memory
barrier, as most people may not be aware of that.
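A hypothetical snippet for the documentation could make the point explicit
(the variables below are invented purely for illustration):

	int data;
	atomic_t flag = ATOMIC_INIT(0);

	/* CPU 0 */
	WRITE_ONCE(data, 1);
	smp_wmb();
	atomic_set(&flag, 1);

	/* CPU 1 */
	old = atomic_cmpxchg(&flag, 0, 2);
	if (old == 1) {			/* cmpxchg failed, no store was done */
		smp_rmb();		/*
					 * Without this, the read below may be
					 * reordered before the failed cmpxchg()
					 * on architectures such as arm64.
					 */
		r = READ_ONCE(data);	/* guaranteed to observe 1 */
	}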
Cheers,
Longman