Message-ID: <55F71489.2010605@hpe.com>
Date: Mon, 14 Sep 2015 14:40:09 -0400
From: Waiman Long <waiman.long@....com>
To: Peter Zijlstra <peterz@...radead.org>
CC: Davidlohr Bueso <dave@...olabs.net>,
Ingo Molnar <mingo@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>,
"H. Peter Anvin" <hpa@...or.com>, x86@...nel.org,
linux-kernel@...r.kernel.org, Scott J Norton <scott.norton@...com>,
Douglas Hatch <doug.hatch@...com>
Subject: Re: [PATCH v6 1/6] locking/qspinlock: relaxes cmpxchg & xchg ops
in native code
On 09/14/2015 08:06 AM, Peter Zijlstra wrote:
> On Fri, Sep 11, 2015 at 03:27:44PM -0700, Davidlohr Bueso wrote:
>> On Fri, 11 Sep 2015, Waiman Long wrote:
>>
>>> @@ -46,7 +46,7 @@ static inline bool virt_queued_spin_lock(struct qspinlock *lock)
>>> if (!static_cpu_has(X86_FEATURE_HYPERVISOR))
>>> return false;
>>>
>>> - while (atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL) != 0)
>>> + while (atomic_cmpxchg_acquire(&lock->val, 0, _Q_LOCKED_VAL) != 0)
>>> cpu_relax();
>> This code has changed with Peter's recent ccas fix, and the whole virt_queued_spin_lock()
>> thing will now be under pv configs. So this no longer applies to native code; it looks
>> like it should be dropped altogether.
> Yeah, it also doesn't make sense; this is x86 arch code, and x86 cannot do
> a weaker cmpxchg_acquire. Then again, I suppose we could argue it's of
> documentation value.
Yes, it is there to be consistent with the change in asm-generic's qspinlock.h.
We can certainly skip that.
Cheers,
Longman