Message-ID: <561D6C85.9030508@hpe.com>
Date: Tue, 13 Oct 2015 16:41:41 -0400
From: Waiman Long <waiman.long@....com>
To: Peter Zijlstra <peterz@...radead.org>
CC: Ingo Molnar <mingo@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>,
"H. Peter Anvin" <hpa@...or.com>, x86@...nel.org,
linux-kernel@...r.kernel.org,
Scott J Norton <scott.norton@....com>,
Douglas Hatch <doug.hatch@....com>,
Davidlohr Bueso <dave@...olabs.net>
Subject: Re: [PATCH v7 4/5] locking/pvqspinlock: Allow 1 lock stealing attempt
On 10/13/2015 02:23 PM, Peter Zijlstra wrote:
> On Tue, Sep 22, 2015 at 04:50:43PM -0400, Waiman Long wrote:
>>  	for (;; waitcnt++) {
>> +		loop = SPIN_THRESHOLD;
>> +		while (loop) {
>> +			/*
>> +			 * Spin until the lock is free
>> +			 */
>> +			for (; loop && READ_ONCE(l->locked); loop--)
>> +				cpu_relax();
>> +			/*
>> +			 * Seeing the lock is free, this queue head vCPU is
>> +			 * the rightful next owner of the lock. However, the
>> +			 * lock may have just been stolen by another task which
>> +			 * has entered the slowpath. So we need to use atomic
>> +			 * operation to make sure that we really get the lock.
>> +			 * Otherwise, we have to wait again.
>> +			 */
>> +			if (cmpxchg(&l->locked, 0, _Q_LOCKED_VAL) == 0)
>> +				goto gotlock;
>>  		}
> 	for (loop = SPIN_THRESHOLD; loop; --loop) {
> 		if (!READ_ONCE(l->locked) &&
> 		    cmpxchg(&l->locked, 0, _Q_LOCKED_VAL) == 0)
> 			goto gotlock;
>
> 		cpu_relax();
> 	}
>
This was the code that I used in my original patch, but it seemed to
give the impression that too many lock-stealing attempts were being
allowed. So I separated the spinning and the atomic attempt to make my
intention more explicit. I will change it back to the old form.
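
For anyone following along, here is a minimal stand-alone sketch (not
the pvqspinlock code) of the two loop shapes being discussed. It uses
C11 atomics and sched_yield() as stand-ins for the kernel's
READ_ONCE(), cmpxchg() and cpu_relax(); the names, threshold and lock
value below are illustrative only:

/*
 * Illustrative user-space model only -- not the kernel code.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>
#include <sched.h>

#define SPIN_THRESHOLD	(1 << 15)
#define LOCKED_VAL	1		/* stands in for _Q_LOCKED_VAL */

static _Atomic unsigned char locked;	/* 0 = free, 1 = held */

/*
 * Split shape used in the patch: spin read-only until the lock looks
 * free, then make one atomic attempt per observation of a free lock.
 */
static bool wait_split(void)
{
	int loop = SPIN_THRESHOLD;

	while (loop) {
		/* Spin until the lock is (apparently) free */
		for (; loop && atomic_load_explicit(&locked,
				memory_order_relaxed); loop--)
			sched_yield();
		/* The lock may just have been stolen; confirm atomically */
		unsigned char zero = 0;
		if (atomic_compare_exchange_strong(&locked, &zero, LOCKED_VAL))
			return true;
	}
	return false;	/* threshold exhausted; caller would block/halt */
}

/*
 * Merged shape suggested above: one loop, at most one atomic attempt
 * per iteration; the decrement also counts iterations whose attempt
 * failed.
 */
static bool wait_merged(void)
{
	for (int loop = SPIN_THRESHOLD; loop; --loop) {
		unsigned char zero = 0;

		if (!atomic_load_explicit(&locked, memory_order_relaxed) &&
		    atomic_compare_exchange_strong(&locked, &zero, LOCKED_VAL))
			return true;

		sched_yield();
	}
	return false;
}

int main(void)
{
	if (wait_merged()) {		/* lock starts free, so this succeeds */
		printf("acquired (merged)\n");
		atomic_store(&locked, 0);
	}
	if (wait_split()) {
		printf("acquired (split)\n");
		atomic_store(&locked, 0);
	}
	return 0;
}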
Cheers,
Longman