Message-ID: <20150907000546.GA27993@linux-q0g1.site>
Date: Sun, 6 Sep 2015 17:05:46 -0700
From: Davidlohr Bueso <dave@...olabs.net>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>,
Dave Chinner <david@...morbit.com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Waiman Long <Waiman.Long@...com>,
Ingo Molnar <mingo@...nel.org>
Subject: Re: [4.2, Regression] Queued spinlocks cause major XFS performance
regression
On Fri, 04 Sep 2015, Peter Zijlstra wrote:
>-static inline bool virt_queued_spin_lock(struct qspinlock *lock)
>+static inline bool virt_spin_lock(struct qspinlock *lock)
Given that we fall back to the cmpxchg loop even when PARAVIRT is not in the
picture, I believe this function is horribly misnamed.
> {
> if (!static_cpu_has(X86_FEATURE_HYPERVISOR))
> return false;
>
>- while (atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL) != 0)
>- cpu_relax();
>+ /*
>+ * On hypervisors without PARAVIRT_SPINLOCKS support we fall
>+ * back to a Test-and-Set spinlock, because fair locks have
>+ * horrible lock 'holder' preemption issues.
>+ */
>+
This comment is also misleading... but if you tuck the whole function
under some PARAVIRT option, it obviously makes sense to just leave it
as is, and let native actually _use_ qspinlocks.
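Roughly what I have in mind -- untested sketch, and I'm assuming
CONFIG_PARAVIRT is the right option to gate on, with the generic header
keeping its "#ifndef virt_spin_lock" stub:

#ifdef CONFIG_PARAVIRT
#define virt_spin_lock virt_spin_lock
static inline bool virt_spin_lock(struct qspinlock *lock)
{
	if (!static_cpu_has(X86_FEATURE_HYPERVISOR))
		return false;

	/*
	 * On hypervisors without PARAVIRT_SPINLOCKS support we fall
	 * back to a Test-and-Set spinlock, because fair locks have
	 * horrible lock 'holder' preemption issues.
	 */
	do {
		while (atomic_read(&lock->val) != 0)
			cpu_relax();
	} while (atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL) != 0);

	return true;
}
#endif /* CONFIG_PARAVIRT */

That way a !PARAVIRT native build never even compiles the TAS fallback
and always goes through the real qspinlock slowpath, even when it
happens to run on a hypervisor.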
>+ do {
>+ while (atomic_read(&lock->val) != 0)
>+ cpu_relax();
>+ } while (atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL) != 0);
CCAS to the rescue again -- spinning on the plain read keeps the
cacheline shared among the waiters, instead of having every failed
cmpxchg grab it exclusively while the lock is still held.
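For reference, the pattern in isolation (purely illustrative: C11
atomics rather than the kernel's, and the demo_* names are made up):

#include <stdatomic.h>

static atomic_int demo_lock;	/* 0 = unlocked, 1 = locked */

static void demo_lock_acquire(void)
{
	int expected;

	do {
		/*
		 * Spin with plain loads while the lock is held; the
		 * cacheline can stay shared across all the waiters.
		 */
		while (atomic_load_explicit(&demo_lock, memory_order_relaxed))
			/* cpu_relax() equivalent would go here */;

		expected = 0;
		/* Only attempt the RMW once the lock looks free. */
	} while (!atomic_compare_exchange_weak_explicit(&demo_lock, &expected, 1,
							memory_order_acquire,
							memory_order_relaxed));
}

static void demo_lock_release(void)
{
	atomic_store_explicit(&demo_lock, 0, memory_order_release);
}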
Thanks,
Davidlohr