Message-ID: <539F683A.2070103@hp.com>
Date: Mon, 16 Jun 2014 17:57:14 -0400
From: Waiman Long <waiman.long@...com>
To: Peter Zijlstra <a.p.zijlstra@...llo.nl>
CC: tglx@...utronix.de, mingo@...nel.org, linux-arch@...r.kernel.org,
linux-kernel@...r.kernel.org,
virtualization@...ts.linux-foundation.org,
xen-devel@...ts.xenproject.org, kvm@...r.kernel.org,
paolo.bonzini@...il.com, konrad.wilk@...cle.com,
boris.ostrovsky@...cle.com, paulmck@...ux.vnet.ibm.com,
riel@...hat.com, torvalds@...ux-foundation.org,
raghavendra.kt@...ux.vnet.ibm.com, david.vrabel@...rix.com,
oleg@...hat.com, gleb@...hat.com, scott.norton@...com,
chegu_vinod@...com, Peter Zijlstra <peterz@...radead.org>
Subject: Re: [PATCH 08/11] qspinlock: Revert to test-and-set on hypervisors
On 06/15/2014 08:47 AM, Peter Zijlstra wrote:
> When we detect a hypervisor (!paravirt, see later patches), revert to
> a simple test-and-set lock to avoid the horrors of queue preemption.
>
> Signed-off-by: Peter Zijlstra <peterz@...radead.org>
> ---
> arch/x86/include/asm/qspinlock.h | 14 ++++++++++++++
> include/asm-generic/qspinlock.h | 7 +++++++
> kernel/locking/qspinlock.c | 3 +++
> 3 files changed, 24 insertions(+)
>
> --- a/arch/x86/include/asm/qspinlock.h
> +++ b/arch/x86/include/asm/qspinlock.h
> @@ -1,6 +1,7 @@
> #ifndef _ASM_X86_QSPINLOCK_H
> #define _ASM_X86_QSPINLOCK_H
>
> +#include <asm/cpufeature.h>
> #include <asm-generic/qspinlock_types.h>
>
> #if !defined(CONFIG_X86_OOSTORE) && !defined(CONFIG_X86_PPRO_FENCE)
> @@ -20,6 +21,19 @@ static inline void queue_spin_unlock(str
>
> #endif /* !CONFIG_X86_OOSTORE && !CONFIG_X86_PPRO_FENCE */
>
> +#define virt_queue_spin_lock virt_queue_spin_lock
> +
> +static inline bool virt_queue_spin_lock(struct qspinlock *lock)
> +{
> +	if (!static_cpu_has(X86_FEATURE_HYPERVISOR))
> +		return false;
> +
> +	while (atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL) != 0)
> +		cpu_relax();
> +
> +	return true;
> +}
> +
> #include <asm-generic/qspinlock.h>
>
> #endif /* _ASM_X86_QSPINLOCK_H */
> --- a/include/asm-generic/qspinlock.h
> +++ b/include/asm-generic/qspinlock.h
> @@ -98,6 +98,13 @@ static __always_inline void queue_spin_u
> }
> #endif
>
> +#ifndef virt_queue_spin_lock
> +static __always_inline bool virt_queue_spin_lock(struct qspinlock *lock)
> +{
> +	return false;
> +}
> +#endif
> +
> /*
> * Initializier
> */
> --- a/kernel/locking/qspinlock.c
> +++ b/kernel/locking/qspinlock.c
> @@ -247,6 +247,9 @@ void queue_spin_lock_slowpath(struct qsp
>
> 	BUILD_BUG_ON(CONFIG_NR_CPUS >= (1U << _Q_TAIL_CPU_BITS));
>
> +	if (virt_queue_spin_lock(lock))
> +		return;
> +
> /*
> * wait for in-progress pending->locked hand-overs
> *
I just wonder if it would be better to let the kernel distributors decide
whether the unfair lock should be the default for virtual guests. Anyway,
I have no objection to it myself.
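
Just to illustrate the idea (purely hypothetical, not part of this patch):
a Kconfig knob, say CONFIG_VIRT_UNFAIR_LOCKS, could gate the test-and-set
fallback so distributions pick the default themselves, e.g.:

	/*
	 * Illustrative sketch only -- CONFIG_VIRT_UNFAIR_LOCKS is a made-up
	 * Kconfig symbol; the rest mirrors the fallback added by this patch.
	 */
	static inline bool virt_queue_spin_lock(struct qspinlock *lock)
	{
		if (!IS_ENABLED(CONFIG_VIRT_UNFAIR_LOCKS) ||
		    !static_cpu_has(X86_FEATURE_HYPERVISOR))
			return false;

		while (atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL) != 0)
			cpu_relax();

		return true;
	}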
-Longman