Message-ID: <AANLkTi=ZdDCYiUoZJHkxRsKyBjtTUmtWqzJOusZGwafO@mail.gmail.com>
Date: Tue, 25 Jan 2011 12:13:52 +1100
From: Nick Piggin <npiggin@...il.com>
To: Jeremy Fitzhardinge <jeremy@...p.org>
Cc: Peter Zijlstra <peterz@...radead.org>,
"H. Peter Anvin" <hpa@...or.com>, Ingo Molnar <mingo@...e.hu>,
"the arch/x86 maintainers" <x86@...nel.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Nick Piggin <npiggin@...nel.dk>,
Jeremy Fitzhardinge <jeremy.fitzhardinge@...rix.com>
Subject: Re: [PATCH 3/6] x86/ticketlock: Use C for __ticket_spin_unlock
On Tue, Jan 25, 2011 at 10:41 AM, Jeremy Fitzhardinge <jeremy@...p.org> wrote:
> From: Jeremy Fitzhardinge <jeremy.fitzhardinge@...rix.com>
>
> If we don't need to use a locked inc for unlock, then implement it in C.
>
> Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@...rix.com>
> ---
> arch/x86/include/asm/spinlock.h | 32 +++++++++++++++++---------------
> 1 files changed, 17 insertions(+), 15 deletions(-)
>
> diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
> index f48a6e3..0170ba9 100644
> --- a/arch/x86/include/asm/spinlock.h
> +++ b/arch/x86/include/asm/spinlock.h
> @@ -33,9 +33,21 @@
> * On PPro SMP or if we are using OOSTORE, we use a locked operation to unlock
> * (PPro errata 66, 92)
> */
> -# define UNLOCK_LOCK_PREFIX LOCK_PREFIX
> +static __always_inline void __ticket_unlock_release(struct arch_spinlock *lock)
> +{
> + if (sizeof(lock->tickets.head) == sizeof(u8))
> + asm (LOCK_PREFIX "incb %0"
> + : "+m" (lock->tickets.head) : : "memory");
> + else
> + asm (LOCK_PREFIX "incw %0"
> + : "+m" (lock->tickets.head) : : "memory");
> +
> +}
> #else
> -# define UNLOCK_LOCK_PREFIX
> +static __always_inline void __ticket_unlock_release(struct arch_spinlock *lock)
> +{
> + lock->tickets.head++;
> +}
> #endif
>
> /*
> @@ -93,14 +105,6 @@ static __always_inline int __ticket_spin_trylock(arch_spinlock_t *lock)
>
> return tmp;
> }
> -
> -static __always_inline void __ticket_spin_unlock(arch_spinlock_t *lock)
> -{
> - asm volatile(UNLOCK_LOCK_PREFIX "incb %0"
> - : "+m" (lock->slock)
> - :
> - : "memory", "cc");
> -}
> #else
> static __always_inline void __ticket_spin_lock(arch_spinlock_t *lock)
> {
> @@ -144,15 +148,13 @@ static __always_inline int __ticket_spin_trylock(arch_spinlock_t *lock)
>
> return tmp;
> }
> +#endif
>
> static __always_inline void __ticket_spin_unlock(arch_spinlock_t *lock)
> {
> - asm volatile(UNLOCK_LOCK_PREFIX "incw %0"
> - : "+m" (lock->slock)
> - :
> - : "memory", "cc");
> + __ticket_unlock_release(lock);
> + barrier(); /* prevent reordering into locked region */
> }
> -#endif
The barrier is wrong: it is placed after the store that releases the
lock, where it does nothing to stop the compiler from moving accesses
out of the critical section past the unlock. The compiler barrier needs
to go before the release store.
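
Something like this is what I'd expect instead (a sketch only, keeping
your __ticket_unlock_release() as-is):

static __always_inline void __ticket_spin_unlock(arch_spinlock_t *lock)
{
	barrier();	/* keep critical-section accesses before the release */
	__ticket_unlock_release(lock);
}

On the PPro path the "memory" clobber in the asm already acts as a
compiler barrier anyway, so this only changes anything for the plain C
increment.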
What makes me a tiny bit uneasy is that gcc is allowed to implement
this any way it wishes (for example, nothing in pre-C11 C forbids it
from doing the byte increment as a wider read-modify-write of the
whole word, racing with a concurrent ticket take). OK, the
intersection of code gcc may validly generate and code that would be a
buggy unlock may well be empty... but relying on gcc to implement lock
primitives is scary. Does this really help in a way that can't be done
with the assembly versions?
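
For comparison, keeping the non-PPro unlock in asm as well would pin
the exact instruction emitted. A sketch, just adapting the constraints
from the code this patch removes to the new tickets.head field:

static __always_inline void __ticket_unlock_release(struct arch_spinlock *lock)
{
	if (sizeof(lock->tickets.head) == sizeof(u8))
		asm volatile("incb %0"	/* no lock prefix needed here */
			     : "+m" (lock->tickets.head) : : "memory", "cc");
	else
		asm volatile("incw %0"
			     : "+m" (lock->tickets.head) : : "memory", "cc");
}

The "memory" clobber then doubles as the compiler barrier, so the
separate barrier() and the question of which side of the store it
belongs on both go away.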