Message-Id: <39551c1e0ae3e4963ad9ac119edf78dbd25a32d9.1314207974.git.jeremy.fitzhardinge@citrix.com>
Date: Wed, 24 Aug 2011 10:52:57 -0700
From: Jeremy Fitzhardinge <jeremy@...p.org>
To: "H. Peter Anvin" <hpa@...or.com>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>,
Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...e.hu>,
the arch/x86 maintainers <x86@...nel.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Nick Piggin <npiggin@...nel.dk>,
Jeremy Fitzhardinge <jeremy.fitzhardinge@...rix.com>
Subject: [PATCH 03/18] x86/ticketlock: Use C for __ticket_spin_unlock

From: Jeremy Fitzhardinge <jeremy.fitzhardinge@...rix.com>

If we don't need to use a locked inc for unlock, then implement it in C.
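
For illustration, the shape of the result is roughly the following
standalone sketch (the toy_* names and the TOY_UNLOCK_NEEDS_LOCKED_INC
switch are invented for this example; it approximates the idea rather
than reproducing the kernel's arch_spinlock_t or config tests):

#include <stdint.h>

typedef union {
	uint16_t slock;			/* both halves of the ticket pair */
	struct {
		uint8_t head;		/* ticket currently being served */
		uint8_t tail;		/* next ticket to hand out */
	} tickets;
} toy_spinlock;

/* compiler-only barrier, same effect as the kernel's barrier() */
#define toy_barrier()	asm volatile("" ::: "memory")

#ifdef TOY_UNLOCK_NEEDS_LOCKED_INC	/* stands in for the PPro/OOSTORE case */
static inline void toy_unlock_release(toy_spinlock *lock)
{
	/* locked increment of the head byte */
	asm volatile("lock; incb %0"
		     : "+m" (lock->tickets.head) : : "memory");
}
#else
static inline void toy_unlock_release(toy_spinlock *lock)
{
	/* non-atomic increment: only the lock holder writes head, and
	   the final store already has release semantics on x86 */
	lock->tickets.head++;
}
#endif

static inline void toy_unlock(toy_spinlock *lock)
{
	toy_barrier();		/* don't let the compiler move accesses out... */
	toy_unlock_release(lock);
	toy_barrier();		/* ...or into the critical section */
}

On x86 a plain store is already a release (earlier loads and stores are
not reordered past it), so the compiler barriers only keep the compiler
from moving critical-section accesses across the unlock.  The locked
increment is kept for the PPro errata 66/92 and OOSTORE configurations,
where a plain store is not sufficient.
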
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@...rix.com>
---
arch/x86/include/asm/spinlock.h | 33 ++++++++++++++++++---------------
1 files changed, 18 insertions(+), 15 deletions(-)
diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index f48a6e3..61feeb5 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -33,9 +33,21 @@
  * On PPro SMP or if we are using OOSTORE, we use a locked operation to unlock
  * (PPro errata 66, 92)
  */
-# define UNLOCK_LOCK_PREFIX LOCK_PREFIX
+static __always_inline void __ticket_unlock_release(struct arch_spinlock *lock)
+{
+	if (sizeof(lock->tickets.head) == sizeof(u8))
+		asm volatile (LOCK_PREFIX "incb %0"
+			      : "+m" (lock->tickets.head) : : "memory");
+	else
+		asm volatile (LOCK_PREFIX "incw %0"
+			      : "+m" (lock->tickets.head) : : "memory");
+
+}
 #else
-# define UNLOCK_LOCK_PREFIX
+static __always_inline void __ticket_unlock_release(struct arch_spinlock *lock)
+{
+	lock->tickets.head++;
+}
 #endif
 
 /*
@@ -93,14 +105,6 @@ static __always_inline int __ticket_spin_trylock(arch_spinlock_t *lock)
 
 	return tmp;
 }
-
-static __always_inline void __ticket_spin_unlock(arch_spinlock_t *lock)
-{
-	asm volatile(UNLOCK_LOCK_PREFIX "incb %0"
-		     : "+m" (lock->slock)
-		     :
-		     : "memory", "cc");
-}
 #else
 static __always_inline void __ticket_spin_lock(arch_spinlock_t *lock)
 {
@@ -144,15 +148,14 @@ static __always_inline int __ticket_spin_trylock(arch_spinlock_t *lock)
 
 	return tmp;
 }
+#endif
 
 static __always_inline void __ticket_spin_unlock(arch_spinlock_t *lock)
 {
-	asm volatile(UNLOCK_LOCK_PREFIX "incw %0"
-		     : "+m" (lock->slock)
-		     :
-		     : "memory", "cc");
+	barrier();		/* prevent reordering out of locked region */
+	__ticket_unlock_release(lock);
+	barrier();		/* prevent reordering into locked region */
 }
-#endif
 
 static inline int __ticket_spin_is_locked(arch_spinlock_t *lock)
 {
--
1.7.6