Message-ID: <tip-78bff1c8684fb94f1ae7283688f90188b53fc433@git.kernel.org>
Date: Tue, 9 Dec 2014 02:17:36 -0800
From: tip-bot for Oleg Nesterov <tipbot@...or.com>
To: linux-tip-commits@...r.kernel.org
Cc: paulmck@...ux.vnet.ibm.com, oleg@...hat.com, tglx@...utronix.de,
Waiman.Long@...com, torvalds@...ux-foundation.org,
peterz@...radead.org, hpa@...or.com, linux-kernel@...r.kernel.org,
mingo@...nel.org, jeremy@...p.org
Subject: [tip:core/locking] x86/ticketlock: Fix spin_unlock_wait()
livelock
Commit-ID: 78bff1c8684fb94f1ae7283688f90188b53fc433
Gitweb: http://git.kernel.org/tip/78bff1c8684fb94f1ae7283688f90188b53fc433
Author: Oleg Nesterov <oleg@...hat.com>
AuthorDate: Mon, 1 Dec 2014 22:34:17 +0100
Committer: Ingo Molnar <mingo@...nel.org>
CommitDate: Mon, 8 Dec 2014 11:36:44 +0100
x86/ticketlock: Fix spin_unlock_wait() livelock
arch_spin_unlock_wait() looks very suboptimal, to the point where I
think it is just wrong and can lead to livelock: if the lock is
heavily contended, we may never observe head == tail.

But we do not need to wait for arch_spin_is_locked() == false. If the
lock is held, we only need to wait until the current owner drops it.
So we could simply spin until old_head != lock->tickets.head in this
case, but .head can overflow, and thus we cannot check "unlocked"
only once before the main loop.

Also, the "unlocked" check can ignore the TICKET_SLOWPATH_FLAG bit.
Signed-off-by: Oleg Nesterov <oleg@...hat.com>
Acked-by: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Jeremy Fitzhardinge <jeremy@...p.org>
Cc: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Waiman Long <Waiman.Long@...com>
Link: http://lkml.kernel.org/r/20141201213417.GA5842@redhat.com
Signed-off-by: Ingo Molnar <mingo@...nel.org>
---
arch/x86/include/asm/spinlock.h | 14 +++++++++++++-
1 file changed, 13 insertions(+), 1 deletion(-)
diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index bf156de..abc34e9 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -184,8 +184,20 @@ static __always_inline void arch_spin_lock_flags(arch_spinlock_t *lock,
 
 static inline void arch_spin_unlock_wait(arch_spinlock_t *lock)
 {
-	while (arch_spin_is_locked(lock))
+	__ticket_t head = ACCESS_ONCE(lock->tickets.head);
+
+	for (;;) {
+		struct __raw_tickets tmp = ACCESS_ONCE(lock->tickets);
+		/*
+		 * We need to check "unlocked" in a loop, tmp.head == head
+		 * can be false positive because of overflow.
+		 */
+		if (tmp.head == (tmp.tail & ~TICKET_SLOWPATH_FLAG) ||
+		    tmp.head != head)
+			break;
+
 		cpu_relax();
+	}
 }
 
 /*
--