Message-ID: <tip-07d2413a61db6500f58e614e873eed79d7f2ed72@git.kernel.org>
Date: Wed, 18 Feb 2015 09:10:46 -0800
From: tip-bot for Jason Low <tipbot@...or.com>
To: linux-tip-commits@...r.kernel.org
Cc: linux-kernel@...r.kernel.org, torvalds@...ux-foundation.org,
paulmck@...ux.vnet.ibm.com, aswin@...com, jason.low2@...com,
mingo@...nel.org, peterz@...radead.org, tglx@...utronix.de,
dave@...olabs.net, tim.c.chen@...ux.intel.com, hpa@...or.com
Subject: [tip:locking/core] locking/mutex: In mutex_spin_on_owner(),
return true when owner changes
Commit-ID: 07d2413a61db6500f58e614e873eed79d7f2ed72
Gitweb: http://git.kernel.org/tip/07d2413a61db6500f58e614e873eed79d7f2ed72
Author: Jason Low <jason.low2@...com>
AuthorDate: Mon, 2 Feb 2015 13:59:26 -0800
Committer: Ingo Molnar <mingo@...nel.org>
CommitDate: Wed, 18 Feb 2015 16:57:07 +0100
locking/mutex: In mutex_spin_on_owner(), return true when owner changes
In mutex_spin_on_owner(), we currently return true only if lock->owner == NULL.
This was beneficial in situations where multiple threads were simultaneously
spinning for the mutex: if another thread got the lock while other spinner(s)
were still in mutex_spin_on_owner(), those spinners would stop spinning. This
workaround reduced the chance of many spinners spinning for the mutex at the
same time, which helps reduce contention in highly contended cases.
However, recent changes to the optimistic spinning code mean that, instead of
having all spinners simultaneously spin for the mutex, we queue the spinners
with an MCS lock so that only one thread spins for the mutex at a time.
Furthermore, the OSQ optimizations ensure that a spinner in the queue will
stop waiting if it needs to reschedule.
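The queueing referred to here is the osq_lock()/osq_unlock() pair around the
spin loop in the mutex's optimistic-spin path; heavily simplified, the
structure is roughly as follows (a sketch, not the exact kernel code of any
particular version):

	/* Only one task at a time is allowed to spin on the owner. */
	if (!osq_lock(&lock->osq))
		goto done;

	while (true) {
		struct task_struct *owner;

		/*
		 * If there is an owner, spin until it releases the lock or
		 * stops running; mutex_spin_on_owner() returning false means
		 * we should give up and block instead.
		 */
		owner = ACCESS_ONCE(lock->owner);
		if (owner && !mutex_spin_on_owner(lock, owner))
			break;

		/* ... otherwise try to take the lock, then loop again ... */
	}

	osq_unlock(&lock->osq);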
Now we don't have to worry about multiple threads spinning on the owner at
the same time, and if lock->owner is not NULL at this point, it likely means
another thread happened to obtain the lock in the fastpath. In that case, it
makes sense for the spinner to continue spinning for as long as it doesn't
need to reschedule and the mutex owner is running.
This patch therefore changes mutex_spin_on_owner() to return true when the
lock owner changes, which means a thread stops spinning only if it needs to
reschedule or if the lock owner is not running.
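The "owner is running" part of that condition was checked by a helper along
these lines (again reconstructed from the mutex code of that era, not part of
this patch):

	static inline bool owner_running(struct mutex *lock, struct task_struct *owner)
	{
		if (lock->owner != owner)
			return false;

		/*
		 * Ensure the owner->on_cpu load happens after the re-check of
		 * lock->owner above; the caller's rcu_read_lock() keeps the
		 * task_struct from being freed underneath us.
		 */
		barrier();

		return owner->on_cpu;
	}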
We saw up to a 5% performance improvement in the fserver workload with
this patch.
Signed-off-by: Jason Low <jason.low2@...com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Acked-by: Davidlohr Bueso <dave@...olabs.net>
Cc: Aswin Chandramouleeswaran <aswin@...com>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>
Cc: Tim Chen <tim.c.chen@...ux.intel.com>
Cc: chegu_vinod@...com
Cc: tglx@...utronix.de
Link: http://lkml.kernel.org/r/1422914367-5574-2-git-send-email-jason.low2@hp.com
Signed-off-by: Ingo Molnar <mingo@...nel.org>
---
kernel/locking/mutex.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index 94674e5..49cce44 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -250,11 +250,11 @@ int mutex_spin_on_owner(struct mutex *lock, struct task_struct *owner)
 	rcu_read_unlock();
 
 	/*
-	 * We break out the loop above on need_resched() and when the
-	 * owner changed, which is a sign for heavy contention. Return
-	 * success only when lock->owner is NULL.
+	 * We break out of the loop above on either need_resched(), when
+	 * the owner is not running, or when the lock owner changed.
+	 * Return success only when the lock owner changed.
 	 */
-	return lock->owner == NULL;
+	return lock->owner != owner;
 }
 
 /*