Message-ID: <tip-4009f4b3a9d8b74547269f293e6a920adf278996@git.kernel.org>
Date: Mon, 30 Jan 2017 03:53:18 -0800
From: "tip-bot for Steven Rostedt (VMware)" <tipbot@...or.com>
To: linux-tip-commits@...r.kernel.org
Cc: akpm@...ux-foundation.org, hpa@...or.com,
linux-kernel@...r.kernel.org, torvalds@...ux-foundation.org,
mingo@...nel.org, tglx@...utronix.de, rostedt@...dmis.org,
peterz@...radead.org
Subject: [tip:locking/core] locking/rtmutex: Flip unlikely() branch to
 likely() in __rt_mutex_slowlock()

Commit-ID:  4009f4b3a9d8b74547269f293e6a920adf278996
Gitweb:     http://git.kernel.org/tip/4009f4b3a9d8b74547269f293e6a920adf278996
Author:     Steven Rostedt (VMware) <rostedt@...dmis.org>
AuthorDate: Thu, 19 Jan 2017 11:32:34 -0500
Committer:  Ingo Molnar <mingo@...nel.org>
CommitDate: Mon, 30 Jan 2017 11:42:59 +0100

locking/rtmutex: Flip unlikely() branch to likely() in __rt_mutex_slowlock()

Running my likely/unlikely profiler for 3 weeks on two production
machines, I discovered that the unlikely() test in
__rt_mutex_slowlock() checking if state is TASK_INTERRUPTIBLE is hit
100% of the time, making it a very likely case.

The reason is that, on a vanilla kernel, the majority of calls into
rt_mutex() come from the futex code, which always calls it with
TASK_INTERRUPTIBLE. In the -rt patch, where PREEMPT_RT is enabled, this
code is commonly called with TASK_UNINTERRUPTIBLE, but that is not the
likely scenario.

The rt_mutex() code should be optimized for the common vanilla case,
and that is the futex path, with TASK_INTERRUPTIBLE as the state.
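
For context, likely() and unlikely() are thin wrappers around GCC's
__builtin_expect(). A rough sketch of their usual definitions, assuming a
build without the branch-profiling instrumentation mentioned above
(paraphrased, not copied verbatim from include/linux/compiler.h):

	/* Hint the compiler which way the test is expected to go. */
	#define likely(x)	__builtin_expect(!!(x), 1)	/* expect x to be true  */
	#define unlikely(x)	__builtin_expect(!!(x), 0)	/* expect x to be false */

The hint only influences how the compiler arranges the generated code; it
does not change behaviour, which is why flipping it is a one-line,
zero-risk change.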
Signed-off-by: Steven Rostedt (VMware) <rostedt@...dmis.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Thomas Gleixner <tglx@...utronix.de>
Link: http://lkml.kernel.org/r/20170119113234.1efeedd1@gandalf.local.home
Signed-off-by: Ingo Molnar <mingo@...nel.org>
---
 kernel/locking/rtmutex.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
index 2f443ed..d340be3 100644
--- a/kernel/locking/rtmutex.c
+++ b/kernel/locking/rtmutex.c
@@ -1179,7 +1179,7 @@ __rt_mutex_slowlock(struct rt_mutex *lock, int state,
 		 * TASK_INTERRUPTIBLE checks for signals and
 		 * timeout. Ignored otherwise.
 		 */
-		if (unlikely(state == TASK_INTERRUPTIBLE)) {
+		if (likely(state == TASK_INTERRUPTIBLE)) {
 			/* Signal pending? */
 			if (signal_pending(current))
 				ret = -EINTR;
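
The effect is easy to reproduce outside the kernel. Below is a minimal,
self-contained userspace sketch that mirrors the patched test; the function
name slowlock_check(), the TASK_INTERRUPTIBLE value and the signal_pending
parameter are invented for illustration and are not the kernel's
definitions:

	/*
	 * Standalone userspace illustration (not kernel code): with
	 * __builtin_expect() the compiler is encouraged to lay out the
	 * expected arm as the straight-line fall-through path.
	 */
	#include <errno.h>

	#define likely(x)	__builtin_expect(!!(x), 1)
	#define unlikely(x)	__builtin_expect(!!(x), 0)

	#define TASK_INTERRUPTIBLE	0x0001	/* illustrative value only */

	int slowlock_check(int state, int signal_pending)
	{
		int ret = 0;

		/* Mirrors the patched test: the interruptible case is the hot path. */
		if (likely(state == TASK_INTERRUPTIBLE)) {
			if (signal_pending)
				ret = -EINTR;
		}
		return ret;
	}

Compiling this with "gcc -O2 -S" and diffing the assembly against the
unlikely() variant typically shows the expected arm moved onto the
fall-through path.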