Message-ID: <tip-75f93fed50c2abadbab6ef546b265f51ca975b27@git.kernel.org>
Date: Sat, 28 Sep 2013 01:28:30 -0700
From: tip-bot for Peter Zijlstra <tipbot@...or.com>
To: linux-tip-commits@...r.kernel.org
Cc: linux-kernel@...r.kernel.org, hpa@...or.com, mingo@...nel.org,
torvalds@...ux-foundation.org, peterz@...radead.org,
ying.huang@...el.com, yuanhan.liu@...ux.intel.com,
tglx@...utronix.de, fengguang.wu@...el.com
Subject: [tip:sched/core] sched: Revert need_resched() to look at TIF_NEED_RESCHED
Commit-ID: 75f93fed50c2abadbab6ef546b265f51ca975b27
Gitweb: http://git.kernel.org/tip/75f93fed50c2abadbab6ef546b265f51ca975b27
Author: Peter Zijlstra <peterz@...radead.org>
AuthorDate: Fri, 27 Sep 2013 17:30:03 +0200
Committer: Ingo Molnar <mingo@...nel.org>
CommitDate: Sat, 28 Sep 2013 10:04:47 +0200
sched: Revert need_resched() to look at TIF_NEED_RESCHED
Yuanhan reported a serious throughput regression in his pigz
benchmark. Using the ftrace patch I found that several idle
paths need more TLC before we can switch the generic
need_resched() over to preempt_need_resched.
The preemption paths benefit most from preempt_need_resched and
do indeed use it; all other need_resched() users don't really
care that much so reverting need_resched() back to
tif_need_resched() is the simple and safe solution.
Reported-by: Yuanhan Liu <yuanhan.liu@...ux.intel.com>
Signed-off-by: Peter Zijlstra <peterz@...radead.org>
Cc: Fengguang Wu <fengguang.wu@...el.com>
Cc: Huang Ying <ying.huang@...el.com>
Cc: lkp@...ux.intel.com
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
Link: http://lkml.kernel.org/r/20130927153003.GF15690@laptop.programming.kicks-ass.net
Signed-off-by: Ingo Molnar <mingo@...nel.org>
---
 arch/x86/include/asm/preempt.h | 8 --------
 include/asm-generic/preempt.h  | 8 --------
 include/linux/sched.h          | 5 +++++
 3 files changed, 5 insertions(+), 16 deletions(-)
diff --git a/arch/x86/include/asm/preempt.h b/arch/x86/include/asm/preempt.h
index 1de41690..8729723 100644
--- a/arch/x86/include/asm/preempt.h
+++ b/arch/x86/include/asm/preempt.h
@@ -80,14 +80,6 @@ static __always_inline bool __preempt_count_dec_and_test(void)
 }
 
 /*
- * Returns true when we need to resched -- even if we can not.
- */
-static __always_inline bool need_resched(void)
-{
-	return unlikely(test_preempt_need_resched());
-}
-
-/*
  * Returns true when we need to resched and can (barring IRQ state).
  */
 static __always_inline bool should_resched(void)
diff --git a/include/asm-generic/preempt.h b/include/asm-generic/preempt.h
index 5dc14ed..ddf2b42 100644
--- a/include/asm-generic/preempt.h
+++ b/include/asm-generic/preempt.h
@@ -85,14 +85,6 @@ static __always_inline bool __preempt_count_dec_and_test(void)
 }
 
 /*
- * Returns true when we need to resched -- even if we can not.
- */
-static __always_inline bool need_resched(void)
-{
-	return unlikely(test_preempt_need_resched());
-}
-
-/*
  * Returns true when we need to resched and can (barring IRQ state).
  */
 static __always_inline bool should_resched(void)
diff --git a/include/linux/sched.h b/include/linux/sched.h
index b09798b..2ac5285 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -2577,6 +2577,11 @@ static inline bool __must_check current_clr_polling_and_test(void)
 }
 #endif
 
+static __always_inline bool need_resched(void)
+{
+	return unlikely(tif_need_resched());
+}
+
 /*
  * Thread group CPU time accounting.
  */
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/