Message-ID: <tip-e26d555f695ac8a3aa38055dd04bd23c1334723b@git.kernel.org>
Date: Tue, 29 Sep 2015 03:30:58 -0700
From: tip-bot for Peter Zijlstra <tipbot@...or.com>
To: linux-tip-commits@...r.kernel.org
Cc: hpa@...or.com, torvalds@...ux-foundation.org,
linux-kernel@...r.kernel.org, peterz@...radead.org,
mingo@...nel.org, efault@....de, tglx@...utronix.de
Subject: [tip:sched/core] sched/core: Create preempt_count invariant
Commit-ID: e26d555f695ac8a3aa38055dd04bd23c1334723b
Gitweb: http://git.kernel.org/tip/e26d555f695ac8a3aa38055dd04bd23c1334723b
Author: Peter Zijlstra <peterz@...radead.org>
AuthorDate: Tue, 29 Sep 2015 11:28:27 +0200
Committer: Ingo Molnar <mingo@...nel.org>
CommitDate: Tue, 29 Sep 2015 12:27:40 +0200
sched/core: Create preempt_count invariant
Ensure that preempt_count() == 2 upon scheduling (i.e. when a task is
switched in); although currently an additional PREEMPT_ACTIVE is still possible.
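A minimal userspace sketch of the invariant, assuming the usual CONFIG_PREEMPT
accounting where schedule() disables preemption once and __schedule() then takes
rq->lock (which disables preemption again), so the task being switched in sees
preempt_count() == 2 until finish_task_switch() and the final preempt_enable()
unwind it:

/* invariant.c -- toy model of the counting, not kernel code; build: cc -o invariant invariant.c */
#include <assert.h>
#include <stdio.h>

static int count;			/* stands in for preempt_count() */

static void preempt_disable(void) { count++; }
static void preempt_enable(void)  { count--; }
static void rq_lock(void)         { count++; }	/* raw_spin_lock_irq(&rq->lock) */
static void rq_unlock(void)       { count--; }	/* rq->lock dropped in finish_task_switch() */

int main(void)
{
	preempt_disable();	/* schedule():			count == 1 */
	rq_lock();		/* __schedule():		count == 2 */
	assert(count == 2);	/* the invariant at the switch point */

	rq_unlock();		/* finish_task_switch():	count == 1 */
	preempt_enable();	/*				count == 0 */
	assert(count == 0);

	printf("preempt_count: 2 -> 1 -> 0\n");
	return 0;
}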
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Mike Galbraith <efault@....de>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Thomas Gleixner <tglx@...utronix.de>
Cc: fweisbec@...il.com
Cc: linux-kernel@...r.kernel.org
Cc: oleg@...hat.com
Cc: rostedt@...dmis.org
Cc: umgwanakikbuti@...il.com
Link: http://lkml.kernel.org/r/20150929093519.817299442@infradead.org
Signed-off-by: Ingo Molnar <mingo@...nel.org>
---
 arch/x86/include/asm/preempt.h |  3 ++-
 include/asm-generic/preempt.h  |  2 +-
 kernel/sched/core.c            | 14 ++++++++++----
 3 files changed, 13 insertions(+), 6 deletions(-)
diff --git a/arch/x86/include/asm/preempt.h b/arch/x86/include/asm/preempt.h
index b12f810..183d95c6 100644
--- a/arch/x86/include/asm/preempt.h
+++ b/arch/x86/include/asm/preempt.h
@@ -31,7 +31,8 @@ static __always_inline void preempt_count_set(int pc)
  * must be macros to avoid header recursion hell
  */
 #define init_task_preempt_count(p) do { \
-	task_thread_info(p)->saved_preempt_count = PREEMPT_DISABLED; \
+	task_thread_info(p)->saved_preempt_count = \
+		2*PREEMPT_DISABLE_OFFSET + PREEMPT_NEED_RESCHED; \
 } while (0)
 
 #define init_idle_preempt_count(p, cpu) do { \
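For the new x86 value, a rough expansion, assuming the 4.3-era definitions
(PREEMPT_DISABLE_OFFSET == 1 under CONFIG_PREEMPT_COUNT, and
PREEMPT_NEED_RESCHED == 0x80000000, the inverted "no reschedule pending" bit
that x86 keeps folded into its count):

	saved_preempt_count = 2*PREEMPT_DISABLE_OFFSET + PREEMPT_NEED_RESCHED
	                    = 2*1 + 0x80000000
	                    = 0x80000002	/* a count of 2, no reschedule pending */

Without CONFIG_PREEMPT_COUNT the offset is 0 and only the PREEMPT_NEED_RESCHED
bit remains.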
diff --git a/include/asm-generic/preempt.h b/include/asm-generic/preempt.h
index 0bec580..1d6f104 100644
--- a/include/asm-generic/preempt.h
+++ b/include/asm-generic/preempt.h
@@ -24,7 +24,7 @@ static __always_inline void preempt_count_set(int pc)
  * must be macros to avoid header recursion hell
  */
 #define init_task_preempt_count(p) do { \
-	task_thread_info(p)->preempt_count = PREEMPT_DISABLED; \
+	task_thread_info(p)->preempt_count = 2*PREEMPT_DISABLED; \
 } while (0)
 
 #define init_idle_preempt_count(p, cpu) do { \
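The generic counterpart works out to the same bare count, assuming
asm-generic/preempt.h's PREEMPT_ENABLED of 0, so PREEMPT_DISABLED is just the
disable offset and there is no need-resched bit to fold in:

	preempt_count = 2*PREEMPT_DISABLED
	              = 2*(PREEMPT_DISABLE_OFFSET + PREEMPT_ENABLED)
	              = 2*(1 + 0)
	              = 2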
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index a91df61..ecd585c 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2588,11 +2588,17 @@ asmlinkage __visible void schedule_tail(struct task_struct *prev)
 {
 	struct rq *rq;
 
-	/* finish_task_switch() drops rq->lock and enables preemtion */
-	preempt_disable();
-	rq = finish_task_switch(prev);
+	/*
+	 * Still have preempt_count() == 2, from:
+	 *
+	 *	schedule()
+	 *	  preempt_disable();			// 1
+	 *	  __schedule()
+	 *	    raw_spin_lock_irq(&rq->lock)	// 2
+	 */
+	rq = finish_task_switch(prev); /* drops rq->lock, preempt_count() == 1 */
 	balance_callback(rq);
-	preempt_enable();
+	preempt_enable(); /* preempt_count() == 0 */
 
 	if (current->set_child_tid)
 		put_user(task_pid_vnr(current), current->set_child_tid);
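For a newly forked task the two init_task_preempt_count() hunks tie into the
comment above roughly as follows (a sketch in the same style; the fork-path
names are from memory, not from this patch): sched_fork() seeds the child's
count, the child is later picked and switched to with the count at 2, and its
first code runs through ret_from_fork into schedule_tail(), which unwinds it
exactly as annotated:

	copy_process()
	  sched_fork()
	    init_task_preempt_count(p)	// child starts with a saved count of 2
	...
	ret_from_fork			// first code the child executes, count == 2
	  schedule_tail()
	    finish_task_switch()	// drops rq->lock,	count == 1
	    balance_callback()
	    preempt_enable()		//			count == 0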
--