Message-ID: <160226288549.7002.94618164546835622.tip-bot2@tip-bot2>
Date: Fri, 09 Oct 2020 17:01:25 -0000
From: "tip-bot2 for Thomas Gleixner" <tip-bot2@...utronix.de>
To: linux-tip-commits@...r.kernel.org
Cc: Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
Daniel Bristot de Oliveira <bristot@...hat.com>,
"Paul E. McKenney" <paulmck@...nel.org>, x86 <x86@...nel.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: [tip: core/rcu] sched: Cleanup PREEMPT_COUNT leftovers
The following commit has been merged into the core/rcu branch of tip:
Commit-ID: 4a291f57d97ce1c14d286b2451573ccbb3b43022
Gitweb: https://git.kernel.org/tip/4a291f57d97ce1c14d286b2451573ccbb3b43022
Author: Thomas Gleixner <tglx@...utronix.de>
AuthorDate: Mon, 14 Sep 2020 19:30:49 +02:00
Committer: Paul E. McKenney <paulmck@...nel.org>
CommitterDate: Mon, 28 Sep 2020 16:03:21 -07:00
sched: Cleanup PREEMPT_COUNT leftovers
CONFIG_PREEMPT_COUNT is now unconditionally enabled and will be
removed. Clean up the leftovers before doing so.
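For readers following along, here is a minimal standalone model of the
kind of leftover this patch removes (a sketch for illustration only, not
kernel source; the config macro is hard-coded to mimic Kconfig output
now that preempt counting is unconditional):

#include <stdio.h>

/* Stand-in for Kconfig output: with preempt counting made
 * unconditional, every build effectively has this set to 1. */
#define CONFIG_PREEMPT_COUNT 1

/*
 * Simplified model of __cant_sleep(): warn when code annotated as
 * atomic is in fact preemptible. The IS_ENABLED() guard existed
 * because preempt_count() told us nothing on !PREEMPT_COUNT kernels;
 * once the option is always on, the early return can never fire.
 */
static void cant_sleep_check(int irqs_disabled, int preempt_cnt,
			     int preempt_offset)
{
	if (irqs_disabled)
		return;			/* IRQs off: atomic, OK */

	if (!CONFIG_PREEMPT_COUNT)	/* always false now... */
		return;			/* ...dead code, hence removed */

	if (preempt_cnt > preempt_offset)
		return;			/* preemption disabled: atomic, OK */

	printf("BUG: assuming atomic context\n");
}

int main(void)
{
	cant_sleep_check(0, 0, 0);	/* preemptible: warns */
	cant_sleep_check(0, 1, 0);	/* preempt disabled: silent */
	return 0;
}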
Signed-off-by: Thomas Gleixner <tglx@...utronix.de>
Cc: Ingo Molnar <mingo@...hat.com>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Juri Lelli <juri.lelli@...hat.com>
Cc: Vincent Guittot <vincent.guittot@...aro.org>
Cc: Dietmar Eggemann <dietmar.eggemann@....com>
Cc: Steven Rostedt <rostedt@...dmis.org>
Cc: Ben Segall <bsegall@...gle.com>
Cc: Mel Gorman <mgorman@...e.de>
Cc: Daniel Bristot de Oliveira <bristot@...hat.com>
Signed-off-by: Paul E. McKenney <paulmck@...nel.org>
---
kernel/sched/core.c | 6 +-----
lib/Kconfig.debug | 1 -
2 files changed, 1 insertion(+), 6 deletions(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 2d95dc3..1c304a1 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3706,8 +3706,7 @@ asmlinkage __visible void schedule_tail(struct task_struct *prev)
 	 * finish_task_switch() for details.
 	 *
 	 * finish_task_switch() will drop rq->lock() and lower preempt_count
-	 * and the preempt_enable() will end up enabling preemption (on
-	 * PREEMPT_COUNT kernels).
+	 * and the preempt_enable() will end up enabling preemption.
 	 */
 
 	rq = finish_task_switch(prev);
@@ -7308,9 +7307,6 @@ void __cant_sleep(const char *file, int line, int preempt_offset)
 	if (irqs_disabled())
 		return;
 
-	if (!IS_ENABLED(CONFIG_PREEMPT_COUNT))
-		return;
-
 	if (preempt_count() > preempt_offset)
 		return;
 
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index d4d0574..52af6ad 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -1320,7 +1320,6 @@ config DEBUG_LOCKDEP
 
 config DEBUG_ATOMIC_SLEEP
 	bool "Sleep inside atomic section checking"
-	select PREEMPT_COUNT
 	depends on DEBUG_KERNEL
 	help
 	  If you say Y here, various routines which may sleep will become very
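The Kconfig change follows the same logic: DEBUG_ATOMIC_SLEEP only
selected PREEMPT_COUNT so that its sleep-in-atomic detector had a
meaningful preempt_count() to inspect. Below is a loose standalone
sketch of that detector's core test (modeled on the idea behind
__might_sleep(), with invented helper names, not the kernel's code):

#include <stdio.h>

/* Toy preempt counter; in the kernel this is per-task/per-CPU state
 * maintained by preempt_disable()/preempt_enable(). Now that it is
 * maintained on every build, DEBUG_ATOMIC_SLEEP no longer has to
 * "select PREEMPT_COUNT" to make this check meaningful. */
static int preempt_cnt;

static void preempt_disable(void) { preempt_cnt++; }
static void preempt_enable(void)  { preempt_cnt--; }

/* Loose model of the DEBUG_ATOMIC_SLEEP check: a function that may
 * sleep must not be called while preemption is disabled. */
static void might_sleep_check(const char *caller)
{
	if (preempt_cnt > 0)
		printf("BUG: sleeping function called from invalid context: %s\n",
		       caller);
}

int main(void)
{
	might_sleep_check("ok path");		/* silent */

	preempt_disable();
	might_sleep_check("atomic path");	/* warns */
	preempt_enable();

	return 0;
}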