Message-Id: <20231107215742.363031-53-ankur.a.arora@oracle.com>
Date: Tue, 7 Nov 2023 13:57:38 -0800
From: Ankur Arora <ankur.a.arora@...cle.com>
To: linux-kernel@...r.kernel.org
Cc: tglx@...utronix.de, peterz@...radead.org,
torvalds@...ux-foundation.org, paulmck@...nel.org,
linux-mm@...ck.org, x86@...nel.org, akpm@...ux-foundation.org,
luto@...nel.org, bp@...en8.de, dave.hansen@...ux.intel.com,
hpa@...or.com, mingo@...hat.com, juri.lelli@...hat.com,
vincent.guittot@...aro.org, willy@...radead.org, mgorman@...e.de,
jon.grimm@....com, bharata@....com, raghavendra.kt@....com,
boris.ostrovsky@...cle.com, konrad.wilk@...cle.com,
jgross@...e.com, andrew.cooper3@...rix.com, mingo@...nel.org,
bristot@...nel.org, mathieu.desnoyers@...icios.com,
geert@...ux-m68k.org, glaubitz@...sik.fu-berlin.de,
anton.ivanov@...bridgegreys.com, mattst88@...il.com,
krypton@...ich-teichert.org, rostedt@...dmis.org,
David.Laight@...LAB.COM, richard@....at, mjguzik@...il.com,
Ankur Arora <ankur.a.arora@...cle.com>
Subject: [RFC PATCH 52/86] sched: remove CONFIG_PREEMPTION from *_needbreak()

Since CONFIG_PREEMPTION is always enabled, we can remove the clutter.

Signed-off-by: Ankur Arora <ankur.a.arora@...cle.com>
---
 include/linux/sched.h | 15 +++------------
 1 file changed, 3 insertions(+), 12 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 4dabd9530f98..6ba4371761c4 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -2146,16 +2146,13 @@ static inline void cond_resched_rcu(void)
 
 /*
  * Does a critical section need to be broken due to another
- * task waiting?: (technically does not depend on CONFIG_PREEMPTION,
- * but a general need for low latency)
+ * task waiting?: this should really depend on whether we have
+ * sched_feat(FORCE_PREEMPT) or not but that is not visible
+ * outside the scheduler.
  */
 static inline int spin_needbreak(spinlock_t *lock)
 {
-#ifdef CONFIG_PREEMPTION
 	return spin_is_contended(lock);
-#else
-	return 0;
-#endif
 }
 
 /*
@@ -2163,16 +2160,10 @@ static inline int spin_needbreak(spinlock_t *lock)
  * Returns non-zero if there is another task waiting on the rwlock.
  * Returns zero if the lock is not contended or the system / underlying
  * rwlock implementation does not support contention detection.
- * Technically does not depend on CONFIG_PREEMPTION, but a general need
- * for low latency.
  */
 static inline int rwlock_needbreak(rwlock_t *lock)
 {
-#ifdef CONFIG_PREEMPTION
 	return rwlock_is_contended(lock);
-#else
-	return 0;
-#endif
 }
 
 static __always_inline bool need_resched(void)
--
2.31.1
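
[Editor's note: for context, spin_needbreak() is consumed by lock-break
loops; with the #ifdef gone it reports contention under every preemption
model rather than compiling to 0 on non-preemptible kernels. Below is a
minimal sketch of the usual caller pattern. The process_table() function
and table_lock are hypothetical names for illustration only, not code
from this patch.]

#include <linux/sched.h>
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(table_lock);	/* hypothetical lock */

static void process_table(int *table, int nents)
{
	int i;

	spin_lock(&table_lock);
	for (i = 0; i < nents; i++) {
		table[i] = 0;		/* work done under the lock */

		/*
		 * If another task is spinning on table_lock, or we
		 * should reschedule, briefly drop the lock so the
		 * waiter can make progress, then retake it.
		 */
		if (spin_needbreak(&table_lock) || need_resched()) {
			spin_unlock(&table_lock);
			cond_resched();
			spin_lock(&table_lock);
		}
	}
	spin_unlock(&table_lock);
}

[rwlock_needbreak() serves the same role for write-side loops that want
to yield a contended rwlock to waiters.]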