Message-Id: <20210713160746.410991567@linutronix.de>
Date: Tue, 13 Jul 2021 17:10:59 +0200
From: Thomas Gleixner <tglx@...utronix.de>
To: LKML <linux-kernel@...r.kernel.org>
Cc: Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...nel.org>,
Juri Lelli <juri.lelli@...hat.com>,
Steven Rostedt <rostedt@...dmis.org>,
Daniel Bristot de Oliveira <bristot@...hat.com>,
Will Deacon <will@...nel.org>,
Waiman Long <longman@...hat.com>,
Boqun Feng <boqun.feng@...il.com>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Davidlohr Bueso <dave@...olabs.net>
Subject: [patch 05/50] sched: Provide schedule point for RT locks
From: Thomas Gleixner <tglx@...utronix.de>
RT enabled kernels substitute spin/rwlocks with 'sleeping' variants based
on rtmutexes. Blocking on such a lock is similar to preemption with
respect to:
- I/O scheduling and worker handling because these functions might block
on another substituted lock or come from a lock contention within these
functions.
- RCU considers this like a preemption because the task might be in a read
side critical section.
Add a separate scheduling point for this, and hand a new scheduling mode
argument to __schedule() which allows, along with separate mode masks, to
handle this gracefully from within the scheduler, without proliferating
that to other subsystems like RCU.
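
For illustration only (not part of this patch): an rtmutex based lock
slowpath is then expected to block via the new scheduling point roughly as
sketched below. All example_* names are placeholders, TASK_RTLOCK_WAIT
refers to the dedicated wait state provided by other patches in this
series, and waiter setup/wakeup handling is omitted.

static void example_rtlock_slowlock(struct example_lock *lock)
{
	set_current_state(TASK_RTLOCK_WAIT);

	while (!example_try_to_take_lock(lock)) {
		/*
		 * Block through the dedicated scheduling point so the
		 * scheduler core sees SM_RTLOCK_WAIT instead of a plain
		 * schedule() or a preemption.
		 */
		schedule_rtlock();
		set_current_state(TASK_RTLOCK_WAIT);
	}
	__set_current_state(TASK_RUNNING);
}
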
Signed-off-by: Thomas Gleixner <tglx@...utronix.de>
---
include/linux/sched.h | 3 +++
kernel/sched/core.c | 22 ++++++++++++++++++++--
2 files changed, 23 insertions(+), 2 deletions(-)
---
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -294,6 +294,9 @@ extern long schedule_timeout_idle(long t
 asmlinkage void schedule(void);
 extern void schedule_preempt_disabled(void);
 asmlinkage void preempt_schedule_irq(void);
+#ifdef CONFIG_PREEMPT_RT
+ extern void schedule_rtlock(void);
+#endif
 
 extern int __must_check io_schedule_prepare(void);
 extern void io_schedule_finish(int token);
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -5832,8 +5832,14 @@ pick_next_task(struct rq *rq, struct tas
  */
 #define SM_NONE 0x0
 #define SM_PREEMPT 0x1
-#define SM_MASK_PREEMPT UINT_MAX
-#define SM_MASK_STATE SM_MASK_PREEMPT
+#ifndef CONFIG_PREEMPT_RT
+# define SM_MASK_PREEMPT UINT_MAX
+# define SM_MASK_STATE SM_MASK_PREEMPT
+#else
+# define SM_RTLOCK_WAIT 0x2
+# define SM_MASK_PREEMPT SM_PREEMPT
+# define SM_MASK_STATE (SM_PREEMPT | SM_RTLOCK_WAIT)
+#endif
 
 /*
  * __schedule() is the main scheduler function.
@@ -6138,6 +6144,18 @@ void __sched schedule_preempt_disabled(v
 	preempt_disable();
 }
 
+#ifdef CONFIG_PREEMPT_RT
+void __sched notrace schedule_rtlock(void)
+{
+	do {
+		preempt_disable();
+		__schedule(SM_RTLOCK_WAIT);
+		sched_preempt_enable_no_resched();
+	} while (need_resched());
+}
+NOKPROBE_SYMBOL(schedule_rtlock);
+#endif
+
 static void __sched notrace preempt_schedule_common(void)
 {
 	do {
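
Not part of the patch above, but to illustrate what the mode masks are
for: the preemption check inside __schedule() can stay a simple mask test.
The condition below is a sketch modelled on the existing SM_MASK_PREEMPT
usage (prev_state being the task state snapshot __schedule() already
takes); it is not a quote from this series:

	/*
	 * Sketch only: a preemption must keep the task on the runqueue,
	 * while a regular schedule() and an rtlock wait may deactivate
	 * it.  On !PREEMPT_RT the mask is UINT_MAX, so the test compiles
	 * down to a plain sched_mode check.
	 */
	if (!(sched_mode & SM_MASK_PREEMPT) && prev_state) {
		/* voluntary schedule or rtlock wait: task may block */
	} else {
		/* preemption (SM_PREEMPT): task stays runnable */
	}
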