Message-ID: <20250513142954.ZM5QSQNc@linutronix.de>
Date: Tue, 13 May 2025 16:29:54 +0200
From: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
To: linux-kernel@...r.kernel.org
Cc: Ben Segall <bsegall@...gle.com>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Ingo Molnar <mingo@...hat.com>, Juri Lelli <juri.lelli@...hat.com>,
Mel Gorman <mgorman@...e.de>, Peter Zijlstra <peterz@...radead.org>,
Steven Rostedt <rostedt@...dmis.org>,
Thomas Gleixner <tglx@...utronix.de>,
Valentin Schneider <vschneid@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>
Subject: [RFC] sched: Remove a preempt-disable section in rt_mutex_setprio()
rt_mutex_setprio() has only one caller: rt_mutex_adjust_prio(). It
expects task_struct::pi_lock and rt_mutex_base::wait_lock to be held.
Both locks are raw_spinlock_t and are acquired with interrupts disabled.
Nevertheless rt_mutex_setprio() disables preemption while invoking
__balance_callbacks() and raw_spin_rq_unlock(). Even if the possible
balance callbacks unlock the rq, they must not enable interrupts, as I
doubt that they also unlock rt_mutex_base::wait_lock.
Therefore interrupts remain disabled throughout and disabling preemption
is not needed.
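
For reference, a sketch of the only call path (paraphrased from
kernel/locking/rtmutex.c, not the verbatim source; details vary between
kernel versions):

	/*
	 * Paraphrased sketch: both locks are raw_spinlock_t and were
	 * acquired by the caller with interrupts disabled, so the
	 * task cannot be preempted here in the first place.
	 */
	static void rt_mutex_adjust_prio(struct rt_mutex_base *lock,
					 struct task_struct *p)
	{
		struct task_struct *pi_task = NULL;

		lockdep_assert_held(&lock->wait_lock);
		lockdep_assert_held(&p->pi_lock);

		if (task_has_pi_waiters(p))
			pi_task = task_top_pi_waiter(p)->task;

		rt_mutex_setprio(p, pi_task);
	}
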
Commit 4c9a4bc89a9cc ("sched: Allow balance callbacks for check_class_changed()")
added the preempt-disable section to both rt_mutex_setprio() and
__sched_setscheduler(). In __sched_setscheduler() preemption is disabled
before the rq is unlocked and interrupts are enabled again, so there the
balance callbacks could otherwise be preempted and the rq could go away
under us. I don't see why it makes a difference in rt_mutex_setprio(),
where interrupts never get enabled.
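
For contrast, a paraphrased sketch of the corresponding section in
__sched_setscheduler() (names as in recent kernels, may differ slightly):

	/* Avoid rq from going away on us: */
	preempt_disable();
	head = splice_balance_callbacks(rq);
	task_rq_unlock(rq, p, &rf);	/* re-enables interrupts */

	/*
	 * Interrupts are on again at this point, so only the
	 * preempt-disable section keeps the task on this CPU while
	 * the balance callbacks run.
	 */
	balance_callbacks(rq, head);
	preempt_enable();
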
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
---
kernel/sched/core.c | 4 ----
1 file changed, 4 deletions(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index c81cf642dba05..1790304d2c5ae 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -7274,14 +7274,10 @@ void rt_mutex_setprio(struct task_struct *p, struct task_struct *pi_task)
 
 	check_class_changed(rq, p, prev_class, oldprio);
 out_unlock:
-	/* Avoid rq from going away on us: */
-	preempt_disable();
 
 	rq_unpin_lock(rq, &rf);
 	__balance_callbacks(rq);
 	raw_spin_rq_unlock(rq);
-
-	preempt_enable();
 }
 
 #endif
--
2.49.0