Message-Id: <5eee7cc1408aa148ee7401e0216793865b3a73ef.1764648076.git.wen.yang@linux.dev>
Date: Tue, 2 Dec 2025 13:51:18 +0800
From: wen.yang@...ux.dev
To: Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Juri Lelli <juri.lelli@...hat.com>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>
Cc: Wen Yang <wen.yang@...ux.dev>,
Vincent Guittot <vincent.guittot@...aro.org>,
Ben Segall <bsegall@...gle.com>,
Mel Gorman <mgorman@...e.de>,
Valentin Schneider <vschneid@...hat.com>,
linux-kernel@...r.kernel.org
Subject: [PATCH 1/2] sched/debug: add explicit TASK_RTLOCK_WAIT printing
From: Wen Yang <wen.yang@...ux.dev>
A priority-inversion scenario can occur in which a CFS task is starved
due to RT throttling. The scenario is as follows (a minimal user-space
sketch follows the list):
0. An rtmutex (e.g., softirq_ctrl.lock) is contended by both CFS
tasks (e.g., ksoftirqd) and RT tasks (e.g., ktimer).
1. An RT task 'A' (e.g., ktimer) acquires the rtmutex.
2. A CFS task 'B' (e.g., ksoftirqd) attempts to acquire the same
rtmutex and blocks.
3. A higher-priority RT task 'C' (e.g., stress-ng) runs for an
extended period, preempting task 'A' and causing the RT runqueue
to be throttled.
4. Once the RT runqueue is throttled, CFS task 'B' should run, but it
remains blocked because the lock is still held by the non-running RT
task 'A'. This can even leave the CPU idle.
5. When the RT throttling period ends, the high-priority RT task 'C'
resumes execution and the cycle repeats, leading to indefinite
starvation of CFS task 'B'.
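For illustration only, the cycle above can be approximated in user space
with SCHED_FIFO threads and a non-PI pthread mutex pinned to a single
CPU. This is a hedged sketch, not the in-kernel rtlock path: the names,
priorities and sleeps are arbitrary, and the CFS waiter here sleeps in a
futex rather than in TASK_RTLOCK_WAIT.

/*
 * Minimal user-space analogue of steps 0-5 above.
 * Build: gcc -O2 -pthread inversion.c -o inversion  (run as root)
 */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <unistd.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;  /* no PI */

static void pin_and_set(int policy, int prio)
{
	cpu_set_t set;
	struct sched_param sp = { .sched_priority = prio };

	CPU_ZERO(&set);
	CPU_SET(0, &set);                 /* everything contends on CPU 0 */
	sched_setaffinity(0, sizeof(set), &set);
	sched_setscheduler(0, policy, &sp);
}

static void *task_a(void *arg)            /* low-prio RT lock holder */
{
	pin_and_set(SCHED_FIFO, 10);
	pthread_mutex_lock(&lock);
	sleep(60);                        /* hold the lock for a long time */
	pthread_mutex_unlock(&lock);
	return NULL;
}

static void *task_b(void *arg)            /* CFS waiter, the "ksoftirqd" */
{
	pin_and_set(SCHED_OTHER, 0);
	pthread_mutex_lock(&lock);        /* blocks behind task 'A' */
	pthread_mutex_unlock(&lock);
	return NULL;
}

static void *task_c(void *arg)            /* high-prio RT spinner */
{
	pin_and_set(SCHED_FIFO, 20);
	for (;;)                          /* preempts 'A', trips throttling */
		;
	return NULL;
}

int main(void)
{
	pthread_t a, b, c;

	pthread_create(&a, NULL, task_a, NULL);
	sleep(1);                         /* let 'A' take the lock first */
	pthread_create(&b, NULL, task_b, NULL);
	sleep(1);
	pthread_create(&c, NULL, task_c, NULL);
	pthread_join(b, NULL);            /* 'B' never gets the lock */
	return 0;
}

During each throttled window 'B' is runnable but still blocked on the
mutex, so CPU 0 sits idle, matching step 4 above.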
A typical stack trace for the blocked ksoftirqd shows it waiting on the
lock in TASK_RTLOCK_WAIT, which is currently reported as an
indistinguishable 'D' state:
ksoftirqd/5-61 [005] d...211 58212.064160: sched_switch: prev_comm=ksoftirqd/5 prev_pid=61 prev_prio=120 prev_state=D ==> next_comm=swapper/5 next_pid=0 next_prio=120
ksoftirqd/5-61 [005] d...211 58212.064161: <stack trace>
=> __schedule
=> schedule_rtlock
=> rtlock_slowlock_locked
=> rt_spin_lock
=> __local_bh_disable_ip
=> run_ksoftirqd
=> smpboot_thread_fn
=> kthread
=> ret_from_fork
This patch makes TASK_RTLOCK_WAIT a distinct state 'L' in task state reporting,
allowing user-space tools (e.g., stalld) to detect blocked tasks in this state
and potentially boost the lock holder or adjust the priority of the blocked
CFS task to resolve the inversion.
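For example, a stalld-like monitor could poll the state field of
/proc/<pid>/stat for the new character. A minimal sketch, assuming this
patch is applied (error handling and comm corner cases elided):

/* Report whether a task is stuck in the new 'L' (rtlock wait) state. */
#include <stdio.h>
#include <stdlib.h>

static int task_state(int pid)
{
	char path[64];
	char state = '?';
	FILE *f;

	snprintf(path, sizeof(path), "/proc/%d/stat", pid);
	f = fopen(path, "r");
	if (!f)
		return -1;
	/* Format: pid (comm) state ...; %*[^)] skips the comm field
	 * (a comm containing ')' would need smarter parsing). */
	if (fscanf(f, "%*d (%*[^)]) %c", &state) != 1)
		state = '?';
	fclose(f);
	return state;
}

int main(int argc, char **argv)
{
	int pid = argc > 1 ? atoi(argv[1]) : 1;
	int s = task_state(pid);

	if (s == 'L')
		printf("pid %d is blocked on an rtlock\n", pid);
	else
		printf("pid %d state: %c\n", pid, s > 0 ? (char)s : '?');
	return 0;
}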
This requires shuffling the state bits so that TASK_RTLOCK_WAIT fits
within the TASK_REPORT mask.
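The resulting bit layout can be sanity-checked outside the kernel. The
constants below mirror the patched <linux/sched.h>; the assert is the
same invariant as the BUILD_BUG_ON in task_index_to_char():

#include <assert.h>
#include <stdio.h>

#define TASK_RUNNING		0x0000
#define TASK_INTERRUPTIBLE	0x0001
#define TASK_UNINTERRUPTIBLE	0x0002
#define __TASK_STOPPED		0x0004
#define __TASK_TRACED		0x0008
#define EXIT_DEAD		0x0010
#define EXIT_ZOMBIE		0x0020
#define TASK_PARKED		0x0040
#define TASK_RTLOCK_WAIT	0x0080	/* new position, inside TASK_REPORT */

#define TASK_REPORT	(TASK_RUNNING | TASK_INTERRUPTIBLE | \
			 TASK_UNINTERRUPTIBLE | __TASK_STOPPED | \
			 __TASK_TRACED | EXIT_DEAD | EXIT_ZOMBIE | \
			 TASK_PARKED | TASK_RTLOCK_WAIT)	/* 0x00ff */
#define TASK_REPORT_IDLE	(TASK_REPORT + 1)		/* 0x0100 */
#define TASK_REPORT_MAX		(TASK_REPORT_IDLE << 1)		/* 0x0200 */

static const char state_char[] = "RSDTtXZPLI";

static int fls(unsigned int x)	/* same result as the kernel's fls() */
{
	return x ? 32 - __builtin_clz(x) : 0;
}

int main(void)
{
	/* 0x200 * 2 == 1 << 10, and sizeof(state_char) == 11 */
	assert(TASK_REPORT_MAX * 2 == 1 << (sizeof(state_char) - 1));
	/* fls(0x80) == 8, and state_char[8] == 'L' */
	printf("TASK_RTLOCK_WAIT -> '%c'\n", state_char[fls(TASK_RTLOCK_WAIT)]);
	return 0;
}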
Signed-off-by: Wen Yang <wen.yang@...ux.dev>
Cc: Ingo Molnar <mingo@...hat.com>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Juri Lelli <juri.lelli@...hat.com>
Cc: Vincent Guittot <vincent.guittot@...aro.org>
Cc: Dietmar Eggemann <dietmar.eggemann@....com>
Cc: Steven Rostedt <rostedt@...dmis.org>
Cc: Ben Segall <bsegall@...gle.com>
Cc: Mel Gorman <mgorman@...e.de>
Cc: Valentin Schneider <vschneid@...hat.com>
Cc: linux-kernel@...r.kernel.org
---
fs/proc/array.c | 3 ++-
include/linux/sched.h | 21 +++++++++------------
include/trace/events/sched.h | 1 +
3 files changed, 12 insertions(+), 13 deletions(-)
diff --git a/fs/proc/array.c b/fs/proc/array.c
index cbd4bc4a58e4..a9b7e5a920c1 100644
--- a/fs/proc/array.c
+++ b/fs/proc/array.c
@@ -134,9 +134,10 @@ static const char * const task_state_array[] = {
"X (dead)", /* 0x10 */
"Z (zombie)", /* 0x20 */
"P (parked)", /* 0x40 */
+ "L (rtlock wait)", /* 0x80 */
/* states beyond TASK_REPORT: */
- "I (idle)", /* 0x80 */
+ "I (idle)", /* 0x100 */
};
static inline const char *get_task_state(struct task_struct *tsk)
diff --git a/include/linux/sched.h b/include/linux/sched.h
index d395f2810fac..455e41aa073f 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -113,12 +113,12 @@ struct user_event_mm;
#define EXIT_TRACE (EXIT_ZOMBIE | EXIT_DEAD)
/* Used in tsk->__state again: */
#define TASK_PARKED 0x00000040
-#define TASK_DEAD 0x00000080
-#define TASK_WAKEKILL 0x00000100
-#define TASK_WAKING 0x00000200
-#define TASK_NOLOAD 0x00000400
-#define TASK_NEW 0x00000800
-#define TASK_RTLOCK_WAIT 0x00001000
+#define TASK_RTLOCK_WAIT 0x00000080
+#define TASK_DEAD 0x00000100
+#define TASK_WAKEKILL 0x00000200
+#define TASK_WAKING 0x00000400
+#define TASK_NOLOAD 0x00000800
+#define TASK_NEW 0x00001000
#define TASK_FREEZABLE 0x00002000
#define __TASK_FREEZABLE_UNSAFE (0x00004000 * IS_ENABLED(CONFIG_LOCKDEP))
#define TASK_FROZEN 0x00008000
@@ -145,7 +145,7 @@ struct user_event_mm;
#define TASK_REPORT (TASK_RUNNING | TASK_INTERRUPTIBLE | \
TASK_UNINTERRUPTIBLE | __TASK_STOPPED | \
__TASK_TRACED | EXIT_DEAD | EXIT_ZOMBIE | \
- TASK_PARKED)
+ TASK_PARKED | TASK_RTLOCK_WAIT)
#define task_is_running(task) (READ_ONCE((task)->__state) == TASK_RUNNING)
@@ -1672,12 +1672,9 @@ static inline unsigned int __task_state_index(unsigned int tsk_state,
state = TASK_REPORT_IDLE;
/*
- * We're lying here, but rather than expose a completely new task state
- * to userspace, we can make this appear as if the task has gone through
- * a regular rt_mutex_lock() call.
* Report frozen tasks as uninterruptible.
*/
- if ((tsk_state & TASK_RTLOCK_WAIT) || (tsk_state & TASK_FROZEN))
+ if (tsk_state & TASK_FROZEN)
state = TASK_UNINTERRUPTIBLE;
return fls(state);
@@ -1690,7 +1687,7 @@ static inline unsigned int task_state_index(struct task_struct *tsk)
static inline char task_index_to_char(unsigned int state)
{
- static const char state_char[] = "RSDTtXZPI";
+ static const char state_char[] = "RSDTtXZPLI";
BUILD_BUG_ON(TASK_REPORT_MAX * 2 != 1 << (sizeof(state_char) - 1));
diff --git a/include/trace/events/sched.h b/include/trace/events/sched.h
index 7b2645b50e78..2e22bb74900a 100644
--- a/include/trace/events/sched.h
+++ b/include/trace/events/sched.h
@@ -259,6 +259,7 @@ TRACE_EVENT(sched_switch,
{ EXIT_DEAD, "X" },
{ EXIT_ZOMBIE, "Z" },
{ TASK_PARKED, "P" },
+ { TASK_RTLOCK_WAIT, "L" },
{ TASK_DEAD, "I" }) :
"R",
--
2.25.1