Message-Id: <20220621193641.609712-1-longman@redhat.com>
Date: Tue, 21 Jun 2022 15:36:41 -0400
From: Waiman Long <longman@...hat.com>
To: Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>, Will Deacon <will@...nel.org>,
Boqun Feng <boqun.feng@...il.com>
Cc: linux-kernel@...r.kernel.org, Thomas Gleixner <tglx@...utronix.de>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Juri Lelli <juri.lelli@...hat.com>,
Mike Stowell <mstowell@...hat.com>,
Waiman Long <longman@...hat.com>
Subject: [PATCH v2] locking/rtmutex: Limit # of lock stealing for non-RT waiters
Commit 48eb3f4fcfd3 ("locking/rtmutex: Implement equal priority lock
stealing") allows an unlimited number of lock steals by non-RT
tasks. That can lead to lock starvation of the non-RT top waiter if
there is a constant incoming stream of non-RT lockers, which in turn
can cause task lockups in a PREEMPT_RT kernel. For example,
[ 1249.921363] INFO: task systemd:2178 blocked for more than 622 seconds.
[ 1872.984225] INFO: task kworker/6:4:63401 blocked for more than 622 seconds.
Avoid this problem and ensure forward progress by limiting the
number of times that a lock can be stolen from each waiter. This patch
sets a threshold of 10. That number is arbitrary and can be changed
if needed.
With that change, the task lockups previously observed when running
stressful workloads on a PREEMPT_RT kernel disappeared.
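For illustration, the core of the change reduces to a bounded steal
check against the top waiter. Below is a minimal stand-alone sketch of
that idea; the struct, function names and test harness are simplified
stand-ins for illustration only, not the kernel code:

/*
 * Minimal sketch of bounded lock stealing: a lock may be stolen from
 * the top waiter at most LOCK_STEAL_MAX times, after which the top
 * waiter is guaranteed to make forward progress.
 */
#include <stdbool.h>
#include <stdio.h>

#define LOCK_STEAL_MAX  10      /* arbitrary threshold, as in the patch */

struct waiter {
        int prio;               /* lower value = higher priority */
        unsigned int nr_steals; /* times the lock was stolen from us */
};

/* May @waiter take the lock ahead of the current @top_waiter? */
static bool may_steal(struct waiter *waiter, struct waiter *top_waiter)
{
        /* Only equal-priority steals are allowed at all */
        if (waiter->prio != top_waiter->prio)
                return false;

        /* Bound the number of steals to guarantee forward progress */
        if (top_waiter->nr_steals >= LOCK_STEAL_MAX)
                return false;

        top_waiter->nr_steals++;
        return true;
}

int main(void)
{
        struct waiter top   = { .prio = 20, .nr_steals = 0 };
        struct waiter thief = { .prio = 20, .nr_steals = 0 };
        int i, stolen = 0;

        /* A constant stream of equal-priority lock attempts... */
        for (i = 0; i < 100; i++)
                stolen += may_steal(&thief, &top);

        /* ...now succeeds at most LOCK_STEAL_MAX times */
        printf("steals granted: %d\n", stolen);  /* prints 10 */
        return 0;
}

Tracking the count on the waiter being overtaken, rather than on the
stealing task, is what bounds the total wait: after at most
LOCK_STEAL_MAX successful steals, every later contender fails the
check and the top waiter acquires the lock.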
Fixes: 48eb3f4fcfd3 ("locking/rtmutex: Implement equal priority lock stealing")
Reported-by: Mike Stowell <mstowell@...hat.com>
Signed-off-by: Waiman Long <longman@...hat.com>
---
kernel/locking/rtmutex.c | 9 ++++++---
kernel/locking/rtmutex_common.h | 8 ++++++++
2 files changed, 14 insertions(+), 3 deletions(-)
diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
index 7779ee8abc2a..bdddb3dc36c2 100644
--- a/kernel/locking/rtmutex.c
+++ b/kernel/locking/rtmutex.c
@@ -359,10 +359,13 @@ static inline bool rt_mutex_steal(struct rt_mutex_waiter *waiter,
if (rt_prio(waiter->prio) || dl_prio(waiter->prio))
return false;
- return rt_mutex_waiter_equal(waiter, top_waiter);
-#else
- return false;
+ if (rt_mutex_waiter_equal(waiter, top_waiter) &&
+ (top_waiter->nr_steals < RT_MUTEX_LOCK_STEAL_MAX)) {
+ top_waiter->nr_steals++;
+ return true;
+ }
#endif
+ return false;
}
#define __node_2_waiter(node) \
diff --git a/kernel/locking/rtmutex_common.h b/kernel/locking/rtmutex_common.h
index c47e8361bfb5..5858efe5cb0e 100644
--- a/kernel/locking/rtmutex_common.h
+++ b/kernel/locking/rtmutex_common.h
@@ -26,6 +26,7 @@
* @task: task reference to the blocked task
* @lock: Pointer to the rt_mutex on which the waiter blocks
* @wake_state: Wakeup state to use (TASK_NORMAL or TASK_RTLOCK_WAIT)
+ * @nr_steals: Number of times the lock has been stolen from this waiter
* @prio: Priority of the waiter
* @deadline: Deadline of the waiter if applicable
* @ww_ctx: WW context pointer
@@ -36,11 +37,17 @@ struct rt_mutex_waiter {
struct task_struct *task;
struct rt_mutex_base *lock;
unsigned int wake_state;
+ unsigned int nr_steals;
int prio;
u64 deadline;
struct ww_acquire_ctx *ww_ctx;
};
+/*
+ * The maximum number of times a lock can be stolen per waiter.
+ */
+#define RT_MUTEX_LOCK_STEAL_MAX 10
+
/**
* rt_wake_q_head - Wrapper around regular wake_q_head to support
* "sleeping" spinlocks on RT
@@ -194,6 +201,7 @@ static inline void rt_mutex_init_waiter(struct rt_mutex_waiter *waiter)
RB_CLEAR_NODE(&waiter->tree_entry);
waiter->wake_state = TASK_NORMAL;
waiter->task = NULL;
+ waiter->nr_steals = 0;
}
static inline void rt_mutex_init_rtlock_waiter(struct rt_mutex_waiter *waiter)
--
2.31.1