Message-ID:
<ME0P282MB48902DA6F350513DBD1BE902CCD62@ME0P282MB4890.AUSP282.PROD.OUTLOOK.COM>
Date: Wed, 26 Jun 2024 19:01:53 +0800
From: Roland Xu <mu001999@...look.com>
To: peterz@...radead.org,
mingo@...hat.com,
will@...nel.org
Cc: linux-kernel@...r.kernel.org,
Roland Xu <mu001999@...look.com>
Subject: [PATCH] locking/rtmutex: Avoid scheduling while atomic on early deadlock detection

rt_mutex_handle_deadlock() is called with the rt_mutex's wait_lock held.
Drop it with raw_spin_unlock_irq() once the deadlock has been detected,
before parking the task in the schedule loop. Otherwise the task blocks
with the raw spinlock still held, triggering scheduling while atomic.
Signed-off-by: Roland Xu <mu001999@...look.com>
---
kernel/locking/rtmutex.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
index 88d08eeb8bc0..9188bfb63cb6 100644
--- a/kernel/locking/rtmutex.c
+++ b/kernel/locking/rtmutex.c
@@ -1644,6 +1644,7 @@ static int __sched rt_mutex_slowlock_block(struct rt_mutex_base *lock,
 }
 
 static void __sched rt_mutex_handle_deadlock(int res, int detect_deadlock,
+					     struct rt_mutex_base *lock,
 					     struct rt_mutex_waiter *w)
 {
 	/*
@@ -1660,6 +1661,7 @@ static void __sched rt_mutex_handle_deadlock(int res, int detect_deadlock,
 	 * Yell loudly and stop the task right here.
 	 */
 	WARN(1, "rtmutex deadlock detected\n");
+	raw_spin_unlock_irq(&lock->wait_lock);
 	while (1) {
 		set_current_state(TASK_INTERRUPTIBLE);
 		rt_mutex_schedule();
@@ -1713,7 +1715,7 @@ static int __sched __rt_mutex_slowlock(struct rt_mutex_base *lock,
 	} else {
 		__set_current_state(TASK_RUNNING);
 		remove_waiter(lock, waiter);
-		rt_mutex_handle_deadlock(ret, chwalk, waiter);
+		rt_mutex_handle_deadlock(ret, chwalk, lock, waiter);
 	}
 
 	/*
--
2.34.1