Message-Id: <20230228224938.88035-1-brennanlamoreaux@gmail.com>
Date: Tue, 28 Feb 2023 14:49:38 -0800
From: "Brennan Lamoreaux (VMware)" <brennanlamoreaux@...il.com>
To: linux-rt-users@...r.kernel.org, linux-kernel@...r.kernel.org
Cc: blamoreaux@...are.com, frederic.martinsons@...il.com,
srivatsa@...il.mit.edu, vsirnapalli@...are.com,
amakhalov@...are.com, keerthanak@...are.com, ankitja@...are.com,
bordoloih@...are.com, srivatsab@...are.com,
"Brennan Lamoreaux (VMware)" <brennanlamoreaux@...il.com>,
Daniel Wagner <wagi@...om.org>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Tejun Heo <tj@...nel.org>
Subject: [PATCH 4.19-rt] workqueue: Fix deadlock due to recursive locking of pool->lock
Upstream commit d8bb65ab70f7 ("workqueue: Use rcuwait for wq_manager_wait")
replaced the waitqueue with rcuwait in the workqueue code. As part of that
change, the acquisition of pool->lock in put_unbound_pool() was removed,
because the commit also added the helper wq_manager_inactive(), which
acquires this same lock and is called one line later as the condition
argument to rcuwait_wait_event().
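For reference, the helper added by that upstream commit looks roughly like
this (a paraphrased sketch, with the lock flavour shown as it appears in the
4.19-rt tree, not a verbatim copy of the upstream source):

    static bool wq_manager_inactive(struct worker_pool *pool)
    {
            raw_spin_lock_irq(&pool->lock);

            if (pool->flags & POOL_MANAGER_ACTIVE) {
                    raw_spin_unlock_irq(&pool->lock);
                    return false;
            }

            /* Return with pool->lock held; the waiter keeps it on success. */
            return true;
    }

Since the helper takes pool->lock itself, the caller must not already hold
the lock when the wait condition is evaluated.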
However, the backport of this commit in the PREEMPT_RT patchset
4.19.255-rt114 (patch 347) missed removing that pool->lock acquisition in
put_unbound_pool(). This leads to a deadlock due to recursive locking of
pool->lock, as shown by the lockdep splat below:
[ 252.083713] WARNING: possible recursive locking detected
[ 252.083718] 4.19.269-3.ph3-rt #1-photon Not tainted
[ 252.083721] --------------------------------------------
[ 252.083733] kworker/2:0/33 is trying to acquire lock:
[ 252.083747] 000000000b7b1ceb (&pool->lock/1){....}, at: put_unbound_pool+0x10d/0x260
[ 252.083857]
but task is already holding lock:
[ 252.083860] 000000000b7b1ceb (&pool->lock/1){....}, at: put_unbound_pool+0xbd/0x260
[ 252.083876]
other info that might help us debug this:
[ 252.083897] Possible unsafe locking scenario:
[ 252.083900] CPU0
[ 252.083903] ----
[ 252.083904] lock(&pool->lock/1);
[ 252.083911] lock(&pool->lock/1);
[ 252.083919]
*** DEADLOCK ***
[ 252.083921] May be due to missing lock nesting notation
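The splat corresponds to the following sequence in the backported
put_unbound_pool() (a simplified sketch, not the exact 4.19-rt source):

    raw_spin_lock_irq(&pool->lock);               /* leftover first acquisition */
    rcuwait_wait_event(&manager_wait, wq_manager_inactive(pool),
                       TASK_UNINTERRUPTIBLE);     /* condition takes pool->lock again */
    pool->flags |= POOL_MANAGER_ACTIVE;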
Fix this deadlock by removing the leftover pool->lock acquisition in
put_unbound_pool(), matching the upstream commit.
Signed-off-by: Brennan Lamoreaux (VMware) <brennanlamoreaux@...il.com>
Cc: Daniel Wagner <wagi@...om.org>
Cc: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Cc: Tejun Heo <tj@...nel.org>
---
kernel/workqueue.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index a9f3cc02bdc1..55ebdd56a5de 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -3394,7 +3394,6 @@ static void put_unbound_pool(struct worker_pool *pool)
* Because of how wq_manager_inactive() works, we will hold the
* spinlock after a successful wait.
*/
- raw_spin_lock_irq(&pool->lock);
rcuwait_wait_event(&manager_wait, wq_manager_inactive(pool),
TASK_UNINTERRUPTIBLE);
pool->flags |= POOL_MANAGER_ACTIVE;
--
2.35.6