Message-Id: <20210318172814.4400-2-longman@redhat.com>
Date: Thu, 18 Mar 2021 13:28:10 -0400
From: Waiman Long <longman@...hat.com>
To: Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>, Will Deacon <will@...nel.org>,
Boqun Feng <boqun.feng@...il.com>,
"Paul E. McKenney" <paulmck@...nel.org>,
Davidlohr Bueso <dave@...olabs.net>
Cc: linux-kernel@...r.kernel.org, Juri Lelli <juri.lelli@...hat.com>,
Waiman Long <longman@...hat.com>
Subject: [PATCH-tip 1/5] locking/ww_mutex: Revert "Treat ww_mutex_lock() like a trylock"
It turns out that treating ww_mutex_lock() as a trylock will fail to catch
real deadlock hazards like:
  Task 1                        Task 2
  ------                        ------
  mutex_lock(&A);               ww_mutex_lock(&B, ctx);
  ww_mutex_lock(&B, ctx);       mutex_lock(&A);
The current lockdep code should be able to handle mixed lock ordering
of ww_mutexes as long as:
 1) there is a top-level nested lock that is acquired beforehand, and
 2) the nested lock and the ww_mutexes are of the same lock class.
Any ww_mutex use case that does not provide the above guarantees will
have to be modified to avoid lockdep problems.
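For illustration, a usage pattern that satisfies lockdep acquires every
ww_mutex under a single acquire context of one ww_class and backs off on
-EDEADLK. This is only a sketch of the documented ww_mutex API; the names
(my_ww_class, lock_both, a, b) are hypothetical and not part of this patch:

```c
/* Hypothetical sketch: all ww_mutexes taken under one acquire ctx
 * of the same ww_class, with deadlock back-off. */
static DEFINE_WW_CLASS(my_ww_class);

static int lock_both(struct ww_mutex *a, struct ww_mutex *b)
{
	struct ww_acquire_ctx ctx;
	int ret;

	ww_acquire_init(&ctx, &my_ww_class);

	ret = ww_mutex_lock(a, &ctx);
	if (ret)
		goto fail;

	ret = ww_mutex_lock(b, &ctx);
	if (ret == -EDEADLK) {
		/* Back off: drop 'a', sleep-wait on 'b', retry 'a'. */
		ww_mutex_unlock(a);
		ww_mutex_lock_slow(b, &ctx);
		ret = ww_mutex_lock(a, &ctx);
		if (ret) {
			ww_mutex_unlock(b);
			goto fail;
		}
	} else if (ret) {
		ww_mutex_unlock(a);
		goto fail;
	}

	ww_acquire_done(&ctx);
	/* ... use both objects, then unlock and ww_acquire_fini() ... */
	return 0;

fail:
	ww_acquire_fini(&ctx);
	return ret;
}
```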
Revert the previous commit b058f2e4d0a7 ("locking/ww_mutex: Treat
ww_mutex_lock() like a trylock").
Fixes: b058f2e4d0a7 ("locking/ww_mutex: Treat ww_mutex_lock() like a trylock")
Signed-off-by: Waiman Long <longman@...hat.com>
---
kernel/locking/mutex.c | 5 +----
1 file changed, 1 insertion(+), 4 deletions(-)
diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index bb89393cd3a2..622ebdfcd083 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -946,10 +946,7 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
 	}

 	preempt_disable();
-	/*
-	 * Treat as trylock for ww_mutex.
-	 */
-	mutex_acquire_nest(&lock->dep_map, subclass, !!ww_ctx, nest_lock, ip);
+	mutex_acquire_nest(&lock->dep_map, subclass, 0, nest_lock, ip);

 	if (__mutex_trylock(lock) ||
 	    mutex_optimistic_spin(lock, ww_ctx, NULL)) {
--
2.18.1