Message-Id: <20190516080015.16033-6-duyuyang@gmail.com>
Date: Thu, 16 May 2019 16:00:03 +0800
From: Yuyang Du <duyuyang@...il.com>
To: peterz@...radead.org, will.deacon@....com, mingo@...nel.org
Cc: bvanassche@....org, ming.lei@...hat.com, frederic@...nel.org,
tglx@...utronix.de, boqun.feng@...il.com, paulmck@...ux.ibm.com,
linux-kernel@...r.kernel.org, Yuyang Du <duyuyang@...il.com>
Subject: [PATCH v2 05/17] locking/lockdep: Rename deadlock check functions
Deadlock checks are performed at two places (sketched below):
- Within the current task's held lock stack, check for lock recursion deadlocks.
- Within the dependency graph, check for lock inversion deadlocks.
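For illustration only (hypothetical spinlocks A and B, not taken from
this patch), the two deadlock classes look roughly like:

	/* Lock recursion: caught by scanning the current context's
	 * held lock stack; the same lock class is acquired twice. */
	spin_lock(&A);
	spin_lock(&A);		/* self deadlock */

	/* Lock inversion: caught by a cycle search in the dependency graph. */
	CPU 0			CPU 1
	-----			-----
	spin_lock(&A);		spin_lock(&B);
	spin_lock(&B);		spin_lock(&A);	/* A -> B and B -> A: circle */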
Rename the two relevant functions for later use. In addition, once read
locks are taken into account, a dependency circle in the graph is no
longer a sufficient condition for a lock inversion deadlock, so the name
check_noncircular() is not entirely accurate.
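As a rough example of why (again with hypothetical rwlocks A and B): two
readers that take the locks in opposite order form a circle in the
dependency graph, yet cannot deadlock, because read locks do not exclude
each other:

	CPU 0			CPU 1
	-----			-----
	read_lock(&A);		read_lock(&B);
	read_lock(&B);		read_lock(&A);	/* circle, but no deadlock */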
No functional change.
Signed-off-by: Yuyang Du <duyuyang@...il.com>
---
kernel/locking/lockdep.c | 15 ++++++++-------
1 file changed, 8 insertions(+), 7 deletions(-)
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 1f1cb21..f4982ad 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -1771,8 +1771,8 @@ static inline void set_lock_type2(struct lock_list *lock, int read)
* Print an error and return 0 if it does.
*/
static noinline int
-check_noncircular(struct held_lock *src, struct held_lock *target,
- struct lock_trace *trace)
+check_deadlock_graph(struct held_lock *src, struct held_lock *target,
+ struct lock_trace *trace)
{
int ret;
struct lock_list *uninitialized_var(target_entry);
@@ -2385,7 +2385,8 @@ static inline void inc_chains(void)
}
/*
- * Check whether we are holding such a class already.
+ * Check whether we are holding such a class already in current
+ * context's held lock stack.
*
* (Note that this has to be done separately, because the graph cannot
* detect such classes of deadlocks.)
@@ -2396,7 +2397,7 @@ static inline void inc_chains(void)
* 3: LOCK_TYPE_RECURSIVE on recursive read
*/
static int
-check_deadlock(struct task_struct *curr, struct held_lock *next)
+check_deadlock_current(struct task_struct *curr, struct held_lock *next)
{
struct held_lock *prev;
struct held_lock *nest = NULL;
@@ -2480,7 +2481,7 @@ static inline void inc_chains(void)
/*
* Prove that the new <prev> -> <next> dependency would not
- * create a circular dependency in the graph. (We do this by
+ * create a deadlock scenario in the graph. (We do this by
* a breadth-first search into the graph starting at <next>,
* and check whether we can reach <prev>.)
*
@@ -2488,7 +2489,7 @@ static inline void inc_chains(void)
* MAX_CIRCULAR_QUEUE_SIZE) which keeps track of a breadth of nodes
* in the graph whose neighbours are to be checked.
*/
- ret = check_noncircular(next, prev, trace);
+ ret = check_deadlock_graph(next, prev, trace);
if (unlikely(ret <= 0))
return 0;
@@ -2983,7 +2984,7 @@ static int validate_chain(struct task_struct *curr,
* The simple case: does the current hold the same lock
* already?
*/
- int ret = check_deadlock(curr, hlock);
+ int ret = check_deadlock_current(curr, hlock);
if (!ret)
return 0;
--
1.8.3.1