Message-Id: <20170925221848.6646-7-boqun.feng@gmail.com>
Date: Tue, 26 Sep 2017 06:18:40 +0800
From: Boqun Feng <boqun.feng@...il.com>
To: linux-kernel@...r.kernel.org
Cc: Ingo Molnar <mingo@...nel.org>,
Peter Zijlstra <peterz@...radead.org>,
Gautham R Shenoy <ego@...ux.vnet.ibm.com>,
Byungchul Park <byungchul.park@....com>,
Boqun Feng <boqun.feng@...il.com>
Subject: [RFC tip/locking/lockdep v3 06/14] lockdep: Support deadlock detection for recursive read in check_noncircular()

Currently, lockdep only has limited support for deadlock detection for
recursive read locks.

The basic idea of the detection is:

Since __bfs() is now able to traverse only strong dependency paths, we
report a circular deadlock whenever we find a strong dependency path
that forms a circle.
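
To make the rule concrete, here is a minimal userspace sketch of the
conflict test (not the kernel code: the types below are simplified
stand-ins, and hlock_class() is replaced by a direct field access; only
the final check mirrors hlock_conflict() in the patch):

	#include <stdio.h>

	/* Simplified stand-ins for lockdep's types (illustration only). */
	struct lock_class { int id; };

	/* An edge in the dependency graph, pointing at @class. */
	struct lock_list {
		struct lock_class *class;
		int is_rr;	/* path ends in a recursive-read dependency */
	};

	/* A lock acquisition; read == 2 means a recursive read. */
	struct held_lock {
		struct lock_class *class;
		int read;
	};

	/*
	 * A dependency path ending at @entry only closes a *strong*
	 * circle with @hlock if it reaches the same lock class and the
	 * two ends are not both recursive reads: a recursive reader is
	 * never blocked by a path whose last dependency is itself a
	 * recursive-read one.
	 */
	static int hlock_conflict(struct lock_list *entry,
				  struct held_lock *hlock)
	{
		return hlock->class == entry->class &&
		       (hlock->read != 2 || !entry->is_rr);
	}

	int main(void)
	{
		struct lock_class A = { .id = 1 };
		struct held_lock rr = { .class = &A, .read = 2 };
		struct lock_list rr_edge = { .class = &A, .is_rr = 1 };
		struct lock_list w_edge  = { .class = &A, .is_rr = 0 };

		/* recursive read vs recursive-read edge: no strong circle */
		printf("rr edge conflicts: %d\n",
		       hlock_conflict(&rr_edge, &rr));
		/* recursive read vs non-rr edge: strong circle, report it */
		printf("non-rr edge conflicts: %d\n",
		       hlock_conflict(&w_edge, &rr));
		return 0;
	}

In the real call site (check_prev_add() in the last hunk), the BFS root
is the lock being acquired (next) and the target held_lock is prev, so
a match means that adding the prev -> next dependency would complete a
strong dependency circle.
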
Signed-off-by: Boqun Feng <boqun.feng@...il.com>
---
kernel/locking/lockdep.c | 16 ++++++++++++----
1 file changed, 12 insertions(+), 4 deletions(-)
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 9e7647e40918..a68f7df8adc5 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -1342,6 +1342,14 @@ static inline int class_equal(struct lock_list *entry, void *data)
return entry->class == data;
}
+static inline int hlock_conflict(struct lock_list *entry, void *data)
+{
+ struct held_lock *hlock = (struct held_lock *)data;
+
+ return hlock_class(hlock) == entry->class &&
+ (hlock->read != 2 || !entry->is_rr);
+}
+
static noinline int print_circular_bug(struct lock_list *this,
struct lock_list *target,
struct held_lock *check_src,
@@ -1456,18 +1464,18 @@ unsigned long lockdep_count_backward_deps(struct lock_class *class)
}
/*
- * Prove that the dependency graph starting at <entry> can not
+ * Prove that the dependency graph starting at <root> can not
* lead to <target>. Print an error and return BFS_RMATCH if it does.
*/
static noinline enum bfs_result
-check_noncircular(struct lock_list *root, struct lock_class *target,
+check_noncircular(struct lock_list *root, struct held_lock *target,
struct lock_list **target_entry)
{
enum bfs_result result;
debug_atomic_inc(nr_cyclic_checks);
- result = __bfs_forwards(root, target, class_equal, target_entry);
+ result = __bfs_forwards(root, target, hlock_conflict, target_entry);
return result;
}
@@ -1998,7 +2006,7 @@ check_prev_add(struct task_struct *curr, struct held_lock *prev,
* keep the stackframe size of the recursive functions low:
*/
bfs_init_root(&this, next);
- ret = check_noncircular(&this, hlock_class(prev), &target_entry);
+ ret = check_noncircular(&this, prev, &target_entry);
if (unlikely(ret == BFS_RMATCH))
return print_circular_bug(&this, target_entry, next, prev, trace);
else if (unlikely(bfs_error(ret)))
--
2.14.1