Message-Id: <20190513091203.7299-15-duyuyang@gmail.com>
Date: Mon, 13 May 2019 17:12:00 +0800
From: Yuyang Du <duyuyang@...il.com>
To: peterz@...radead.org, will.deacon@....com, mingo@...nel.org
Cc: bvanassche@....org, ming.lei@...hat.com, frederic@...nel.org,
tglx@...utronix.de, boqun.feng@...il.com,
linux-kernel@...r.kernel.org, Yuyang Du <duyuyang@...il.com>
Subject: [PATCH 14/17] locking/lockdep: Support recursive read locks
Now we are finally ready to support recursive read locks.

This is done simply by adding dependencies that involve recursive read
locks to the graph; previously such dependencies were skipped.

The reason this is safe is simple (see the sketch below):

 (a) A recursive read lock differs from an ordinary read lock only
     within a task.
 (b) check_deadlock_current() already handles recursive read locks
     pretty well.
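As an illustration, here is a minimal sketch of the kind of dependency
this patch starts recording (all lock and function names are
hypothetical; read_lock() on an rwlock_t is the kernel's recursive
read lock):

	#include <linux/spinlock.h>

	static DEFINE_RWLOCK(lock_a);	/* read side is recursive */
	static DEFINE_SPINLOCK(lock_b);

	static void take_b(void)
	{
		/*
		 * Dependency A(read) -> B: with this patch it is added
		 * to the graph; previously it was skipped because A is
		 * held as a recursive read lock.
		 */
		read_lock(&lock_a);
		spin_lock(&lock_b);
		spin_unlock(&lock_b);
		read_unlock(&lock_a);
	}

	static void take_a_again(void)
	{
		read_lock(&lock_a);
		take_b();	/* re-reading A in the same task is safe */
		read_unlock(&lock_a);
	}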
Now that the bulk of the read-write lock deadlock detection algorithm
is implemented, lockdep's internal performance statistics can be
collected and compared (the counters below are those reported in
/proc/lockdep_stats; the workload used, as usual, is: make clean;
reboot; make vmlinux -j8):
Before
------
direct dependencies: 8980 [max: 32768]
indirect dependencies: 41065
all direct dependencies: 211016
dependency chains: 11592 [max: 65536]
dependency chain hlocks: 42869 [max: 327680]
in-hardirq chains: 85
in-softirq chains: 385
in-process chains: 10250
stack-trace entries: 123121 [max: 524288]
combined max dependencies: 340292196
max bfs queue depth: 244
chain lookup misses: 12703
chain lookup hits: 666620822
cyclic checks: 11392
redundant checks: 0
After
-----
direct dependencies: 9624 [max: 32768]
indirect dependencies: 43759
all direct dependencies: 216024
dependency chains: 12119 [max: 65536]
dependency chain hlocks: 44654 [max: 327680]
in-hardirq chains: 88
in-softirq chains: 383
in-process chains: 10544
stack-trace entries: 132362 [max: 524288]
combined max dependencies: 360385920
max bfs queue depth: 250
chain lookup misses: 13528
chain lookup hits: 664721852
cyclic checks: 12053
redundant checks: 0
Most noticeably, we have slightly more direct dependencies (+7.2%),
dependency chains (+4.5%), and chain lookup misses (+6.5%), but none
of these increases raises concern.
Signed-off-by: Yuyang Du <duyuyang@...il.com>
---
kernel/locking/lockdep.c | 47 +++++++++++++++++++++++++++--------------------
1 file changed, 27 insertions(+), 20 deletions(-)
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 417b23d..69d6bd6 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -1805,6 +1805,9 @@ static inline void set_lock_type2(struct lock_list *lock, int read)
 		.parent = NULL,
 	};
 
+	if (DEBUG_LOCKS_WARN_ON(hlock_class(src) == hlock_class(target)))
+		return 0;
+
 	debug_atomic_inc(nr_cyclic_checks);
 
 	while (true) {
@@ -1861,9 +1864,17 @@ static inline void set_lock_type2(struct lock_list *lock, int read)
 			break;
 
 		/*
+		 * This previous lock has the same class as
+		 * the src (the next lock to acquire); this
+		 * must be a recursive read case. Skip.
+		 */
+		if (hlock_class(prev) == hlock_class(src))
+			continue;
+
+		/*
 		 * Since the src lock (the next lock to
-		 * acquire) is neither recursive nor nested
-		 * lock, so this prev class cannot be src
+		 * acquire) is not a nested lock, up to
+		 * now this prev class cannot be the src
 		 * class, then does the path have this
 		 * previous lock?
 		 *
@@ -2609,6 +2620,17 @@ static inline void inc_chains(void)
 	}
 
 	/*
+	 * Filter out a direct dependency between locks of the same class,
+	 * which is legitimate only if both locks are recursive reads;
+	 * otherwise we should not have gotten here in the first place.
+	 */
+	if (hlock_class(prev) == hlock_class(next)) {
+		WARN_ON_ONCE(next->read != LOCK_TYPE_RECURSIVE);
+		WARN_ON_ONCE(prev->read != LOCK_TYPE_RECURSIVE);
+		return 2;
+	}
+
+	/*
 	 * Prove that the new <prev> -> <next> dependency would not
 	 * create a deadlock scenario in the graph. (We do this by
 	 * a breadth-first search into the graph starting at <next>,
@@ -2626,16 +2648,6 @@ static inline void inc_chains(void)
 		return 0;
 
 	/*
-	 * For recursive read-locks we do all the dependency checks,
-	 * but we dont store read-triggered dependencies (only
-	 * write-triggered dependencies). This ensures that only the
-	 * write-side dependencies matter, and that if for example a
-	 * write-lock never takes any other locks, then the reads are
-	 * equivalent to a NOP.
-	 */
-	if (next->read == LOCK_TYPE_RECURSIVE || prev->read == LOCK_TYPE_RECURSIVE)
-		return 1;
-	/*
 	 * Is the <prev> -> <next> dependency already present?
 	 *
 	 * (this may occur even though this is a new chain: consider
@@ -2734,11 +2746,7 @@ static inline void inc_chains(void)
 		int distance = curr->lockdep_depth - depth + 1;
 		hlock = curr->held_locks + depth - 1;
 
-		/*
-		 * Only non-recursive-read entries get new dependencies
-		 * added:
-		 */
-		if (hlock->read != LOCK_TYPE_RECURSIVE && hlock->check) {
+		if (hlock->check) {
 			int ret = check_prev_add(curr, hlock, next, distance,
 						 &trace);
 			if (!ret)
@@ -3131,10 +3139,9 @@ static int validate_chain(struct task_struct *curr,
 
 		/*
 		 * Add dependency only if this lock is not the head
-		 * of the chain, and if it's not a secondary read-lock:
+		 * of the chain, and if it's not a nest lock:
 		 */
-		if (!chain_head && ret != LOCK_TYPE_RECURSIVE &&
-		    ret != LOCK_TYPE_NEST) {
+		if (!chain_head && ret != LOCK_TYPE_NEST) {
 			if (!check_prevs_add(curr, hlock))
 				return 0;
 		}
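
For reference, here is a minimal sketch (hypothetical names) of the
same-class case that the new filter in check_prev_add() tolerates
without recording a dependency: both held locks are recursive reads of
one class, so the function returns 2 rather than adding a
self-dependency to the graph:

	#include <linux/spinlock.h>

	static DEFINE_RWLOCK(tree_lock);	/* one lock class */

	static void visit_subtree(void)
	{
		/*
		 * Nested read acquisition of the same class: prev and
		 * next are both recursive reads of tree_lock, so the
		 * hlock_class(prev) == hlock_class(next) filter in
		 * check_prev_add() returns 2 and no self-dependency
		 * is added to the graph.
		 */
		read_lock(&tree_lock);
		/* ... walk children ... */
		read_unlock(&tree_lock);
	}

	static void visit_tree(void)
	{
		read_lock(&tree_lock);
		visit_subtree();	/* same task, same class, read again */
		read_unlock(&tree_lock);
	}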
--
1.8.3.1