Message-Id: <20190829083132.22394-9-duyuyang@gmail.com>
Date: Thu, 29 Aug 2019 16:31:10 +0800
From: Yuyang Du <duyuyang@...il.com>
To: peterz@...radead.org, will.deacon@....com, mingo@...nel.org
Cc: bvanassche@....org, ming.lei@...hat.com, frederic@...nel.org,
tglx@...utronix.de, linux-kernel@...r.kernel.org,
longman@...hat.com, paulmck@...ux.vnet.ibm.com,
boqun.feng@...il.com, Yuyang Du <duyuyang@...il.com>
Subject: [PATCH v4 08/30] locking/lockdep: Skip checks if direct dependency is already present
Given a dependency <prev> -> <next>, two checks are performed:
1. Lock inversion deadlock:
In check_deadlock_graph() we search for a path from <next> to <prev> in the
dependency graph; if one exists, we have a potential deadlock scenario.
But if the direct dependency <prev> -> <next> is already in the graph, there
can't be such a path (i.e., from <next> to <prev>), because otherwise that
path would have been found when the last dependency completing the cycle was
added.
2. IRQ usage violation:
In check_irq_usage() we search the dependency graph for a path through
<prev> to <next> that connects an irq-safe lock to an irq-unsafe lock.
Similarly, if <prev> -> <next> is already in the graph, there can't be such a
path either.
Skipping the checks when the direct dependency is already present should
greatly improve performance, since it reduces the number of deadlock and IRQ
usage searches performed. The number of skipped checks is exactly
nr_redundant, which in practice is not a small number.
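To make the intent concrete, here is a minimal, self-contained C sketch of the
skip (not the kernel code itself: the struct layouts, the singly-linked list,
and the helper name dependency_already_present() are simplified assumptions;
in lockdep the walk is the list_for_each_entry() over
hlock_class(prev)->locks_after shown in the diff below):

/* Simplified stand-ins for lockdep's lock_class / lock_list (assumption). */
struct lock_class {
	const char *name;
};

struct lock_list {
	struct lock_list *next;		/* singly linked here, for brevity */
	struct lock_class *class;
	int distance;
};

/*
 * Sketch of the skip: if <next> is already on <prev>'s locks_after list,
 * record distance 1 if this is now a direct dependency and return 1 so
 * the caller can bypass the deadlock and IRQ usage graph searches.
 */
static int dependency_already_present(struct lock_list *locks_after,
				      struct lock_class *next_class,
				      int distance)
{
	struct lock_list *entry;

	for (entry = locks_after; entry; entry = entry->next) {
		if (entry->class == next_class) {
			if (distance == 1)
				entry->distance = 1;
			return 1;	/* dependency exists: skip the checks */
		}
	}
	return 0;			/* not present: run the full checks */
}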
Signed-off-by: Yuyang Du <duyuyang@...il.com>
---
kernel/locking/lockdep.c | 34 +++++++++++++++++++---------------
1 file changed, 19 insertions(+), 15 deletions(-)
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 4838c99..de088da 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -2433,6 +2433,25 @@ static inline void inc_chains(void)
}
/*
+ * Is the <prev> -> <next> dependency already present?
+ *
+ * (this may occur even though this is a new chain: consider
+ * e.g. the L1 -> L2 -> L3 -> L4 and the L5 -> L1 -> L2 -> L3
+ * chains - the second one will be new, but L1 already has
+ * L2 added to its dependency list, due to the first chain.)
+ */
+ list_for_each_entry(entry, &hlock_class(prev)->locks_after, entry) {
+ if (entry->class == hlock_class(next)) {
+ debug_atomic_inc(nr_redundant);
+
+ if (distance == 1)
+ entry->distance = 1;
+
+ return 1;
+ }
+ }
+
+ /*
* Prove that the new <prev> -> <next> dependency would not
* create a deadlock scenario in the graph. (We do this by
* a breadth-first search into the graph starting at <next>,
@@ -2459,21 +2478,6 @@ static inline void inc_chains(void)
*/
if (next->read == 2 || prev->read == 2)
return 1;
- /*
- * Is the <prev> -> <next> dependency already present?
- *
- * (this may occur even though this is a new chain: consider
- * e.g. the L1 -> L2 -> L3 -> L4 and the L5 -> L1 -> L2 -> L3
- * chains - the second one will be new, but L1 already has
- * L2 added to its dependency list, due to the first chain.)
- */
- list_for_each_entry(entry, &hlock_class(prev)->locks_after, entry) {
- if (entry->class == hlock_class(next)) {
- if (distance == 1)
- entry->distance = 1;
- return 1;
- }
- }
if (!*trace) {
*trace = save_trace();
--
1.8.3.1