Message-Id: <20190228171242.32144-12-frederic@kernel.org>
Date: Thu, 28 Feb 2019 18:12:16 +0100
From: Frederic Weisbecker <frederic@...nel.org>
To: LKML <linux-kernel@...r.kernel.org>
Cc: Frederic Weisbecker <frederic@...nel.org>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Peter Zijlstra <peterz@...radead.org>,
"David S . Miller" <davem@...emloft.net>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Mauro Carvalho Chehab <mchehab+samsung@...nel.org>,
Thomas Gleixner <tglx@...utronix.de>,
"Paul E . McKenney" <paulmck@...ux.vnet.ibm.com>,
Frederic Weisbecker <fweisbec@...il.com>,
Pavan Kondeti <pkondeti@...eaurora.org>,
Ingo Molnar <mingo@...nel.org>,
Joel Fernandes <joel@...lfernandes.org>
Subject: [PATCH 11/37] locking/lockdep: Save stack trace for each softirq vector involved
We are going to save as many stack traces as there are softirq vectors
involved in a given usage. Expand the stack trace record code accordingly.
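
For context, for_each_bit_nr() is a helper introduced earlier in this
series (it is not an upstream macro). The snippet below is a minimal,
standalone userspace sketch of the same idea: walk every set bit of the
usage mask and record one trace per bit. record_trace() is a hypothetical
stand-in for lockdep's save_trace(); none of this is the actual lockdep
code.

	/*
	 * Sketch only: iterate the set bits of a usage mask and record
	 * one entry per bit, mirroring save_trace_mask() storing one
	 * stack trace per softirq vector involved.
	 */
	#include <stdint.h>
	#include <stdio.h>

	static int record_trace(unsigned int bit)
	{
		/* In lockdep this would snapshot the current stack trace. */
		printf("saving trace for usage bit %u\n", bit);
		return 1; /* save_trace() returns 0 on failure */
	}

	static int record_trace_mask(uint64_t mask)
	{
		/* Equivalent of for_each_bit_nr(mask, bit): visit each set bit. */
		while (mask) {
			unsigned int bit = __builtin_ctzll(mask);

			if (!record_trace(bit))
				return -1;
			mask &= mask - 1; /* clear the lowest set bit */
		}
		return 0;
	}

	int main(void)
	{
		/* e.g. LOCK_USED plus two vector usage bits set */
		return record_trace_mask((1ULL << 0) | (1ULL << 3) | (1ULL << 7)) ? 1 : 0;
	}
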
Reviewed-by: David S. Miller <davem@...emloft.net>
Signed-off-by: Frederic Weisbecker <frederic@...nel.org>
Cc: Mauro Carvalho Chehab <mchehab+samsung@...nel.org>
Cc: Joel Fernandes <joel@...lfernandes.org>
Cc: Thomas Gleixner <tglx@...utronix.de>
Cc: Pavan Kondeti <pkondeti@...eaurora.org>
Cc: Paul E . McKenney <paulmck@...ux.vnet.ibm.com>
Cc: David S . Miller <davem@...emloft.net>
Cc: Ingo Molnar <mingo@...nel.org>
Cc: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Peter Zijlstra <peterz@...radead.org>
---
include/linux/lockdep.h | 3 ++-
kernel/locking/lockdep.c | 14 +++++++++++++-
2 files changed, 15 insertions(+), 2 deletions(-)
diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
index 06669f20a30a..69d2dac3d821 100644
--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -31,8 +31,9 @@ extern int lock_stat;
/*
* We'd rather not expose kernel/lockdep_states.h this wide, but we do need
* the total number of states... :-(
+ * 1 bit for LOCK_USED, 4 bits for hardirqs and 4 * NR_SOFTIRQS bits
*/
-#define XXX_LOCK_USAGE_STATES (1+2*4)
+#define XXX_LOCK_USAGE_STATES (1 + (1 + 10) * 4)
/*
* NR_LOCKDEP_CACHING_CLASSES ... Number of classes
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 9a5f2dbc3812..a369e7de3ade 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -3147,6 +3147,18 @@ static inline int separate_irq_context(struct task_struct *curr,
#endif /* defined(CONFIG_TRACE_IRQFLAGS) && defined(CONFIG_PROVE_LOCKING) */
+static int save_trace_mask(struct lock_class *class, u64 mask)
+{
+ int bit;
+
+
+ for_each_bit_nr(mask, bit)
+ if (!save_trace(class->usage_traces + bit))
+ return -1;
+
+ return 0;
+}
+
/*
* Mark a lock with a usage bit, and validate the state transition:
*/
@@ -3174,7 +3186,7 @@ static int mark_lock(struct task_struct *curr, struct held_lock *this,
hlock_class(this)->usage_mask |= new_mask;
- if (!save_trace(hlock_class(this)->usage_traces + new_usage->bit))
+ if (save_trace_mask(hlock_class(this), new_mask) < 0)
return 0;
switch (new_usage->bit) {
--
2.21.0