Message-ID: <160414383158.397.3025016969633820018.tip-bot2@tip-bot2>
Date: Sat, 31 Oct 2020 11:30:31 -0000
From: "tip-bot2 for Peter Zijlstra" <tip-bot2@...utronix.de>
To: linux-tip-commits@...r.kernel.org
Cc: Chris Wilson <chris@...is-wilson.co.uk>,
"Peter Zijlstra (Intel)" <peterz@...radead.org>,
x86 <x86@...nel.org>, LKML <linux-kernel@...r.kernel.org>
Subject: [tip: locking/urgent] lockdep: Fix nr_unused_locks accounting
The following commit has been merged into the locking/urgent branch of tip:

Commit-ID:     1a39340865ce505a029b37aeb47a3e4c8db5f6c6
Gitweb:        https://git.kernel.org/tip/1a39340865ce505a029b37aeb47a3e4c8db5f6c6
Author:        Peter Zijlstra <peterz@...radead.org>
AuthorDate:    Tue, 27 Oct 2020 13:48:34 +01:00
Committer:     Peter Zijlstra <peterz@...radead.org>
CommitterDate: Fri, 30 Oct 2020 17:07:18 +01:00

lockdep: Fix nr_unused_locks accounting

Chris reported that commit 24d5a3bffef1 ("lockdep: Fix
usage_traceoverflow") breaks the nr_unused_locks validation code
triggered by /proc/lockdep_stats.

With LOCK_USED and LOCK_USED_READ fully split, LOCK_USED alone became a
bad indicator for accounting nr_unused_locks; simplify by treating any
first usage bit as the class's first use.
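
As a hedged illustration (a minimal userspace sketch, not the kernel
code; the toy_* names below are made up for the example), the idea is
to decrement the unused counter on the mask's zero -> non-zero
transition rather than when one specific bit gets set:

#include <assert.h>
#include <stdio.h>

/* Toy stand-ins for lockdep's state; names are illustrative only. */
#define TOY_USED       (1u << 0)
#define TOY_USED_READ  (1u << 1)

struct toy_class {
        unsigned int usage_mask;
};

static int toy_nr_unused;

static void toy_mark(struct toy_class *c, unsigned int new_mask)
{
        if (c->usage_mask & new_mask)
                return;                 /* bit already recorded */

        /* Any first bit marks the class as used. */
        if (!c->usage_mask)
                toy_nr_unused--;

        c->usage_mask |= new_mask;
}

int main(void)
{
        struct toy_class a = { 0 };

        toy_nr_unused = 1;

        toy_mark(&a, TOY_USED_READ);    /* first (read) use decrements */
        toy_mark(&a, TOY_USED);         /* second bit: no double decrement */

        assert(toy_nr_unused == 0);
        printf("toy_nr_unused = %d\n", toy_nr_unused);
        return 0;
}
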
Fixes: 24d5a3bffef1 ("lockdep: Fix usage_traceoverflow")
Reported-by: Chris Wilson <chris@...is-wilson.co.uk>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Tested-by: Chris Wilson <chris@...is-wilson.co.uk>
Link: https://lkml.kernel.org/r/20201027124834.GL2628@hirez.programming.kicks-ass.net
---
 kernel/locking/lockdep.c | 14 ++++----------
 1 file changed, 4 insertions(+), 10 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 1102849..b71ad8d 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -4396,6 +4396,9 @@ static int mark_lock(struct task_struct *curr, struct held_lock *this,
         if (unlikely(hlock_class(this)->usage_mask & new_mask))
                 goto unlock;
 
+        if (!hlock_class(this)->usage_mask)
+                debug_atomic_dec(nr_unused_locks);
+
         hlock_class(this)->usage_mask |= new_mask;
 
         if (new_bit < LOCK_TRACE_STATES) {
@@ -4403,19 +4406,10 @@ static int mark_lock(struct task_struct *curr, struct held_lock *this,
                         return 0;
         }
 
-        switch (new_bit) {
-        case 0 ... LOCK_USED-1:
+        if (new_bit < LOCK_USED) {
                 ret = mark_lock_irq(curr, this, new_bit);
                 if (!ret)
                         return 0;
-                break;
-
-        case LOCK_USED:
-                debug_atomic_dec(nr_unused_locks);
-                break;
-
-        default:
-                break;
         }
 
 unlock:
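
For completeness, the invariant that the /proc/lockdep_stats validation
enforces can be sketched the same way (again illustrative, reusing the
toy types from the sketch above, not the kernel's implementation): the
counter must match a walk over all classes that counts zero masks.

/* Reuses struct toy_class and toy_nr_unused from the sketch above. */
static int toy_count_unused(const struct toy_class *classes, int n)
{
        int unused = 0;

        for (int i = 0; i < n; i++)
                if (!classes[i].usage_mask)
                        unused++;

        return unused;
}

/* The check, in spirit: assert(toy_count_unused(classes, n) == toy_nr_unused); */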