Message-ID: <5f02cf04-74bf-46e5-8104-a62d4aca2bfd@redhat.com>
Date: Wed, 26 Mar 2025 14:26:53 -0400
From: Waiman Long <llong@...hat.com>
To: Boqun Feng <boqun.feng@...il.com>, Ingo Molnar <mingo@...nel.org>,
Peter Zijlstra <peterz@...radead.org>
Cc: Will Deacon <will@...nel.org>, linux-kernel@...r.kernel.org,
stable@...r.kernel.org, Ingo Molnar <mingo@...hat.com>
Subject: Re: [PATCH] locking: lockdep: Decrease nr_unused_locks if lock unused
in zap_class()
On 3/26/25 2:08 PM, Boqun Feng wrote:
> Currently, when a lock class is allocated, nr_unused_locks is
> increased by 1, and the class stays counted as unused until it gets
> used for the first time, at which point nr_unused_locks is decreased
> by 1 in mark_lock(). However, one scenario is missed: a lock class
> may be zapped without ever being used. This can result in a situation
> where nr_unused_locks != 0 although no unused lock class is active in
> the system, and running `cat /proc/lockdep_stats` then triggers a
> WARN_ON() in a CONFIG_DEBUG_LOCKDEP=y kernel:
>
> [...] DEBUG_LOCKS_WARN_ON(debug_atomic_read(nr_unused_locks) != nr_unused)
> [...] WARNING: CPU: 41 PID: 1121 at kernel/locking/lockdep_proc.c:283 lockdep_stats_show+0xba9/0xbd0
>
> And as a result, lockdep will be disabled after this.
>
> Therefore, nr_unused_locks needs to be accounted correctly at
> zap_class() time.
>
> Cc: stable@...r.kernel.org
> Signee-off-by: Boqun Feng <boqun.feng@...il.com>
Typo: "Signee-off-by"?
Other than that, LGTM
Reviewed-by: Waiman Long <longman@...hat.com>
> ---
> kernel/locking/lockdep.c | 3 +++
> 1 file changed, 3 insertions(+)
>
> diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
> index b15757e63626..686546d52337 100644
> --- a/kernel/locking/lockdep.c
> +++ b/kernel/locking/lockdep.c
> @@ -6264,6 +6264,9 @@ static void zap_class(struct pending_free *pf, struct lock_class *class)
> hlist_del_rcu(&class->hash_entry);
> WRITE_ONCE(class->key, NULL);
> WRITE_ONCE(class->name, NULL);
> + /* class allocated but not used, -1 in nr_unused_locks */
> + if (class->usage_mask == 0)
> + debug_atomic_dec(nr_unused_locks);
> nr_lock_classes--;
> __clear_bit(class - lock_classes, lock_classes_in_use);
> if (class - lock_classes == max_lock_class_idx)
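
For anyone following along, here is a small userspace sketch of the
accounting flow the patch fixes. The names mirror lockdep
(register_lock_class(), mark_lock(), zap_class(), usage_mask,
nr_unused_locks), but the bodies are deliberately simplified
illustrations of the counting logic, not the kernel implementation:

/*
 * Simplified model of nr_unused_locks accounting (illustrative only;
 * the real code uses atomic debug counters and struct lock_class).
 */
#include <assert.h>
#include <stdio.h>

static int nr_unused_locks;
static int nr_lock_classes;

struct lock_class {
	unsigned long usage_mask;	/* 0 => never used since registration */
};

static void register_lock_class(struct lock_class *class)
{
	class->usage_mask = 0;
	nr_lock_classes++;
	nr_unused_locks++;		/* a new class starts out unused */
}

static void mark_lock(struct lock_class *class, unsigned long usage_bit)
{
	if (class->usage_mask == 0)
		nr_unused_locks--;	/* first use: no longer unused */
	class->usage_mask |= usage_bit;
}

static void zap_class(struct lock_class *class)
{
	/* the fix: a class zapped before ever being used must also
	   drop out of the unused count */
	if (class->usage_mask == 0)
		nr_unused_locks--;
	nr_lock_classes--;
}

int main(void)
{
	struct lock_class a, b;

	register_lock_class(&a);
	register_lock_class(&b);
	mark_lock(&a, 1UL << 0);	/* 'a' gets used, 'b' never does */
	zap_class(&b);			/* without the fix, the count stays at 1 */

	/* mirrors the DEBUG_LOCKS_WARN_ON() check in lockdep_stats_show() */
	assert(nr_unused_locks == 0);
	printf("classes=%d unused=%d\n", nr_lock_classes, nr_unused_locks);
	return 0;
}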