Message-ID: <tip-9156e545765e467e6268c4814cfa609ebb16237e@git.kernel.org>
Date:   Tue, 25 Jun 2019 01:47:34 -0700
From:   tip-bot for Kobe Wu <tipbot@...or.com>
To:     linux-tip-commits@...r.kernel.org
Cc:     kobe-cp.wu@...iatek.com, hpa@...or.com,
        linux-kernel@...r.kernel.org, peterz@...radead.org,
        mingo@...nel.org, tglx@...utronix.de,
        linux-mediatek@...ts.infradead.org, torvalds@...ux-foundation.org,
        wsd_upstream@...iatek.com, eason-yh.lin@...iatek.com,
        will.deacon@....com
Subject: [tip:locking/core] locking/lockdep: increase size of counters for
 lockdep statistics

Commit-ID:  9156e545765e467e6268c4814cfa609ebb16237e
Gitweb:     https://git.kernel.org/tip/9156e545765e467e6268c4814cfa609ebb16237e
Author:     Kobe Wu <kobe-cp.wu@...iatek.com>
AuthorDate: Mon, 24 Jun 2019 16:35:48 +0800
Committer:  Ingo Molnar <mingo@...nel.org>
CommitDate: Tue, 25 Jun 2019 10:17:08 +0200

locking/lockdep: increase size of counters for lockdep statistics

When a system has been running for a long time, signed integer
counters are no longer large enough for some lockdep statistics.
Using unsigned long counters satisfies the requirement. Besides,
most lockdep statistics are unsigned, so it is better to use
unsigned int instead of int.

Remove unused variables.
- max_recursion_depth
- nr_cyclic_check_recursions
- nr_find_usage_forwards_recursions
- nr_find_usage_backwards_recursions

Signed-off-by: Kobe Wu <kobe-cp.wu@...iatek.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Cc: <linux-mediatek@...ts.infradead.org>
Cc: <wsd_upstream@...iatek.com>
Cc: Eason Lin <eason-yh.lin@...iatek.com>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Thomas Gleixner <tglx@...utronix.de>
Cc: Will Deacon <will.deacon@....com>
Link: https://lkml.kernel.org/r/1561365348-16050-1-git-send-email-kobe-cp.wu@mediatek.com
Signed-off-by: Ingo Molnar <mingo@...nel.org>
---
 kernel/locking/lockdep_internals.h | 36 ++++++++++++++++--------------------
 1 file changed, 16 insertions(+), 20 deletions(-)

diff --git a/kernel/locking/lockdep_internals.h b/kernel/locking/lockdep_internals.h
index 150ec3f0c5b5..cc83568d5012 100644
--- a/kernel/locking/lockdep_internals.h
+++ b/kernel/locking/lockdep_internals.h
@@ -131,7 +131,6 @@ extern unsigned int nr_hardirq_chains;
 extern unsigned int nr_softirq_chains;
 extern unsigned int nr_process_chains;
 extern unsigned int max_lockdep_depth;
-extern unsigned int max_recursion_depth;
 
 extern unsigned int max_bfs_queue_depth;
 
@@ -160,25 +159,22 @@ lockdep_count_backward_deps(struct lock_class *class)
  * and we want to avoid too much cache bouncing.
  */
 struct lockdep_stats {
-	int	chain_lookup_hits;
-	int	chain_lookup_misses;
-	int	hardirqs_on_events;
-	int	hardirqs_off_events;
-	int	redundant_hardirqs_on;
-	int	redundant_hardirqs_off;
-	int	softirqs_on_events;
-	int	softirqs_off_events;
-	int	redundant_softirqs_on;
-	int	redundant_softirqs_off;
-	int	nr_unused_locks;
-	int	nr_redundant_checks;
-	int	nr_redundant;
-	int	nr_cyclic_checks;
-	int	nr_cyclic_check_recursions;
-	int	nr_find_usage_forwards_checks;
-	int	nr_find_usage_forwards_recursions;
-	int	nr_find_usage_backwards_checks;
-	int	nr_find_usage_backwards_recursions;
+	unsigned long  chain_lookup_hits;
+	unsigned int   chain_lookup_misses;
+	unsigned long  hardirqs_on_events;
+	unsigned long  hardirqs_off_events;
+	unsigned long  redundant_hardirqs_on;
+	unsigned long  redundant_hardirqs_off;
+	unsigned long  softirqs_on_events;
+	unsigned long  softirqs_off_events;
+	unsigned long  redundant_softirqs_on;
+	unsigned long  redundant_softirqs_off;
+	int            nr_unused_locks;
+	unsigned int   nr_redundant_checks;
+	unsigned int   nr_redundant;
+	unsigned int   nr_cyclic_checks;
+	unsigned int   nr_find_usage_forwards_checks;
+	unsigned int   nr_find_usage_backwards_checks;
 
 	/*
 	 * Per lock class locking operation stat counts

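For context, a minimal user-space sketch (not part of the patch) of why
the wider types help: on an LP64 kernel build, a signed int counter runs
out of positive range after about 2.1 billion events, while unsigned long
provides 64 bits of headroom. The program below only prints the relevant
limits from <limits.h>; the ULONG_MAX value shown assumes an LP64 target
such as x86_64.

  #include <limits.h>
  #include <stdio.h>

  int main(void)
  {
          /*
           * On LP64 (e.g. x86_64 Linux), int is 32 bits and unsigned long
           * is 64 bits, so a signed int statistic tops out after roughly
           * 2.1 billion increments, whereas an unsigned long counter
           * effectively never wraps over a system's lifetime.
           */
          printf("INT_MAX   = %d\n",  INT_MAX);    /* 2147483647 */
          printf("UINT_MAX  = %u\n",  UINT_MAX);   /* 4294967295 */
          printf("ULONG_MAX = %lu\n", ULONG_MAX);  /* 18446744073709551615 */
          return 0;
  }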