Message-Id: <20200526174349.8312-1-longman@redhat.com>
Date:   Tue, 26 May 2020 13:43:49 -0400
From:   Waiman Long <longman@...hat.com>
To:     Peter Zijlstra <peterz@...radead.org>,
        Ingo Molnar <mingo@...hat.com>,
        Will Deacon <will.deacon@....com>
Cc:     linux-kernel@...r.kernel.org, Qian Cai <cai@....pw>,
        Waiman Long <longman@...hat.com>
Subject: [PATCH] locking/lockdep: Increase MAX_LOCKDEP_ENTRIES by half

Qian Cai found that a lockdep splat with the "BUG: MAX_LOCKDEP_ENTRIES
too low" message sometimes appears on linux-next. On a 32-vcpu VM guest
running a v5.7-rc7 based kernel, I looked at how many entries of the
various lockdep tables were in use after bootup and again after a
parallel kernel build (make -j32). The tables below show the usage
statistics.

  After bootup:

  Table               Used        Max      %age
  -----               ----        ---      ----
  lock_classes[]      1834       8192      22.4
  list_entries[]     15646      32768      47.7
  lock_chains[]      20873      65536      31.8
  chain_hlocks[]     83199     327680      25.4
  stack_trace[]     146177     524288      27.9

  After parallel kernel build:

  Table               Used        Max      %age
  -----               ----        ---      ----
  lock_classes[]      1864       8192      22.8
  list_entries[]     17134      32768      52.3
  lock_chains[]      25196      65536      38.4
  chain_hlocks[]    106321     327680      32.4
  stack_trace[]     158700     524288      30.3
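
For reference, these counters can be read at runtime from
/proc/lockdep_stats when lockdep is enabled. The exact formatting
differs between kernel versions, but the list_entries[] usage shows
up roughly as:

  $ grep 'direct dependencies' /proc/lockdep_stats
   direct dependencies:                 15646 [max: 32768]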

Percentage-wise, the list_entries[] table for direct dependencies is
used much more heavily than the other tables. So it is also the table
most likely to run out of space when running a complex workload.
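
To see why running out matters: each recorded dependency permanently
consumes a slot in the statically sized list_entries[] array, and when
no slot is free, lockdep turns itself off. A simplified sketch of the
allocation path, modeled on alloc_list_entry() in
kernel/locking/lockdep.c (not verbatim; details vary by version):

  static struct lock_list list_entries[MAX_LOCKDEP_ENTRIES];
  static DECLARE_BITMAP(list_entries_in_use, MAX_LOCKDEP_ENTRIES);
  static unsigned long nr_list_entries;

  static struct lock_list *alloc_list_entry(void)
  {
          /* Find a free slot in the fixed-size table. */
          int idx = find_first_zero_bit(list_entries_in_use,
                                        ARRAY_SIZE(list_entries));

          if (idx >= ARRAY_SIZE(list_entries)) {
                  /* Table exhausted: disable lockdep and splat. */
                  if (!debug_locks_off_graph_unlock())
                          return NULL;

                  print_lockdep_off("BUG: MAX_LOCKDEP_ENTRIES too low!");
                  return NULL;
          }
          nr_list_entries++;
          __set_bit(idx, list_entries_in_use);
          return list_entries + idx;
  }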

To reduce the likelihood of running out of table entries, we can
increase MAX_LOCKDEP_ENTRIES by 50% from 16384/32768 to 24576/49152.
On a 64-bit architecture, that represents an increase in memory
consumption of 917504 bytes. With that change, the percentage usage of
list_entries[] falls to 31.8% (after bootup) and 34.9% (after the
parallel kernel build), more in line with the other tables.
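
The 917504-byte figure follows from the entry size: assuming
sizeof(struct lock_list) is 56 bytes on a 64-bit architecture (as it
is around v5.7), the extra 16384 entries cost

  (49152 - 32768) * 56 = 16384 * 56 = 917504 bytes (896 KiB).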

Signed-off-by: Waiman Long <longman@...hat.com>
---
 kernel/locking/lockdep_internals.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/kernel/locking/lockdep_internals.h b/kernel/locking/lockdep_internals.h
index baca699b94e9..6108d2fbe775 100644
--- a/kernel/locking/lockdep_internals.h
+++ b/kernel/locking/lockdep_internals.h
@@ -89,12 +89,12 @@ static const unsigned long LOCKF_USED_IN_IRQ_READ =
  * table (if it's not there yet), and we check it for lock order
  * conflicts and deadlocks.
  */
-#define MAX_LOCKDEP_ENTRIES	16384UL
+#define MAX_LOCKDEP_ENTRIES	24576UL
 #define MAX_LOCKDEP_CHAINS_BITS	15
 #define MAX_STACK_TRACE_ENTRIES	262144UL
 #define STACK_TRACE_HASH_SIZE	8192
 #else
-#define MAX_LOCKDEP_ENTRIES	32768UL
+#define MAX_LOCKDEP_ENTRIES	49152UL
 
 #define MAX_LOCKDEP_CHAINS_BITS	16
 
-- 
2.18.1
