Message-Id: <20200121154009.11993-8-longman@redhat.com>
Date:   Tue, 21 Jan 2020 10:40:09 -0500
From:   Waiman Long <longman@...hat.com>
To:     Peter Zijlstra <peterz@...radead.org>,
        Ingo Molnar <mingo@...hat.com>,
        Will Deacon <will.deacon@....com>
Cc:     linux-kernel@...r.kernel.org, Bart Van Assche <bvanassche@....org>,
        Waiman Long <longman@...hat.com>
Subject: [PATCH v4 7/7] locking/lockdep: Add a fast path for chain_hlocks allocation

When alloc_chain_hlocks() is called, the most likely scenario is an
allocation from the primordial chain block, which initially holds the
whole chain_hlocks[] array. A free block is taken to be the primordial
chain block if its size is bigger than MAX_LOCK_DEPTH, since no
individual chain (and hence no freed block) can exceed that many
entries. As long as the number of entries left after splitting is still
bigger than MAX_CHAIN_BUCKETS, the block remains in bucket 0. By
splitting out a sub-block at the end, we only need to adjust the size
without changing any of the existing linkage information. This
optimized fast path reduces the latency of allocation requests.

This patch does change the order in which chain_hlocks entries are
allocated. The original code allocated entries from the beginning of
the array; with the fast path they are now carved off the end of the
primordial block, so allocation proceeds from the end of the array
backward.

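To illustrate the idea, here is a minimal, self-contained sketch (not
the kernel code; hlocks[], NR_ENTRIES, free_size and alloc_from_end()
are made-up stand-ins): carving a request off the end only updates the
size kept in entries 2-3 of the block header, the next-pointer entries
are never touched, and successive allocations come back at decreasing
offsets.

#include <stdio.h>

#define NR_ENTRIES	64			/* toy stand-in for the chain_hlocks[] capacity */

static unsigned short hlocks[NR_ENTRIES];	/* toy stand-in for chain_hlocks[] */
static int free_size = NR_ENTRIES;		/* size of the single free block at offset 0 */

/* Store a 32-bit size in two 16-bit entries of the block header. */
static void set_block_size(int offset, int size)
{
	hlocks[offset + 2] = size >> 16;
	hlocks[offset + 3] = (unsigned short)size;
}

/*
 * Split "req" entries off the *end* of the free block at "offset".
 * Only the stored size changes; entries 0-1 (which would hold the
 * next pointer in the real allocator) are left alone.
 */
static int alloc_from_end(int offset, int req)
{
	if (req <= 0 || free_size - req < 4)	/* keep room for the 4-entry header */
		return -1;

	free_size -= req;
	set_block_size(offset, free_size);
	return offset + free_size;		/* allocated entries start here */
}

int main(void)
{
	set_block_size(0, free_size);
	printf("first alloc(5)  -> offset %d\n", alloc_from_end(0, 5));	/* 59 */
	printf("second alloc(3) -> offset %d\n", alloc_from_end(0, 3));	/* 56 */
	return 0;
}

The decreasing offsets (59, then 56) show the end-of-array-backward
allocation order described above.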
Signed-off-by: Waiman Long <longman@...hat.com>
---
 kernel/locking/lockdep.c | 24 ++++++++++++++++++++----
 1 file changed, 20 insertions(+), 4 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 6d0f6a256d63..12148bb6d2c1 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -2702,15 +2702,19 @@ static inline int chain_block_size(int offset)
 	return (chain_hlocks[offset + 2] << 16) | chain_hlocks[offset + 3];
 }
 
+static inline void init_chain_block_size(int offset, int size)
+{
+	chain_hlocks[offset + 2] = size >> 16;
+	chain_hlocks[offset + 3] = (u16)size;
+}
+
 static inline void init_chain_block(int offset, int next, int bucket, int size)
 {
 	chain_hlocks[offset] = (next >> 16) | CHAIN_BLK_FLAG;
 	chain_hlocks[offset + 1] = (u16)next;
 
-	if (bucket == 0) {
-		chain_hlocks[offset + 2] = size >> 16;
-		chain_hlocks[offset + 3] = (u16)size;
-	}
+	if (bucket == 0)
+		init_chain_block_size(offset, size);
 }
 
 static inline void add_chain_block(int offset, int size)
@@ -2810,6 +2814,18 @@ static int alloc_chain_hlocks(int req)
 			return curr;
 		}
 
+		/*
+		 * Fast path: splitting out a sub-block at the end of the
+		 * primordial chain block.
+		 */
+		if (likely((size > MAX_LOCK_DEPTH) &&
+			   (size - req > MAX_CHAIN_BUCKETS))) {
+			size -= req;
+			nr_free_chain_hlocks -= req;
+			init_chain_block_size(curr, size);
+			return curr + size;
+		}
+
 		if (size > max_size) {
 			max_prev = prev;
 			max_curr = curr;
-- 
2.18.1
