Date: Fri, 29 Dec 2023 20:58:15 +0000
From: David Laight <David.Laight@...LAB.COM>
To: "'linux-kernel@...r.kernel.org'" <linux-kernel@...r.kernel.org>,
	"'peterz@...radead.org'" <peterz@...radead.org>, "'longman@...hat.com'"
	<longman@...hat.com>
CC: "'mingo@...hat.com'" <mingo@...hat.com>, "'will@...nel.org'"
	<will@...nel.org>, "'boqun.feng@...il.com'" <boqun.feng@...il.com>, "'Linus
 Torvalds'" <torvalds@...ux-foundation.org>, "'xinhui.pan@...ux.vnet.ibm.com'"
	<xinhui.pan@...ux.vnet.ibm.com>,
	"'virtualization@...ts.linux-foundation.org'"
	<virtualization@...ts.linux-foundation.org>, 'Zeng Heng'
	<zengheng4@...wei.com>
Subject: [PATCH next 5/5] locking/osq_lock: Optimise vcpu_is_preempted() check.

The vcpu_is_preempted() test stops osq_lock() spinning if a virtual
cpu is no longer running.
Although patched out for bare-metal builds, the code still needs the
cpu number.
Reading it from 'prev->cpu' is pretty much guaranteed to be a cache
miss when osq_unlock() is waking up the next cpu.
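
For reference, the generic fallback (include/linux/sched.h) is a
constant; paravirt architectures override it, which is why on bare
metal only the cpu-number bookkeeping cost remains:

	#ifndef vcpu_is_preempted
	static inline bool vcpu_is_preempted(int cpu)
	{
		return false;
	}
	#endif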

Instead, save 'prev->cpu' in 'node->prev_cpu' and use that value.
Update it in the osq_lock() 'unqueue' path when 'node->prev' changes.
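
Condensed to the polled expression (a sketch; the full context is in
the diff below), the change is:

	/* before: chases node->prev and reads prev's cacheline, which
	 * the lock owner dirties while releasing the lock */
	vcpu_is_preempted(node_cpu(node->prev))

	/* after: reads a value cached in the waiter's own node */
	vcpu_is_preempted(node->prev_cpu)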

This is simpler than checking whether 'node->prev' has changed and
caching 'prev->cpu' locally.
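
For comparison, a hypothetical sketch of that rejected approach (an
extra pointer compare on every poll iteration, instead of the two
extra stores this patch adds):

	struct optimistic_spin_node *p = READ_ONCE(node->prev);
	if (p != prev) {                  /* predecessor was unqueued */
		prev = p;
		prev_cpu = prev->cpu - 1; /* still a cross-cpu read */
	}
	if (vcpu_is_preempted(prev_cpu))
		break;                    /* stop spinning and unqueue */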

Signed-off-by: David Laight <david.laight@...lab.com>
---
 kernel/locking/osq_lock.c | 14 ++++++--------
 1 file changed, 6 insertions(+), 8 deletions(-)

diff --git a/kernel/locking/osq_lock.c b/kernel/locking/osq_lock.c
index b60b0add0161..89be63627434 100644
--- a/kernel/locking/osq_lock.c
+++ b/kernel/locking/osq_lock.c
@@ -14,8 +14,9 @@
 
 struct optimistic_spin_node {
 	struct optimistic_spin_node *self, *next, *prev;
-	int locked; /* 1 if lock acquired */
-	int cpu; /* encoded CPU # + 1 value */
+	int locked;    /* 1 if lock acquired */
+	int cpu;       /* encoded CPU # + 1 value */
+	int prev_cpu;  /* actual CPU # for vcpu_is_preempted() */
 };
 
 static DEFINE_PER_CPU_SHARED_ALIGNED(struct optimistic_spin_node, osq_node);
@@ -29,11 +30,6 @@ static inline int encode_cpu(int cpu_nr)
 	return cpu_nr + 1;
 }
 
-static inline int node_cpu(struct optimistic_spin_node *node)
-{
-	return node->cpu - 1;
-}
-
 static inline struct optimistic_spin_node *decode_cpu(int encoded_cpu_val)
 {
 	int cpu_nr = encoded_cpu_val - 1;
@@ -114,6 +110,7 @@ bool osq_lock(struct optimistic_spin_queue *lock)
 	if (old == OSQ_UNLOCKED_VAL)
 		return true;
 
+	node->prev_cpu = old - 1;
 	prev = decode_cpu(old);
 	node->prev = prev;
 	node->locked = 0;
@@ -148,7 +145,7 @@ bool osq_lock(struct optimistic_spin_queue *lock)
 	 * polling, be careful.
 	 */
 	if (smp_cond_load_relaxed(&node->locked, VAL || need_resched() ||
-				  vcpu_is_preempted(node_cpu(node->prev))))
+				  vcpu_is_preempted(node->prev_cpu)))
 		return true;
 
 	/* unqueue */
@@ -205,6 +202,7 @@ bool osq_lock(struct optimistic_spin_queue *lock)
 	 * it will wait in Step-A.
 	 */
 
+	WRITE_ONCE(next->prev_cpu, prev->cpu - 1);
 	WRITE_ONCE(next->prev, prev);
 	WRITE_ONCE(prev->next, next);
 
-- 
2.17.1

