Date:	Sun, 28 Dec 2014 01:11:23 -0800
From:	Davidlohr Bueso <dave@...olabs.net>
To:	Peter Zijlstra <peterz@...radead.org>,
	Ingo Molnar <mingo@...nel.org>
Cc:	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
	Davidlohr Bueso <dave@...olabs.net>,
	linux-kernel@...r.kernel.org, Davidlohr Bueso <dbueso@...e.de>
Subject: [PATCH 8/8] locking/osq: No need for load/acquire when acquire-polling

Both mutexes and rwsems took a performance hit when we switched
over from the original MCS code to the cancelable variant (osq).
The reason is the use of smp_load_acquire() when polling for
node->locked. Paul describes the scenario nicely:
https://lkml.org/lkml/2013/11/19/405

The smp_load_acquire() when unqueuing makes sense. In addition,
we don't need to worry about leaking the critical region, as
osq is only used internally.
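
For illustration only, here is a userspace sketch (C11 atomics, not
the kernel primitives themselves) of the two load flavours discussed
above, assuming the rough mapping smp_load_acquire() ~
memory_order_acquire and READ_ONCE() ~ memory_order_relaxed; the
names and types below are made up for the example:

#include <stdatomic.h>
#include <stdbool.h>

struct spin_node {
	atomic_int locked;
};

/*
 * Spin until we are granted the lock. No per-iteration ordering is
 * needed here (the osq polling case), so a relaxed load suffices;
 * the real kernel loop additionally checks need_resched() and calls
 * cpu_relax().
 */
static bool poll_locked_relaxed(struct spin_node *node)
{
	while (!atomic_load_explicit(&node->locked, memory_order_relaxed))
		;
	return true;
}

/*
 * One-shot check where the caller must also observe the lock
 * holder's prior writes (the unqueue case above): this is where an
 * acquire load is warranted.
 */
static bool check_locked_acquire(struct spin_node *node)
{
	return atomic_load_explicit(&node->locked, memory_order_acquire);
}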

This benefits both regular and large levels of concurrency and
hardware; e.g., on a 40-core system with a disk-intensive workload
(throughput, baseline vs. patched):
disk-1               804.83 (  0.00%)      828.16 (  2.90%)
disk-61             8063.45 (  0.00%)    18181.82 (125.48%)
disk-121            7187.41 (  0.00%)    20119.17 (179.92%)
disk-181            6933.32 (  0.00%)    20509.91 (195.82%)
disk-241            6850.81 (  0.00%)    20397.80 (197.74%)
disk-301            6815.22 (  0.00%)    20287.58 (197.68%)
disk-361            7080.40 (  0.00%)    20205.22 (185.37%)
disk-421            7076.13 (  0.00%)    19957.33 (182.04%)
disk-481            7083.25 (  0.00%)    19784.06 (179.31%)
disk-541            7038.39 (  0.00%)    19610.92 (178.63%)
disk-601            7072.04 (  0.00%)    19464.53 (175.23%)
disk-661            7010.97 (  0.00%)    19348.23 (175.97%)
disk-721            7069.44 (  0.00%)    19255.33 (172.37%)
disk-781            7007.58 (  0.00%)    19103.14 (172.61%)
disk-841            6981.18 (  0.00%)    18964.22 (171.65%)
disk-901            6968.47 (  0.00%)    18826.72 (170.17%)
disk-961            6964.61 (  0.00%)    18708.02 (168.62%)

Signed-off-by: Davidlohr Bueso <dbueso@...e.de>
---
 kernel/locking/osq_lock.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/locking/osq_lock.c b/kernel/locking/osq_lock.c
index 9c6e251..d10dfb9 100644
--- a/kernel/locking/osq_lock.c
+++ b/kernel/locking/osq_lock.c
@@ -109,7 +109,7 @@ bool osq_lock(struct optimistic_spin_queue *lock)
 	 * cmpxchg in an attempt to undo our queueing.
 	 */
 
-	while (!smp_load_acquire(&node->locked)) {
+	while (!READ_ONCE(node->locked)) {
 		/*
 		 * If we need to reschedule bail... so we can block.
 		 */
-- 
2.1.2
