Message-Id: <1266443901-3646-1-git-send-email-zamsden@redhat.com>
Date:	Wed, 17 Feb 2010 11:58:21 -1000
From:	Zachary Amsden <zamsden@...hat.com>
To:	linux-kernel@...r.kernel.org, Thomas Gleixner <tglx@...utronix.de>,
	Ingo Molnar <mingo@...hat.com>,
	"H. Peter Anvin" <hpa@...or.com>,
	Linus Torvalds <torvalds@...ux-foundation.org>
Cc:	x86@...nel.org, Avi Kivity <avi@...hat.com>,
	Zachary Amsden <zamsden@...hat.com>
Subject: [PATCH] x86 rwsem optimization extreme

The x86 instruction set can fold one extra bit into an addition or
subtraction via the carry flag (adc/sbb), and it provides
instructions (stc/clc) to set or clear that flag directly.
By forcibly setting the carry flag, we can then represent one
particular 64-bit constant, namely

   0xffffffff + 1 = 0x100000000

using only 32-bit values.  In particular, we can optimize the rwsem
write-lock downgrade by noting that its bias adjustment is of
exactly this form.

The old instruction sequence:

0000000000000073 <downgrade_write>:
  73:	55                   	push   %rbp
  74:	48 ba 00 00 00 00 01 	mov    $0x100000000,%rdx
  7b:	00 00 00
  7e:	48 89 f8             	mov    %rdi,%rax
  81:	48 89 e5             	mov    %rsp,%rbp
  84:	f0 48 01 10          	lock add %rdx,(%rax)
  88:	79 05                	jns    8f <downgrade_write+0x1c>
  8a:	e8 00 00 00 00       	callq  8f <downgrade_write+0x1c>
  8f:	c9                   	leaveq
  90:	c3                   	retq

The new instruction sequence:

0000000000000073 <downgrade_write>:
  73:	55                   	push   %rbp
  74:	ba ff ff ff ff       	mov    $0xffffffff,%edx
  79:	48 89 f8             	mov    %rdi,%rax
  7c:	48 89 e5             	mov    %rsp,%rbp
  7f:	f9                   	stc
  80:	f0 48 11 10          	lock adc %rdx,(%rax)
  84:	79 05                	jns    8b <downgrade_write+0x18>
  86:	e8 00 00 00 00       	callq  8b <downgrade_write+0x18>
  8b:	c9                   	leaveq
  8c:	c3                   	retq

Thus we can save a huge amount of space: chiefly, the four extra
bytes that a REX-prefixed 64-bit constant load costs over a 32-bit
constant load plus a one-byte forced carry.

Measured performance impact on Xeon cores is nil; 10e7 loops of
either sequence produce no noticeable cycle-count difference, with
random variation favoring neither.

Update: measured performance impact on AMD Turion core is also nil.

Signed-off-by: Zachary Amsden <zamsden@...hat.com>
---
 arch/x86/include/asm/asm.h   |    1 +
 arch/x86/include/asm/rwsem.h |   23 ++++++++++++++++++-----
 2 files changed, 19 insertions(+), 5 deletions(-)

diff --git a/arch/x86/include/asm/asm.h b/arch/x86/include/asm/asm.h
index b3ed1e1..3744038 100644
--- a/arch/x86/include/asm/asm.h
+++ b/arch/x86/include/asm/asm.h
@@ -25,6 +25,7 @@
 #define _ASM_INC	__ASM_SIZE(inc)
 #define _ASM_DEC	__ASM_SIZE(dec)
 #define _ASM_ADD	__ASM_SIZE(add)
+#define _ASM_ADC	__ASM_SIZE(adc)
 #define _ASM_SUB	__ASM_SIZE(sub)
 #define _ASM_XADD	__ASM_SIZE(xadd)
 
diff --git a/arch/x86/include/asm/rwsem.h b/arch/x86/include/asm/rwsem.h
index 606ede1..147adaf 100644
--- a/arch/x86/include/asm/rwsem.h
+++ b/arch/x86/include/asm/rwsem.h
@@ -233,18 +233,31 @@ static inline void __up_write(struct rw_semaphore *sem)
 static inline void __downgrade_write(struct rw_semaphore *sem)
 {
 	asm volatile("# beginning __downgrade_write\n\t"
+#ifdef CONFIG_X86_64
+#if RWSEM_WAITING_BIAS != -0x100000000
+# error "This code assumes RWSEM_WAITING_BIAS == -2^32"
+#endif
+		     "  stc\n\t"
+		     LOCK_PREFIX _ASM_ADC "%2,(%1)\n\t"
+		     /* transitions 0xZZZZZZZZ00000001 -> 0xYYYYYYYY00000001 */
+		     "  jns       1f\n\t"
+		     "  call call_rwsem_downgrade_wake\n"
+		     "1:\n\t"
+		     "# ending __downgrade_write\n"
+		     : "+m" (sem->count)
+		     : "a" (sem), "r" (-RWSEM_WAITING_BIAS-1)
+		     : "memory", "cc");
+#else
 		     LOCK_PREFIX _ASM_ADD "%2,(%1)\n\t"
-		     /*
-		      * transitions 0xZZZZ0001 -> 0xYYYY0001 (i386)
-		      *     0xZZZZZZZZ00000001 -> 0xYYYYYYYY00000001 (x86_64)
-		      */
+		     /* transitions 0xZZZZ0001 -> 0xYYYY0001 */
 		     "  jns       1f\n\t"
 		     "  call call_rwsem_downgrade_wake\n"
 		     "1:\n\t"
 		     "# ending __downgrade_write\n"
 		     : "+m" (sem->count)
-		     : "a" (sem), "er" (-RWSEM_WAITING_BIAS)
+		     : "a" (sem), "i" (-RWSEM_WAITING_BIAS)
 		     : "memory", "cc");
+#endif
 }
 
 /*
-- 
1.6.6
