Date:	Mon, 17 Nov 2008 22:59:50 +0100
From:	Ingo Molnar <mingo@...e.hu>
To:	Linus Torvalds <torvalds@...ux-foundation.org>
Cc:	Eric Dumazet <dada1@...mosbay.com>,
	David Miller <davem@...emloft.net>, rjw@...k.pl,
	linux-kernel@...r.kernel.org, kernel-testers@...r.kernel.org,
	cl@...ux-foundation.org, efault@....de, a.p.zijlstra@...llo.nl,
	Stephen Hemminger <shemminger@...tta.com>,
	"H. Peter Anvin" <hpa@...or.com>,
	Thomas Gleixner <tglx@...utronix.de>
Subject: system_call() - Re: [Bug #11308] tbench regression on each kernel
	release from 2.6.22 -> 2.6.28


* Ingo Molnar <mingo@...e.hu> wrote:

> 100.000000 total
> ................
>   1.508888 system_call

that's an easy one:

ffffffff8020be00:    97321 <system_call>:
ffffffff8020be00:    97321 	0f 01 f8             	swapgs 
ffffffff8020be03:    53089 	66 66 66 90          	xchg   %ax,%ax
ffffffff8020be07:     1524 	66 66 90             	xchg   %ax,%ax
ffffffff8020be0a:        0 	66 66 90             	xchg   %ax,%ax
ffffffff8020be0d:        0 	66 66 90             	xchg   %ax,%ax

ffffffff8020be10:     1511 <system_call_after_swapgs>:
ffffffff8020be10:     1511 	65 48 89 24 25 18 00 	mov    %rsp,%gs:0x18
ffffffff8020be17:        0 	00 00 
ffffffff8020be19:        0 	65 48 8b 24 25 10 00 	mov    %gs:0x10,%rsp
ffffffff8020be20:        0 	00 00 
ffffffff8020be22:     1490 	fb                   	sti    

Those are syscall entry instruction costs - unavoidable security 
checks, etc. - i.e. pure hardware costs.

But looking at this profile made me notice this detail:

  ENTRY(system_call_after_swapgs)

Combined with this alignment rule we have in 
arch/x86/include/asm/linkage.h on 64-bit:

  #ifdef CONFIG_X86_64
  #define __ALIGN .p2align 4,,15
  #define __ALIGN_STR ".p2align 4,,15"
  #endif

While the assembler fills that gap with NOP sequences, it is still +13 
bytes of excessive, stupid alignment padding sitting straight in our 
syscall entry path.

system_call_after_swapgs is an utter slowpath in any case. The interim 
fix is below - although it needs more thinking, and should probably be 
done via an ENTRY_UNALIGNED() method instead, usable for all such 
slowpath targets.
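
No such ENTRY_UNALIGNED() exists in the tree; a minimal sketch of what 
it might look like next to the existing ENTRY() in 
arch/x86/include/asm/linkage.h - same global-symbol boilerplate, minus 
the __ALIGN padding:

```c
/* Hypothetical: like ENTRY(), but without the __ALIGN directive,
 * for slowpath entry points where padding only costs I-cache. */
#define ENTRY_UNALIGNED(name)	\
	.globl name;		\
	name:
```

That would let the patch below stay a one-line change at each slowpath 
label instead of open-coding .globl by hand.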

With that we get this much nicer entry sequence:

ffffffff8020be00:   544323 <system_call>:
ffffffff8020be00:   544323 	0f 01 f8             	swapgs 

ffffffff8020be03:   197954 <system_call_after_swapgs>:
ffffffff8020be03:   197954 	65 48 89 24 25 18 00 	mov    %rsp,%gs:0x18
ffffffff8020be0a:        0 	00 00 
ffffffff8020be0c:     6578 	65 48 8b 24 25 10 00 	mov    %gs:0x10,%rsp
ffffffff8020be13:        0 	00 00 
ffffffff8020be15:        0 	fb                   	sti    
ffffffff8020be16:        0 	48 83 ec 50          	sub    $0x50,%rsp

And we should probably weaken the generic code alignment rules as well 
on x86. I'll do some measurements of it.

	Ingo

Index: linux/arch/x86/kernel/entry_64.S
===================================================================
--- linux.orig/arch/x86/kernel/entry_64.S
+++ linux/arch/x86/kernel/entry_64.S
@@ -315,7 +315,8 @@ ENTRY(system_call)
 	 * after the swapgs, so that it can do the swapgs
 	 * for the guest and jump here on syscall.
 	 */
-ENTRY(system_call_after_swapgs)
+.globl system_call_after_swapgs
+system_call_after_swapgs:
 
 	movq	%rsp,%gs:pda_oldrsp 
 	movq	%gs:pda_kernelstack,%rsp
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
