Date:	Thu, 29 Nov 2007 12:01:53 -0800
From:	"H. Peter Anvin" <hpa@...or.com>
To:	Ingo Molnar <mingo@...e.hu>
CC:	Andi Kleen <andi@...stfloor.org>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Andi Kleen <ak@...e.de>, Chuck Ebbert <cebbert@...hat.com>,
	Roland McGrath <roland@...hat.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	linux-kernel@...r.kernel.org, Thomas Gleixner <tglx@...utronix.de>,
	Jeremy Fitzhardinge <jeremy@...p.org>, zach@...are.com
Subject: Re: [PATCH x86/mm 6/6] x86-64 ia32 ptrace get/putreg32 current task

Ingo Molnar wrote:
> * Andi Kleen <andi@...stfloor.org> wrote:
> 
>> For i386 iirc Jeremy/Zach did the benchmarking and they settled on %fs 
>> because it was faster for something (originally it was %gs too)
> 
> yep. IIRC, some CPUs only optimize %fs because that's what Windows uses 
> and leaves Linux with %gs out in the cold. There's also a performance 
> penalty for overlapping segment use, if the segment cache is single 
> entry only with an additional optimization for NULL [which just hides 
> the segment cache].
> 

For the 32-bit case, which is the only one that can be changed at all:

Specifically, assuming a sysenter implementation (meaning CS is handled 
ad hoc by the sysenter/sysexit instructions), we have USER_DS, 
KERNEL_DS, and the kernel thread pointer.  If the user and kernel 
segments don't overlap, the user thread pointer gets loaded once per 
exec or task switch, and doesn't change in between.  If they do 
overlap, the user thread pointer has to be reloaded on every system 
call exit.

A nonzero segment load involves a memory reference followed by 
data-dependent traps on that reference, so the amount of reordering the 
CPU can do to hide that latency is limited.  A zero segment load 
doesn't perform the memory reference at all.

Note that a segment cache (a proper cache, not the segment descriptor 
registers that the Intel docs bogusly call a "cache") does *not* save 
the memory reference, since if the descriptor has changed in memory it 
*has* to be honoured; it only allows it to be performed lazily (assume 
the cache is valid, then throw an internal exception and don't commit 
state if the descriptor stored in the cache tag doesn't match the 
descriptor loaded from memory.)

	-hpa

