Message-ID: <20150401132103.GB13492@gmail.com>
Date:	Wed, 1 Apr 2015 15:21:03 +0200
From:	Ingo Molnar <mingo@...nel.org>
To:	Denys Vlasenko <dvlasenk@...hat.com>
Cc:	Linus Torvalds <torvalds@...ux-foundation.org>,
	Steven Rostedt <rostedt@...dmis.org>,
	Borislav Petkov <bp@...en8.de>,
	"H. Peter Anvin" <hpa@...or.com>,
	Andy Lutomirski <luto@...capital.net>,
	Oleg Nesterov <oleg@...hat.com>,
	Frederic Weisbecker <fweisbec@...il.com>,
	Alexei Starovoitov <ast@...mgrid.com>,
	Will Drewry <wad@...omium.org>,
	Kees Cook <keescook@...omium.org>, x86@...nel.org,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/9] x86/asm/entry/32: Use PUSH instructions to build
 pt_regs on stack


* Denys Vlasenko <dvlasenk@...hat.com> wrote:

> On 04/01/2015 10:51 AM, Ingo Molnar wrote:
> > 
> > * Denys Vlasenko <dvlasenk@...hat.com> wrote:
> > 
> >> This mimics the recent similar 64-bit change.
> >> Saves ~110 bytes of code.
> >>
> >> The patch was run-tested on 32-bit and 64-bit kernels, on Intel and AMD CPUs.
> >> I also looked at the diff of entry_64.o disassembly, to have
> >> a different view of the changes.
> > 
> > The other important question would be: what performance difference (if 
> > any) did you observe before/after the change?
> 
> I did not measure it then.
> 
> At the moment I don't have any AMD CPUs here, so I can't benchmark
> the 32-bit syscall-based codepath.
> 
> On a Sandy Bridge CPU (IOW: sysenter codepath) -
> 
> Before: 78.57 ns per getpid
> After:  76.90 ns per getpid
> 
> It's better than I thought it would be.
> Probably because this load:
> 
> movl	ASM_THREAD_INFO(TI_sysenter_return, %rsp, 0), %r10d
> 
> has been moved up by the patch (happens sooner).

There's also less I$ used, and in straight-line, contiguous stretches, 
which should result in fewer cache misses in the very common "the 
kernel's code is cache cold" situation that syscall entry operates 
under - and that's not captured by your benchmark.

So it's a good change.

Thanks,

	Ingo
