Date:	Sat, 4 Apr 2015 08:42:55 +0200
From:	Ingo Molnar <mingo@...nel.org>
To:	"H. Peter Anvin" <hpa@...or.com>
Cc:	Linus Torvalds <torvalds@...ux-foundation.org>,
	Denys Vlasenko <dvlasenk@...hat.com>,
	Steven Rostedt <rostedt@...dmis.org>,
	Borislav Petkov <bp@...en8.de>,
	Andy Lutomirski <luto@...capital.net>,
	Oleg Nesterov <oleg@...hat.com>,
	Frederic Weisbecker <fweisbec@...il.com>,
	Alexei Starovoitov <ast@...mgrid.com>,
	Will Drewry <wad@...omium.org>,
	Kees Cook <keescook@...omium.org>,
	the arch/x86 maintainers <x86@...nel.org>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] x86/asm/entry/64: pack interrupt dispatch table tighter


* H. Peter Anvin <hpa@...or.com> wrote:

> On 04/03/2015 11:37 AM, Linus Torvalds wrote:
> > On Fri, Apr 3, 2015 at 11:35 AM, H. Peter Anvin <hpa@...or.com> wrote:
> >>
> >> For the record, I actually measured the impact of the jump-to-jump when
> >> I wrote it.  It has a small, *but measurable*, positive impact.
> > 
> > What did you compare against, and how did you measure that? I don't
> > see how it could *possibly* be faster than just a simple aligned "push
> > + jmp".
> > 
> 
> I wish I remembered the exact details; it took a fair bit of gathering
> numbers as the spread was quite a bit wider than the delta, but in the
> end there were two distribution peaks clearly offset.
> 
> I seem to remember it involving a loop running RDTSC continuously and
> another RDTSC in the interrupt path.
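
A minimal user-space sketch of the kind of RDTSC-gap loop described
above (the bucket size, iteration count and threshold are my own
assumptions, not hpa's actual harness; the second RDTSC on the kernel
interrupt path is not shown):

#include <stdio.h>
#include <stdint.h>

static inline uint64_t rdtsc(void)
{
	uint32_t lo, hi;

	asm volatile("rdtsc" : "=a" (lo), "=d" (hi));
	return ((uint64_t)hi << 32) | lo;
}

int main(void)
{
	enum { BUCKETS = 64, BUCKET_CYCLES = 64 };
	unsigned long hist[BUCKETS] = { 0 };
	uint64_t prev = rdtsc();
	long i;

	for (i = 0; i < 100000000L; i++) {
		uint64_t now = rdtsc();
		uint64_t delta = now - prev;
		unsigned int b;

		prev = now;

		/*
		 * Gaps much larger than the cost of back-to-back RDTSC
		 * are (mostly) interrupts; bucket them coarsely so the
		 * distribution peaks can be compared across kernels.
		 */
		b = delta / BUCKET_CYCLES;
		if (b >= BUCKETS)
			b = BUCKETS - 1;
		hist[b]++;
	}

	for (i = 0; i < BUCKETS; i++)
		if (hist[i])
			printf("%5ld+ cycles: %lu\n",
			       i * BUCKET_CYCLES, hist[i]);
	return 0;
}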

So the thing is, while I don't know how you loaded the machine, if 
user-space is doing nothing but looping on RDTSC, the kernel I$ can 
easily become cache-hot not just in the L2 but in the L1 cache as 
well.

But 'when the machine is not doing anything' is not what we optimize 
for; we (try to) optimize for the case where there's a lot of work 
going on and the async context's (irq, fault, etc.) I$ is likely 
cache-cold.
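
One way such a benchmark can approximate that cache-cold case (a
sketch of the idea only, not anything from the measurement above) is
to execute a large, cold instruction footprint between samples, so
that the kernel's entry/dispatch code has been evicted from the L1i
(and ideally the L2) before the next interrupt arrives:

#define _GNU_SOURCE
#include <stdint.h>
#include <string.h>
#include <sys/mman.h>

#define FOOTPRINT	(4 * 1024 * 1024)	/* well beyond typical L2 size */

/* Build one huge NOP-sled "function" ending in RET. */
static uint8_t *make_nop_sled(void)
{
	uint8_t *buf = mmap(NULL, FOOTPRINT,
			    PROT_READ | PROT_WRITE | PROT_EXEC,
			    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (buf == MAP_FAILED)
		return NULL;
	memset(buf, 0x90, FOOTPRINT - 1);	/* NOP */
	buf[FOOTPRINT - 1] = 0xc3;		/* RET */
	return buf;
}

/*
 * Executing ~4MB of instructions evicts the interrupt entry path
 * from the instruction caches.
 */
static void thrash_icache(uint8_t *sled)
{
	((void (*)(void))sled)();
}

int main(void)
{
	uint8_t *sled = make_nop_sled();

	if (!sled)
		return 1;
	/* In the loop above this would sit between the RDTSC samples. */
	thrash_icache(sled);
	return 0;
}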

Thanks,

	Ingo