Message-ID: <20180324110534.t52m5gvn4r7kvmnj@gmail.com>
Date:   Sat, 24 Mar 2018 12:05:34 +0100
From:   Ingo Molnar <mingo@...nel.org>
To:     Dave Hansen <dave.hansen@...ux.intel.com>
Cc:     Linus Torvalds <torvalds@...ux-foundation.org>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        linux-mm <linux-mm@...ck.org>,
        Andrea Arcangeli <aarcange@...hat.com>,
        Andrew Lutomirski <luto@...nel.org>,
        Kees Cook <keescook@...gle.com>,
        Hugh Dickins <hughd@...gle.com>,
        Jürgen Groß <jgross@...e.com>,
        the arch/x86 maintainers <x86@...nel.org>, namit@...are.com
Subject: Re: [PATCH 00/11] Use global pages with PTI


* Dave Hansen <dave.hansen@...ux.intel.com> wrote:

> This is the time for a modestly-sized kernel compile on a 4-core Skylake
> desktop.
> 
>                         User Time       Kernel Time     Clock Elapsed
> Baseline ( 0 GLB PTEs)  803.79          67.77           237.30
> w/series (28 GLB PTEs)  807.70 (+0.7%)  68.07 (+0.7%)   238.07 (+0.3%)
> 
> Without PCIDs, it behaves the way I would expect.
>
> I'll ask around, but I'm open to any ideas about what the heck might be
> causing this.

Hm, so it's a bit weird that while user time and kernel time both increased by 
about 0.7%, elapsed time only increased by 0.3%? Kernel builds are typically 
parallel enough that elapsed time should scale with user+kernel time, so maybe 
there's some noise in the measurement?
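
( As a back-of-the-envelope check of the parallelism implied by the baseline 
  numbers above - pure arithmetic, nothing system specific:

    awk 'BEGIN { printf "%.2fx\n", (803.79 + 67.77) / 237.30 }'   # ~3.67x

  With ~3.7x average parallelism a uniform +0.7% increase in both user and 
  kernel time should show up as roughly +0.7% in elapsed time as well, which 
  is what makes the +0.3% look odd. )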

Before spending too much time on the global-TLB patch angle I'd suggest investing 
a bit of time into making sure that the regression you are seeing is actually 
real:

You haven't described how you measured the kernel build times, and the "+0.7% 
regression" might turn out to be the real number, but sub-1% accuracy in kernel 
build times is *awfully* susceptible to:

 - various sources of noise

 - systematic statistical errors which don't show up as 
   measurement-to-measurement noise but which skew the results:
   such as the boot-to-boot memory layout of the source code and
   object files.

 - cpufreq artifacts
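
( A quick way to spot such cpufreq artifacts is to watch the actual core 
  frequencies while a build is running - assuming the usual /proc/cpuinfo 
  format, something like:

    watch -n1 'grep "cpu MHz" /proc/cpuinfo | sort | uniq -c'

  makes any frequency scaling during the build immediately visible. )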

Even repeated builds with 'make clean' in between can be misleading, because the 
exact layout of key include files and binaries which get accessed the most often 
during a build are set into stone once they've been read into the page cache for 
the first time after bootup. Automated reboots between measurements can be 
misleading as well, if the file layout after bootup is too deterministic.

So here's a pretty reliable way to measure kernel build time, which tries to avoid 
the various pitfalls of caching.

First I make sure that cpufreq is set to 'performance':

  for ((cpu=0; cpu<120; cpu++)); do
    G=/sys/devices/system/cpu/cpu$cpu/cpufreq/scaling_governor
    [ -f $G ] && echo performance > $G
  done
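
( A quick way to double-check that every online CPU really got switched over:

    cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor | sort | uniq -c

  ... which should print a single 'performance' line. )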

[ ... because it can be *really* annoying to discover that an ostensible 
  performance regression was a cpufreq artifact ... again. ;-) ]

Then I copy a kernel tree to /tmp (ramfs) as root:

	cd /tmp
	rm -rf linux
	git clone ~/linux linux
	cd linux
	make defconfig >/dev/null
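
( The whole point of /tmp here is that it's RAM-backed - if in doubt:

	stat -f -c %T /tmp

  should report 'tmpfs' or 'ramfs' on such a setup. )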
	
... and then we can build the kernel in such a loop (as root again):

  perf stat --repeat 10 --null --pre			'\
	cp -a kernel ../kernel.copy.$(date +%s);	 \
	rm -rf *;					 \
	git checkout .;					 \
	echo 1 > /proc/sys/vm/drop_caches;		 \
	find ../kernel* -type f | xargs cat >/dev/null;  \
	make -j kernel >/dev/null;			 \
	make clean >/dev/null 2>&1;			 \
	sync						'\
							 \
	make -j16 >/dev/null

( I have tested these by pasting them into a terminal. Adjust the ~/linux source 
  git tree and the '-j16' to your system. )

Notes:

 - the 'pre' script portion is not timed by 'perf stat', only the raw build times

 - we flush all caches via drop_caches and re-establish everything again, but:

 - we also introduce an intentional memory leak by slowly filling up ramfs with 
   copies of 'kernel/', thus continuously changing the layout of free memory and 
   of cached data such as the compiler binaries and the source code hierarchy. 
   (Note that the leak is about 8MB per iteration, so it isn't massive.)
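
( The kernel.copy.* directories exist only to perturb the memory layout; once 
  the measurement is done they can simply be removed again from within the 
  linux/ tree:

    rm -rf ../kernel.copy.*

  ... which also keeps ramfs from filling up across repeated experiments. )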

With 10 iterations this is the statistical stability I get on a big box:

 Performance counter stats for 'make -j128 kernel' (10 runs):

      26.346436425 seconds time elapsed    (+- 0.19%)

... which, despite a high iteration count of 10, is still surprisingly noisy, 
right?

A 0.2% stddev is probably not enough to call a 0.7% regression with good 
confidence, so I had to use *30* iterations to get the measurement noise about an 
order of magnitude lower than the effect I'm trying to measure:

 Performance counter stats for 'make -j128' (30 runs):

      26.334767571 seconds time elapsed    (+- 0.09% )

i.e. "26.334 +- 0.023" seconds is a number we can have pretty high confidence in, 
on this system.
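
( To put that in perspective, here's the rough significance arithmetic, assuming 
  a patched-kernel run with the same ~0.09% relative stddev - the +0.7% number 
  is only illustrative:

    awk 'BEGIN {
	base  = 26.334; err = base * 0.0009;	# +- 0.09% stddev per 30-run average
	test  = base * 1.007;			# hypothetical +0.7% regression
	noise = sqrt(2 * err * err);		# combined noise of the two averages
	printf "delta = %.3f s, noise = %.3f s, ~%.1f sigma\n",
		test - base, noise, (test - base) / noise
    }'

  i.e. a real +0.7% effect would stand out at roughly 5-6 sigma above the 
  measurement noise, which is the kind of margin you want before blaming the 
  patches. )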

And just to demonstrate that it's all real, I repeated the whole 30-iteration 
measurement again:

 Performance counter stats for 'make -j128' (30 runs):

      26.311166142 seconds time elapsed    (+- 0.07%)

Even if in the end you get a similar result, close to the +0.7% overhead you 
already measured, we'd then have much more confidence in blaming global TLBs for 
the performance regression.

But YMMV.

Thanks,

	Ingo

[*] Note that even this doesn't eliminate certain sources of measurement error: 
    such as the boot-to-boot variance in the layout of certain key kernel data
    structures - but kernel builds are mostly user-space dominated, so drop_caches 
    should be good enough.
