Date:	Thu, 09 May 2013 16:29:16 -0700
From:	Dave Hansen <dave@...1.net>
To:	Andi Kleen <ak@...ux.intel.com>,
	Peter Zijlstra <peterz@...radead.org>,
	LKML <linux-kernel@...r.kernel.org>, eranian@...gle.com
Subject: NMIs induced by 'perf top' hogging all CPU time

If I boot a recent kernel (bb9055b) and run 'perf top' on my machine, it
hangs.  It is 100% reproducible; it happens every single time.  If I'm
lucky, I'll get some of the hardlockup detection messages on the console.

A little bit of digging showed why it hangs: an average x86 PMU NMI
takes around 0.5ms (and *up* to about 1ms) to complete, and we're
handling thousands of them a second.  IOW, we're spending
virtually all the CPU time just processing perf events in NMI context.
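
To put rough numbers on that (taking 2,000 NMIs/sec as a conservative
stand-in for "thousands a second"), there is simply nothing left over:

	0.5 ms/NMI * 2,000 NMIs/sec = 1,000 ms of NMI time per second, per CPU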

When we get a bunch of these in a row, and they happen to occur while
running an hrtimer, the hrtimer code notices and complains:

	[ 1623.552970] hrtimer: interrupt took 20,981,718 ns

My personal "best" is:

	[  273.247983] hrtimer: interrupt took 1,015,221,558 ns

My suspicion is that my particular machine is just really slow at poking
the PMU counters for some reason.  It could be its NUMA properties or
something even more subtle.  But, the result is still that it falls flat
on its face with the defaults.  Passing "-c" to perf to manually raise
the sample period works around this, but it's _awfully_ easy to bring
down a system this way.
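
For reference, the workaround looks something like this (the period
value here is arbitrary, just big enough to keep the NMI rate sane):

	perf top -c 1000000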

Any ideas what we can do here?  I'm planning on trying to track down the
source of _why_ the NMIs are so slow, but it does seem like we should be
able to notice and back off on the NMI rate when we spend so much time
in there.
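
To make the "notice and back off" idea concrete, here's a toy userspace
model of the feedback loop I have in mind (the budget, the names, and
the starting numbers are all invented for illustration, not anything
that exists in the tree today):

	/* Toy model: cut the sample rate until NMI time fits in a budget. */
	#include <stdio.h>

	#define NSEC_PER_SEC	1000000000ULL
	#define NMI_BUDGET_PCT	25ULL	/* invented: max 25% of CPU time in NMIs */

	int main(void)
	{
		unsigned long long sample_rate = 100000; /* samples/sec, invented   */
		unsigned long long nmi_cost_ns = 500000; /* ~0.5ms/NMI, as observed */

		/* Keep halving the rate until the NMI time fits in the budget. */
		while (sample_rate * nmi_cost_ns > NSEC_PER_SEC * NMI_BUDGET_PCT / 100) {
			sample_rate /= 2;
			printf("NMIs too expensive, dropping sample rate to %llu/sec\n",
			       sample_rate);
		}
		return 0;
	}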

Andi suggested collecting some data about how the kernel was tuning the
sample period over time:

	https://www.sr71.net/~dave/intel/perf-hangs-201305/f1.html

There is one obviously goofy-looking point in there where the period
got set to 96!  We'd get another NMI 96 cycles after we exit the current
one!
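
I haven't confirmed where that 96 comes from, but a naive
frequency-driven recalculation could produce it: if the measurement
window is mostly eaten by NMI processing, the observed event rate looks
tiny, and dividing it by the target sample frequency gives an absurdly
small period.  With invented numbers, purely to illustrate:

	384,000 events/sec observed / 4,000 samples/sec target = period of 96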
