Date:	Mon, 14 Oct 2013 14:28:40 -0700
From:	Andi Kleen <ak@...ux.intel.com>
To:	Don Zickus <dzickus@...hat.com>
Cc:	dave.hansen@...ux.intel.com, a.p.zijlstra@...llo.nl,
	eranian@...gle.com, jmario@...hat.com, linux-kernel@...r.kernel.org
Subject: Re: x86, perf: throttling issues with long nmi latencies

On Mon, Oct 14, 2013 at 04:35:49PM -0400, Don Zickus wrote:
> I have been playing with quad-socket Ivy Bridges for a while and have seen
> numerous "perf samples too long" messages, to the point that the machine is
> unusable for any perf analysis.

We've seen the same problem on our large systems. Dave 
did some fixes in mainline, but they only work around the problem.

One main cause, I believe, is the dynamic period, which often
goes down to insanely low values for cycles.

This also causes a lot of measurement overhead, without really giving better
data.

If you use -c ... with a reasonable period, the problem goes away
completely (with pmu-tools, 'ocperf stat -c default' sets a reasonable default).
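For example, with a made-up period value and a placeholder workload name:

    perf record -e cycles -c 2000003 -- ./my_workload   # fixed sample period
    perf record -e cycles -F 1000 -- ./my_workload      # or a fixed sample frequency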

> So I tried to investigate the source of the NMI latencies using the
> traditional rdtscll() macro.  That failed miserably.  Then it was
> pointed out to me that rdtscll() is terrible for benchmarking due to
> out-of-order execution on Intel processors.  This Intel whitepaper
> describes a better way using cpuid and rdtsc:
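For reference, the pattern that whitepaper describes is roughly the
following user-space sketch (timed_region and fn are just placeholders,
not anything we actually ran):

    #include <stdint.h>
    #include <x86intrin.h>   /* __rdtsc, __rdtscp */
    #include <cpuid.h>       /* __cpuid */

    /* CPUID serializes before the first RDTSC; RDTSCP waits for all
     * preceding instructions, and the trailing CPUID keeps later code
     * from being reordered into the measured region. */
    static uint64_t timed_region(void (*fn)(void))
    {
            unsigned int a, b, c, d, aux;
            uint64_t start, end;

            __cpuid(0, a, b, c, d);
            start = __rdtsc();

            fn();                       /* code under test */

            end = __rdtscp(&aux);
            __cpuid(0, a, b, c, d);

            return end - start;
    }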

We just used the ftrace function tracer.
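Roughly like this (the graph variant gives per-call durations; paths assume
debugfs mounted at /sys/kernel/debug, and the filter is just an example):

    cd /sys/kernel/debug/tracing
    echo intel_pmu_handle_irq > set_graph_function
    echo function_graph > current_tracer
    echo 1 > tracing_on
    # run the perf workload, then look at per-call durations:
    cat trace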

> the longest one first.  It seems to be 'copy_from_user_nmi'
> 
> intel_pmu_handle_irq ->
> 	intel_pmu_drain_pebs_nhm ->
> 		__intel_pmu_drain_pebs_nhm ->
> 			__intel_pmu_pebs_event ->
> 				intel_pmu_pebs_fixup_ip ->
> 					copy_from_user_nmi
> 
> In intel_pmu_pebs_fixup_ip(), if the while-loop goes over 50, the sum of
> all the copy_from_user_nmi latencies seems to go over 1,000,000 cycles

fixup_ip has to decode a whole basic block to correct the off-by-one IP.
I'm not sure why the copy dominates, though; copy_from_user_nmi
does do a lot of nasty things.

I would just use :p, which skips this. The single-instruction correction
is not worth all the overhead, and there is always more skid anyway,
even with the correction.

The good news is that Haswell fixes the overhead: :pp is as fast as :p.
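That is, something like (period again only illustrative):

    perf record -e cycles:pp -c 2000003 -- ./my_workload   # with the IP fixup (slow pre-Haswell)
    perf record -e cycles:p  -c 2000003 -- ./my_workload   # skips intel_pmu_pebs_fixup_ip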

> (there are some cases where only 10 iterations are needed to go that high
> too, but in general over 50 or so).  At this point copy_from_user_nmi
> seems to account for over 90% of the NMI latency.

Yes, I saw the same. It's unclear why it is that expensive.
I've also seen the copy dominate with -g (call graph collection).

Also, for some reason it seems to hurt much more on larger systems
(cache misses?). Unfortunately it's hard to use perf to analyze
perf; that was the roadblock the last time I tried to understand this better.

One guess was that if you profile the same code running on many
cores, the copy_from_user_nmi path will keep hitting a very hot cache line
holding the page reference count.

Some obvious improvements are likely possible:

The copy function is pretty dumb -- for example, it re-pins the pages
for each access. It would likely be much faster to batch that
and only pin once per backtrace/decode. This would need
a new interface.
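A possible shape for it -- every name below is invented for illustration,
nothing like this exists today:

    /* Hypothetical: pin the user pages covering one region once, do any
     * number of copies against the cached pages from NMI context, and
     * drop the pins when the backtrace/decode for this sample is done. */
    struct nmi_user_window;

    int  nmi_user_window_pin(struct nmi_user_window *w,
                             const void __user *addr, unsigned long len);
    long copy_from_user_window(struct nmi_user_window *w, void *dst,
                               const void __user *src, unsigned long len);
    void nmi_user_window_unpin(struct nmi_user_window *w);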

I suppose there would be a way to do this access without actually
incrementing the ref count (e.g. with a seqlock-like scheme,
or just using TSX).
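The seqlock-like variant would follow the usual read/retry pattern, roughly
like this (pure sketch: mm_unmap_seq is an invented per-mm seqcount, and
how the unmap side would bump it is completely hand-waved):

    unsigned int seq;
    unsigned long left;

    do {
            seq = read_seqcount_begin(&mm->mm_unmap_seq);  /* invented field */
            pagefault_disable();
            left = __copy_from_user_inatomic(to, from, n); /* no page pinning */
            pagefault_enable();
    } while (read_seqcount_retry(&mm->mm_unmap_seq, seq));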

But if you don't do the IP correction and only do the stack access,
in theory it should be possible to avoid the majority of the changes.

First-level recommendations:

- Always use -c ... or -F ..., NEVER a dynamic period.
- Don't use :pp.

-Andi

