Message-ID: <1265209951.24455.640.camel@laptop>
Date: Wed, 03 Feb 2010 16:12:31 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Stephane Eranian <eranian@...gle.com>
Cc: Ingo Molnar <mingo@...e.hu>, Paul Mackerras <paulus@...ba.org>,
"Metzger, Markus T" <markus.t.metzger@...el.com>,
lkml <linux-kernel@...r.kernel.org>,
Robert Richter <robert.richter@....com>,
"David S. Miller" <davem@...emloft.net>,
Jamie Iles <jamie.iles@...ochip.com>,
Paul Mundt <lethal@...ux-sh.org>,
Arjan van de Ven <arjan@...radead.org>,
"H. Peter Anvin" <hpa@...or.com>, perfmon2-devel@...ts.sf.net
Subject: Re: [RFC][PATCH] perf_events, x86: PEBS support
On Wed, 2010-02-03 at 15:54 +0100, Stephane Eranian wrote:
>
> PEBS is still very useful because it guarantees the state you capture
> is at retirement of an instruction which caused the event.
>
> PEBS also gets way more interesting on Nehalem because of the
> ability to capture where cache misses occur. That's the load latency
> feature. You need to support that.
Simple things first. But yeah, we'll get to load-latency eventually.
> I believe you would need to abstract this in a generic fashion so it
> could be used on other architectures, such as AMD with IBS.
Right, Robert said he was working on IBS. I've still not made up my mind
on how to represent IBS properly; it's a bit of a weird thing.
> On Nehalem, it requires the following:
>
> - only works if you sample on MEM_INST_RETIRED:LATENCY_ABOVE_THRESHOLD.
Yeah, and then you get to decode the data-source field, which is not
really a nice interface. Also, it mostly contains L3 information, not
L2/L1.
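
For reference, a minimal user-space sketch of what sampling on that event
could look like. This assumes the perf ABI as it later landed in mainline
(precise_ip, PERF_SAMPLE_ADDR), and the raw code 0x100b for
MEM_INST_RETIRED:LATENCY_ABOVE_THRESHOLD is taken from the Nehalem SDM
tables, so treat all of it as illustrative rather than as the patch:

	#include <stdio.h>
	#include <string.h>
	#include <unistd.h>
	#include <sys/syscall.h>
	#include <linux/perf_event.h>

	static long sys_perf_event_open(struct perf_event_attr *attr,
					pid_t pid, int cpu, int group_fd,
					unsigned long flags)
	{
		return syscall(__NR_perf_event_open, attr, pid, cpu,
			       group_fd, flags);
	}

	int main(void)
	{
		struct perf_event_attr attr;
		int fd;

		memset(&attr, 0, sizeof(attr));
		attr.size           = sizeof(attr);
		attr.type           = PERF_TYPE_RAW;
		/* event 0x0b, umask 0x10 -- assumed from the SDM */
		attr.config         = 0x100b;
		attr.sample_period  = 10000;
		/* sample the precise IP plus the data address */
		attr.sample_type    = PERF_SAMPLE_IP | PERF_SAMPLE_ADDR;
		attr.precise_ip     = 2;	/* request PEBS */
		attr.exclude_kernel = 1;

		fd = sys_perf_event_open(&attr, 0, -1, -1, 0);
		if (fd < 0) {
			perror("perf_event_open");
			return 1;
		}
		/* ... mmap() the buffer and consume PERF_RECORD_SAMPLEs ... */
		close(fd);
		return 0;
	}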
> - the threshold must be programmed into a dedicated MSR. The extra
> difficulty is that this MSR is shared between CPUs when HT is on.
Lovely :/ One way is to program it to the lower of the two thresholds and
simply discard the under-threshold events in software afterwards.
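
A minimal sketch of that idea; the sample layout and names here are made
up for illustration, the real PEBS record and threshold-MSR plumbing
would live in the driver:

	#include <stdint.h>
	#include <stdbool.h>

	/* illustrative only, not the real PEBS record layout */
	struct pebs_sample {
		uint64_t latency;	/* load latency reported by PEBS */
	};

	/*
	 * The threshold MSR is shared by both HT siblings, so program
	 * it to the smaller of the two requested thresholds: the
	 * hardware filter must not hide events either sibling wants.
	 */
	static uint64_t effective_threshold(uint64_t a, uint64_t b)
	{
		return a < b ? a : b;
	}

	/*
	 * Then, per event, discard in software the samples that fall
	 * below that event's own (possibly higher) threshold.
	 */
	static bool keep_sample(const struct pebs_sample *s,
				uint64_t event_threshold)
	{
		return s->latency >= event_threshold;
	}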