Message-Id: <1341261646-3171-3-git-send-email-andi@firstfloor.org>
Date: Mon, 2 Jul 2012 13:40:43 -0700
From: Andi Kleen <andi@...stfloor.org>
To: a.p.zijlstra@...llo.nl
Cc: x86@...nel.org, eranian@...gle.com, linux-kernel@...r.kernel.org,
Andi Kleen <ak@...ux.intel.com>
Subject: [PATCH 2/5] perf, x86: Enable PDIR precise instruction profiling on IvyBridge
From: Andi Kleen <ak@...ux.intel.com>
Even with precise profiling, Intel CPUs have a "skid": the sample
triggers a few cycles after the instruction that caused it, so in some
cases there are systematic errors where expensive instructions never
show up in the profile.

Sandy Bridge added a new PDIR (precise distribution of instructions
retired) event that randomizes the sampling slightly. This corrects for
the systematic error, so in most cases the profile hits land on the
correct instruction.

Unfortunately the Sandy Bridge version only worked on an otherwise
quiescent CPU and was difficult to use. On IvyBridge this restriction
is gone, so it can be used more widely.

This only works for retired instructions. I enabled it -- somewhat
arbitrarily -- for two 'p's or more. To use it:

perf record -e instructions:pp ...

This provides a more precise alternative to the usual cycles:pp;
however, it will not account for expensive instructions.
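
Below is a minimal user-space sketch (illustration only, not part of
this patch) of requesting the same precision through perf_event_open(2):
opening a plain instructions event with precise_ip >= 2 is what triggers
the PDIR substitution in pdir_hw_config() below. The helper name and the
sample period are arbitrary, and error handling is omitted.

#include <linux/perf_event.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Sketch: open an instructions sampling event at the :pp precision level. */
static int open_precise_instructions(void)
{
        struct perf_event_attr attr;

        memset(&attr, 0, sizeof(attr));
        attr.size = sizeof(attr);
        attr.type = PERF_TYPE_HARDWARE;
        attr.config = PERF_COUNT_HW_INSTRUCTIONS;
        attr.sample_type = PERF_SAMPLE_IP;
        attr.sample_period = 100003;    /* arbitrary period */
        attr.precise_ip = 2;            /* two 'p's: use PDIR when available */
        attr.exclude_kernel = 1;

        /* profile the calling thread on any CPU */
        return syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
}
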
Signed-off-by: Andi Kleen <ak@...ux.intel.com>
---
arch/x86/kernel/cpu/perf_event_intel.c | 25 ++++++++++++++++++++++++-
1 files changed, 24 insertions(+), 1 deletions(-)
diff --git a/arch/x86/kernel/cpu/perf_event_intel.c b/arch/x86/kernel/cpu/perf_event_intel.c
index f060dd5..2c045c8 100644
--- a/arch/x86/kernel/cpu/perf_event_intel.c
+++ b/arch/x86/kernel/cpu/perf_event_intel.c
@@ -1425,6 +1425,29 @@ static int intel_pmu_hw_config(struct perf_event *event)
 	return 0;
 }
 
+static int pdir_hw_config(struct perf_event *event)
+{
+	int err = intel_pmu_hw_config(event);
+
+	if (err)
+		return err;
+
+	/*
+	 * Use the PDIR instruction retired counter for two 'p's.
+	 * This will randomize samples slightly and avoid some systematic
+	 * measurement errors.
+	 * Only works for retired instructions.
+	 */
+	if (event->attr.precise_ip >= 2 &&
+	    (event->hw.config & X86_RAW_EVENT_MASK) == 0xc0) {
+		u64 pdir_event = X86_CONFIG(.event=0xc0, .umask=1);
+		event->hw.config = pdir_event |
+			(event->hw.config & ~X86_RAW_EVENT_MASK);
+	}
+
+	return 0;
+}
+
 struct perf_guest_switch_msr *perf_guest_get_msrs(int *nr)
 {
 	if (x86_pmu.guest_get_msrs)
@@ -1956,7 +1979,7 @@ __init int intel_pmu_init(void)
 		/* UOPS_DISPATCHED.THREAD,c=1,i=1 to count stall cycles*/
 		intel_perfmon_event_map[PERF_COUNT_HW_STALLED_CYCLES_BACKEND] =
 			X86_CONFIG(.event=0xb1, .umask=0x01, .inv=1, .cmask=1);
-
+		x86_pmu.hw_config = pdir_hw_config;
 		pr_cont("IvyBridge events, ");
--
1.7.7.6