Message-Id: <20200116182112.20782-1-kan.liang@linux.intel.com>
Date: Thu, 16 Jan 2020 10:21:12 -0800
From: kan.liang@...ux.intel.com
To: peterz@...radead.org, acme@...hat.com, mingo@...nel.org,
linux-kernel@...r.kernel.org
Cc: ak@...ux.intel.com, eranian@...gle.com,
Kan Liang <kan.liang@...ux.intel.com>
Subject: [RESEND PATCH V2] perf/x86/intel: Avoid unnecessary PEBS_ENABLE MSR access in PMI
From: Kan Liang <kan.liang@...ux.intel.com>
The perf PMI handler, intel_pmu_handle_irq(), currently does
unnecessary accesses to the PEBS_ENABLE MSR in
__intel_pmu_enable/disable_all() when PEBS is enabled.

When entering the handler, global ctrl is explicitly disabled. None of
the counters count anymore, so it doesn't matter whether PEBS is
enabled or not while the PMI handler runs.

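For context, the handler brackets its work with a full disable/enable
of global ctrl, roughly like this (a simplified sketch of
intel_pmu_handle_irq(), not the exact upstream code):

	static int intel_pmu_handle_irq(struct pt_regs *regs)
	{
		struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
		int pmu_enabled = cpuc->enabled;
		int handled;

		/* GLOBAL_CTRL is cleared here; nothing counts past this point */
		cpuc->enabled = 0;
		__intel_pmu_disable_all();

		handled = handle_pmi_common(regs, intel_pmu_get_status());

		/* GLOBAL_CTRL is restored on the way out */
		cpuc->enabled = pmu_enabled;
		if (pmu_enabled)
			__intel_pmu_enable_all(0, true);

		return handled;
	}

The PEBS_ENABLE MSR writes that sit inside
__intel_pmu_disable_all()/__intel_pmu_enable_all() are therefore
redundant on this path, and the patch moves them out of it.
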
Furthermore, in most cases cpuc->pebs_enabled is not changed by the
PMI. The PEBS status stays the same, so the PEBS_ENABLE MSR doesn't
need to be rewritten when exiting the handler either.

PMI throttling may change the PEBS status during the PMI handler:
x86_pmu_stop() ends up in intel_pmu_pebs_disable(), which can update
cpuc->pebs_enabled. But intel_pmu_pebs_disable() also updates the
PEBS_ENABLE MSR itself when cpuc->enabled == 1, so there is still no
need to access the MSR again when exiting the handler.

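For reference, the relevant part of intel_pmu_pebs_disable() looks
roughly like this (simplified; the real function also handles large
PEBS draining and the extra load-latency/precise-store enable bits):

	void intel_pmu_pebs_disable(struct perf_event *event)
	{
		struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
		struct hw_perf_event *hwc = &event->hw;

		cpuc->pebs_enabled &= ~(1ULL << hwc->idx);

		/* The MSR is only rewritten while the PMU is enabled */
		if (cpuc->enabled)
			wrmsrl(MSR_IA32_PEBS_ENABLE, cpuc->pebs_enabled);

		hwc->config |= ARCH_PERFMON_EVENTSEL_INT;
	}
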
A PMI may also land after cpuc->enabled has been set to 0 in
x86_pmu_disable(), and PMI throttling may be triggered for that PMI.
In this rare case, intel_pmu_pebs_disable() will not touch the
PEBS_ENABLE MSR. The patch explicitly disables PEBS for this case.

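The window exists because x86_pmu_disable() clears cpuc->enabled
before the hardware is disabled, roughly like this (simplified
sketch):

	static void x86_pmu_disable(struct pmu *pmu)
	{
		struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);

		if (!x86_pmu_initialized() || !cpuc->enabled)
			return;

		cpuc->n_added = 0;
		/* A PMI landing after this point sees cpuc->enabled == 0 */
		cpuc->enabled = 0;
		barrier();

		x86_pmu.disable_all();
	}

A PMI in that window that also throttles an event skips the MSR write
in intel_pmu_pebs_disable(), so the handler has to clear
MSR_IA32_PEBS_ENABLE itself.
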
Using ftrace to measure the duration of intel_pmu_handle_irq() on BDX:

  # perf record -e cycles:P -- ./tchain_edit

Average duration of intel_pmu_handle_irq():

  Without the patch:  1.144 us
  With the patch:     1.025 us

Signed-off-by: Kan Liang <kan.liang@...ux.intel.com>
---
Changes since V1:
- Update description and comments
- Handle a rare case: the PMI may land after cpuc->enabled=0 in
  x86_pmu_disable() and PMI throttling may be triggered for that PMI.

arch/x86/events/intel/core.c | 22 +++++++++++++++++++---
1 file changed, 19 insertions(+), 3 deletions(-)
diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index bc6468329c52..18e1132c0fd7 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -1963,6 +1963,14 @@ static __initconst const u64 knl_hw_cache_extra_regs
  * intel_bts events don't coexist with intel PMU's BTS events because of
  * x86_add_exclusive(x86_lbr_exclusive_lbr); there's no need to keep them
  * disabled around intel PMU's event batching etc, only inside the PMI handler.
+ *
+ * Avoid PEBS_ENABLE MSR access in PMIs.
+ * GLOBAL_CTRL has been disabled, so none of the counters count anymore.
+ * It doesn't matter whether PEBS is enabled or not.
+ * Usually, the PEBS status is not changed in PMIs, so it's unnecessary
+ * to access the PEBS_ENABLE MSR in disable_all()/enable_all().
+ * However, some cases, e.g. PMI throttling, may change the PEBS status.
+ * PEBS_ENABLE should be updated where the status changes.
  */
 static void __intel_pmu_disable_all(void)
 {
@@ -1972,13 +1980,12 @@ static void __intel_pmu_disable_all(void)
 
 	if (test_bit(INTEL_PMC_IDX_FIXED_BTS, cpuc->active_mask))
 		intel_pmu_disable_bts();
-
-	intel_pmu_pebs_disable_all();
 }
 
 static void intel_pmu_disable_all(void)
 {
 	__intel_pmu_disable_all();
+	intel_pmu_pebs_disable_all();
 	intel_pmu_lbr_disable_all();
 }
 
@@ -1986,7 +1993,6 @@ static void __intel_pmu_enable_all(int added, bool pmi)
 {
 	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
 
-	intel_pmu_pebs_enable_all();
 	intel_pmu_lbr_enable_all(pmi);
 	wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL,
 	       x86_pmu.intel_ctrl & ~cpuc->intel_ctrl_guest_mask);
@@ -2004,6 +2010,7 @@ static void __intel_pmu_enable_all(int added, bool pmi)
 
 static void intel_pmu_enable_all(int added)
 {
+	intel_pmu_pebs_enable_all();
 	__intel_pmu_enable_all(added, false);
 }
 
@@ -2620,6 +2627,15 @@ static int handle_pmi_common(struct pt_regs *regs, u64 status)
 		handled++;
 		x86_pmu.drain_pebs(regs);
 		status &= x86_pmu.intel_ctrl | GLOBAL_STATUS_TRACE_TOPAPMI;
+
+		/*
+		 * A PMI may land after cpuc->enabled=0 in x86_pmu_disable()
+		 * and PMI throttling may be triggered for it.
+		 * In this rare case, intel_pmu_pebs_disable() will not touch
+		 * MSR_IA32_PEBS_ENABLE. Explicitly disable PEBS here.
+		 */
+		if (unlikely(!cpuc->enabled && !cpuc->pebs_enabled))
+			wrmsrl(MSR_IA32_PEBS_ENABLE, 0);
 	}
 
 	/*
--
2.17.1