Message-Id: <20190521214055.31060-6-kan.liang@linux.intel.com>
Date: Tue, 21 May 2019 14:40:51 -0700
From: kan.liang@...ux.intel.com
To: peterz@...radead.org, acme@...nel.org, mingo@...hat.com,
linux-kernel@...r.kernel.org
Cc: tglx@...utronix.de, jolsa@...nel.org, eranian@...gle.com,
alexander.shishkin@...ux.intel.com, ak@...ux.intel.com,
Kan Liang <kan.liang@...ux.intel.com>
Subject: [PATCH 5/9] perf/x86/intel: Set correct weight for TopDown metrics events
From: Andi Kleen <ak@...ux.intel.com>
The TopDown metrics and slots events are mapped to a fixed counter,
but they should have the normal weight for the scheduler.
So special-case them.
Signed-off-by: Andi Kleen <ak@...ux.intel.com>
Signed-off-by: Kan Liang <kan.liang@...ux.intel.com>
---
arch/x86/events/intel/core.c | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index 2eec172765f4..6de9249acb28 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -5281,6 +5281,15 @@ __init int intel_pmu_init(void)
* counter, so do not extend mask to generic counters
*/
for_each_event_constraint(c, x86_pmu.event_constraints) {
+ /*
+ * Don't limit the event mask for TopDown
+ * metrics and slots events.
+ */
+ if (x86_pmu.num_counters_fixed >= 3 &&
+ c->idxmsk64 & INTEL_PMC_MSK_ANY_SLOTS) {
+ c->weight = hweight64(c->idxmsk64);
+ continue;
+ }
if (c->cmask == FIXED_EVENT_FLAGS
&& c->idxmsk64 != INTEL_PMC_MSK_FIXED_REF_CYCLES) {
c->idxmsk64 |= (1ULL << x86_pmu.num_counters) - 1;
--
2.14.5
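
Side note (not part of the patch): the c->weight assignment in the hunk
above is simply the population count of the constraint's counter index
mask. A minimal user-space sketch of that computation follows, assuming
a purely hypothetical bit layout for INTEL_PMC_MSK_ANY_SLOTS, since the
real definition comes from an earlier patch in this series.

#include <stdio.h>
#include <stdint.h>

/* User-space stand-in for the kernel's hweight64(): a population count. */
static int hweight64_sketch(uint64_t w)
{
	return __builtin_popcountll(w);
}

int main(void)
{
	/*
	 * Hypothetical mask standing in for INTEL_PMC_MSK_ANY_SLOTS:
	 * one bit for the fixed SLOTS counter plus four metric bits.
	 * The actual bit positions are defined elsewhere in the series.
	 */
	uint64_t any_slots_msk = (1ULL << 35) | (0xfULL << 48);

	/* The constraint weight is the popcount of the (unextended) mask. */
	printf("constraint weight = %d\n", hweight64_sketch(any_slots_msk));
	return 0;
}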