Message-Id: <1617635467-181510-8-git-send-email-kan.liang@linux.intel.com>
Date: Mon, 5 Apr 2021 08:10:49 -0700
From: kan.liang@...ux.intel.com
To: peterz@...radead.org, mingo@...nel.org,
linux-kernel@...r.kernel.org
Cc: acme@...nel.org, tglx@...utronix.de, bp@...en8.de,
namhyung@...nel.org, jolsa@...hat.com, ak@...ux.intel.com,
yao.jin@...ux.intel.com, alexander.shishkin@...ux.intel.com,
adrian.hunter@...el.com, ricardo.neri-calderon@...ux.intel.com,
Kan Liang <kan.liang@...ux.intel.com>
Subject: [PATCH V5 07/25] perf/x86: Hybrid PMU support for unconstrained

From: Kan Liang <kan.liang@...ux.intel.com>

The unconstrained value depends on the number of general-purpose (GP) and
fixed counters. Each hybrid PMU should therefore use its own unconstrained
constraint.
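
For illustration only, a hedged sketch (not part of this patch) of how the
per-PMU unconstrained constraint could be filled in from that PMU's own
counter counts, mirroring how the global 'unconstrained' is built from
x86_pmu.num_counters. The helper name and its call site are assumptions:

  /*
   * Assumed helper (illustrative, not in this patch): derive the hybrid
   * PMU's own "unconstrained" constraint from its GP counter count,
   * reusing the __EVENT_CONSTRAINT() macro from perf_event.h.
   */
  static void init_hybrid_unconstrained(struct x86_hybrid_pmu *pmu)
  {
          pmu->unconstrained = (struct event_constraint)
                  __EVENT_CONSTRAINT(0, (1ULL << pmu->num_counters) - 1,
                                     0, pmu->num_counters, 0, 0);
  }
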
Suggested-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Signed-off-by: Kan Liang <kan.liang@...ux.intel.com>
---
 arch/x86/events/intel/core.c | 5 ++++-
 arch/x86/events/perf_event.h | 1 +
 2 files changed, 5 insertions(+), 1 deletion(-)

diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index 33d26ed..39f57ae 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -3147,7 +3147,10 @@ x86_get_event_constraints(struct cpu_hw_events *cpuc, int idx,
 		}
 	}
 
-	return &unconstrained;
+	if (!is_hybrid() || !cpuc->pmu)
+		return &unconstrained;
+
+	return &hybrid_pmu(cpuc->pmu)->unconstrained;
 }
 
 static struct event_constraint *
diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
index 993f0de..cfb2da0 100644
--- a/arch/x86/events/perf_event.h
+++ b/arch/x86/events/perf_event.h
@@ -639,6 +639,7 @@ struct x86_hybrid_pmu {
 	int				max_pebs_events;
 	int				num_counters;
 	int				num_counters_fixed;
+	struct event_constraint		unconstrained;
 };
 
 static __always_inline struct x86_hybrid_pmu *hybrid_pmu(struct pmu *pmu)
--
2.7.4