Message-Id: <1548106951-4811-4-git-send-email-kan.liang@linux.intel.com>
Date: Mon, 21 Jan 2019 13:42:30 -0800
From: kan.liang@...ux.intel.com
To: x86@...nel.org, linux-kernel@...r.kernel.org, tglx@...utronix.de,
bp@...en8.de, peterz@...radead.org, mingo@...hat.com
Cc: ak@...ux.intel.com, eranian@...gle.com,
Kan Liang <kan.liang@...ux.intel.com>
Subject: [PATCH V6 4/5] perf/x86/intel: Clean up counter freezing quirk
From: Kan Liang <kan.liang@...ux.intel.com>
Clean up the counter freezing quirk to use the new facility to check
for minimum microcode revisions.

Rename the counter freezing quirk related functions, because other
platforms, e.g. Goldmont, also need to call the quirk.

Only check the boot CPU, assuming models and features are consistent
across all CPUs.
Signed-off-by: Kan Liang <kan.liang@...ux.intel.com>
---
Changes since V5:
- Small changes in commit message
- Apply the new name
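
For reviewers, a minimal sketch of how the table-based check is expected
to behave for this quirk. This is an approximation for illustration
only: the real helper x86_cpu_has_min_microcode_rev() is added earlier
in this series, and the function name boot_cpu_has_min_ucode() below is
hypothetical, not part of the patch.

    /*
     * Illustrative approximation (not the actual helper): match the
     * boot CPU's vendor/family/model/stepping against the table and
     * require its microcode revision to be at least the listed
     * minimum.  A stepping that is not in the table does not match,
     * so the quirk treats it as broken, just like the old UINT_MAX
     * default did.
     */
    static bool boot_cpu_has_min_ucode(const struct x86_cpu_desc *table)
    {
            const struct cpuinfo_x86 *c = &boot_cpu_data;
            const struct x86_cpu_desc *m;

            for (m = table; m->x86_family; m++) {
                    if (c->x86_vendor != m->x86_vendor ||
                        c->x86 != m->x86_family ||
                        c->x86_model != m->x86_model ||
                        c->x86_stepping != m->x86_stepping)
                            continue;
                    return c->microcode >= m->x86_microcode_rev;
            }
            return false;
    }
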
arch/x86/events/intel/core.c | 27 +++++++++++----------------
1 file changed, 11 insertions(+), 16 deletions(-)
diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index a3c7845..bad19a9 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -3917,23 +3917,18 @@ static __init void intel_nehalem_quirk(void)
}
}
-static bool intel_glp_counter_freezing_broken(int cpu)
-{
- u32 rev = UINT_MAX; /* default to broken for unknown stepping */
-
- switch (cpu_data(cpu).x86_stepping) {
- case 1:
- rev = 0x28;
- break;
- case 8:
- rev = 0x6;
- break;
- }
+static const struct x86_cpu_desc counter_freezing_ucodes[] = {
+ INTEL_CPU_DESC(INTEL_FAM6_ATOM_GOLDMONT_PLUS, 1, 0x00000028),
+ INTEL_CPU_DESC(INTEL_FAM6_ATOM_GOLDMONT_PLUS, 8, 0x00000006),
+ {}
+};
- return (cpu_data(cpu).microcode < rev);
+static bool intel_counter_freezing_broken(void)
+{
+ return !x86_cpu_has_min_microcode_rev(counter_freezing_ucodes);
}
-static __init void intel_glp_counter_freezing_quirk(void)
+static __init void intel_counter_freezing_quirk(void)
{
/* Check if it's already disabled */
if (disable_counter_freezing)
@@ -3943,7 +3938,7 @@ static __init void intel_glp_counter_freezing_quirk(void)
* If the system starts with the wrong ucode, leave the
* counter-freezing feature permanently disabled.
*/
- if (intel_glp_counter_freezing_broken(raw_smp_processor_id())) {
+ if (intel_counter_freezing_broken()) {
pr_info("PMU counter freezing disabled due to CPU errata,"
"please upgrade microcode\n");
x86_pmu.counter_freezing = false;
@@ -4325,7 +4320,7 @@ __init int intel_pmu_init(void)
break;
case INTEL_FAM6_ATOM_GOLDMONT_PLUS:
- x86_add_quirk(intel_glp_counter_freezing_quirk);
+ x86_add_quirk(intel_counter_freezing_quirk);
memcpy(hw_cache_event_ids, glp_hw_cache_event_ids,
sizeof(hw_cache_event_ids));
memcpy(hw_cache_extra_regs, glp_hw_cache_extra_regs,
--
2.7.4