Message-Id: <20250613134943.3186517-13-kan.liang@linux.intel.com>
Date: Fri, 13 Jun 2025 06:49:43 -0700
From: kan.liang@...ux.intel.com
To: peterz@...radead.org,
mingo@...hat.com,
acme@...nel.org,
namhyung@...nel.org,
tglx@...utronix.de,
dave.hansen@...ux.intel.com,
irogers@...gle.com,
adrian.hunter@...el.com,
jolsa@...nel.org,
alexander.shishkin@...ux.intel.com,
linux-kernel@...r.kernel.org
Cc: dapeng1.mi@...ux.intel.com,
ak@...ux.intel.com,
zide.chen@...el.com,
Kan Liang <kan.liang@...ux.intel.com>
Subject: [RFC PATCH 12/12] perf/x86/intel: Support extended registers
From: Kan Liang <kan.liang@...ux.intel.com>
Support the YMM, APX, OPMASK, ZMM, and SSP registers if there is
XSAVES support for the corresponding xstate component.

Disable large PEBS if any of the extended registers are required.
The new registers are not captured in PEBS records, so they have to
be read at PMI time, which rules out the multi-record large PEBS
mode.
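
User space requests the registers through the sample_regs interface,
as it does today for XMM; the registers added here use the
EXTENDED_REGS2 path from the earlier patches in this series. A
minimal sketch of the existing attr setup (the REGS2 request itself
is omitted, since its ABI is defined in the earlier patches):

	#include <linux/perf_event.h>
	#include <asm/perf_regs.h>
	#include <sys/syscall.h>
	#include <unistd.h>

	/* Sketch: cycles event sampling IP/SP plus XMM0-15 at interrupt. */
	static int open_cycles_with_regs(void)
	{
		struct perf_event_attr attr = {
			.type             = PERF_TYPE_HARDWARE,
			.size             = sizeof(attr),
			.config           = PERF_COUNT_HW_CPU_CYCLES,
			.sample_period    = 100000,
			.sample_type      = PERF_SAMPLE_REGS_INTR,
			/* GP regs of interest plus the XMM block. */
			.sample_regs_intr = (1ULL << PERF_REG_X86_IP) |
					    (1ULL << PERF_REG_X86_SP) |
					    PERF_REG_EXTENDED_MASK,
		};

		return syscall(SYS_perf_event_open, &attr, 0, -1, -1, 0);
	}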
Signed-off-by: Kan Liang <kan.liang@...ux.intel.com>
---
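Note: event_has_extended_regs2() is introduced by an earlier patch in
this series and is not part of this diff. A rough sketch of the
assumed shape, for illustration only (the sample_ext_regs attr field
name is hypothetical; see the earlier patches for the real ABI):

	/* Hypothetical sketch, not the series code: true when the event
	 * requests any register from the extended-regs2 set. */
	static bool event_has_extended_regs2(struct perf_event *event)
	{
		return !!event->attr.sample_ext_regs;	/* field name assumed */
	}
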
arch/x86/events/intel/core.c | 22 ++++++++++++++++++++++
1 file changed, 22 insertions(+)
diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index 5706ee562684..4218067b1843 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -4035,6 +4035,8 @@ static unsigned long intel_pmu_large_pebs_flags(struct perf_event *event)
 		flags &= ~PERF_SAMPLE_REGS_USER;
 	if (event->attr.sample_regs_user & ~PEBS_GP_REGS)
 		flags &= ~(PERF_SAMPLE_REGS_USER | PERF_SAMPLE_REGS_INTR);
+	if (event_has_extended_regs2(event))
+		flags &= ~(PERF_SAMPLE_REGS_USER | PERF_SAMPLE_REGS_INTR);
 	return flags;
 }
 
@@ -5295,6 +5297,26 @@ static void intel_extended_regs_init(struct pmu *pmu)
 		x86_pmu.ext_regs_mask |= BIT_ULL(X86_EXT_REGS_XMM);
 	x86_get_pmu(smp_processor_id())->capabilities |= PERF_PMU_CAP_EXTENDED_REGS;
+
+	if (boot_cpu_has(X86_FEATURE_AVX) &&
+	    cpu_has_xfeatures(XFEATURE_MASK_YMM, NULL))
+		x86_pmu.ext_regs_mask |= BIT_ULL(X86_EXT_REGS_YMM);
+	if (boot_cpu_has(X86_FEATURE_APX) &&
+	    cpu_has_xfeatures(XFEATURE_MASK_APX, NULL))
+		x86_pmu.ext_regs_mask |= BIT_ULL(X86_EXT_REGS_APX);
+	if (boot_cpu_has(X86_FEATURE_AVX512F)) {
+		if (cpu_has_xfeatures(XFEATURE_MASK_OPMASK, NULL))
+			x86_pmu.ext_regs_mask |= BIT_ULL(X86_EXT_REGS_OPMASK);
+		if (cpu_has_xfeatures(XFEATURE_MASK_ZMM_Hi256, NULL))
+			x86_pmu.ext_regs_mask |= BIT_ULL(X86_EXT_REGS_ZMMH);
+		if (cpu_has_xfeatures(XFEATURE_MASK_Hi16_ZMM, NULL))
+			x86_pmu.ext_regs_mask |= BIT_ULL(X86_EXT_REGS_H16ZMM);
+	}
+	if (cpu_feature_enabled(X86_FEATURE_USER_SHSTK))
+		x86_pmu.ext_regs_mask |= BIT_ULL(X86_EXT_REGS_CET);
+
+	if (x86_pmu.ext_regs_mask != BIT_ULL(X86_EXT_REGS_XMM))
+		x86_get_pmu(smp_processor_id())->capabilities |= PERF_PMU_CAP_EXTENDED_REGS2;
 }
 
 static void update_pmu_cap(struct pmu *pmu)
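
The sampled registers reach user space in the existing
PERF_SAMPLE_REGS_INTR record layout: a u64 ABI word followed by one
u64 slot per bit set in the request mask (XMM0-15 already occupy two
slots each in the existing ABI; the wider registers added by this
series presumably extend the same split-slot convention). A minimal
decoder sketch against the documented perf_event_open(2) layout:

	#include <stdio.h>
	#include <linux/perf_event.h>
	#include <linux/types.h>

	/* Walk the { u64 abi; u64 regs[hweight64(mask)]; } block of a
	 * sample record; regs are only present when abi is not NONE. */
	static const __u64 *decode_intr_regs(const __u64 *p, __u64 mask)
	{
		__u64 abi = *p++;	/* PERF_SAMPLE_REGS_ABI_* */
		int bit;

		if (abi != PERF_SAMPLE_REGS_ABI_NONE) {
			for (bit = 0; bit < 64; bit++)
				if (mask & (1ULL << bit))
					printf("reg %2d = 0x%016llx\n", bit,
					       (unsigned long long)*p++);
		}
		return p;
	}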
--
2.38.1