Message-Id: <20250626195610.405379-1-kan.liang@linux.intel.com>
Date: Thu, 26 Jun 2025 12:55:57 -0700
From: kan.liang@...ux.intel.com
To: peterz@...radead.org,
mingo@...hat.com,
acme@...nel.org,
namhyung@...nel.org,
tglx@...utronix.de,
dave.hansen@...ux.intel.com,
irogers@...gle.com,
adrian.hunter@...el.com,
jolsa@...nel.org,
alexander.shishkin@...ux.intel.com,
linux-kernel@...r.kernel.org
Cc: dapeng1.mi@...ux.intel.com,
ak@...ux.intel.com,
zide.chen@...el.com,
mark.rutland@....com,
broonie@...nel.org,
ravi.bangoria@....com,
Kan Liang <kan.liang@...ux.intel.com>
Subject: [RFC PATCH V2 00/13] Support vector and more extended registers in perf
From: Kan Liang <kan.liang@...ux.intel.com>
Changes since V1:
- Apply the new interfaces to configure and dump the SIMD registers
- Utilize the existing FPU functions, e.g., xstate_calculate_size() and
  get_xsave_addr().
Starting with Intel Ice Lake, the XMM registers can be collected in
a PEBS record. More registers, e.g., YMM, ZMM, OPMASK, SSP and APX, will
be added in the upcoming architectural PEBS as well. But all of that
requires hardware support.
This patch set provides a software solution that relaxes the hardware
requirement. It uses the XSAVES instruction to retrieve the requested
registers in the overflow handler, so the feature is no longer limited
to PEBS events or specific platforms.
The hardware solution (if available) is still preferred, since it has
lower overhead (especially with large PEBS) and is more accurate.
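On the kernel side, the core of the software approach is small: from the
overflow (NMI) handler, save the requested XSAVE components into a buffer
and then locate each component for copy-out. The fragment below is only a
rough, non-compilable sketch of that idea; xsaves_nmi() is the helper named
in patch 3 of this series, but the signature used here is an assumption,
and the buffer management and copy-out are omitted.

/*
 * Conceptual kernel-side fragment only.  xsaves_nmi() is named by patch 3;
 * the signature shown is an assumption for illustration.  get_xsave_addr()
 * and XFEATURE_YMM are existing kernel symbols.
 */
static void snapshot_simd_regs(struct xregs_state *xsave, u64 xfeatures)
{
	void *ymmh;

	/* Save only the requested XSAVE components from NMI context. */
	xsaves_nmi(xsave, xfeatures);

	/* Locate a component, e.g. the YMM upper halves, in the buffer. */
	ymmh = get_xsave_addr(xsave, XFEATURE_YMM);

	/* ... copy the relevant bytes into the perf sample regs ... */
	(void)ymmh;
}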
In theory, the solution should work on all x86 platforms, but I only
have newer Intel platforms to test on. The patch set therefore only
enables the feature for Intel Ice Lake and later platforms.
The new registers include YMM, ZMM, OPMASK, SSP, and APX.
The sample_regs_user/intr fields have run out of bits, so new fields in
struct perf_event_attr are required for the additional registers.
After a long discussion on V1,
https://lore.kernel.org/lkml/3f1c9a9e-cb63-47ff-a5e9-06555fa6cc9a@linux.intel.com/
the new fields are shown below.
@@ -543,6 +545,25 @@ struct perf_event_attr {
	__u64	sig_data;
	__u64	config3; /* extension of config2 */
+
+
+	/*
+	 * Defines the set of SIMD registers to dump on samples.
+	 * sample_simd_regs_enabled != 0 implies that the
+	 * sample_simd_* fields are used to configure all SIMD registers.
+	 * If !sample_simd_regs_enabled, sample_regs_XXX may still be used
+	 * to configure some SIMD registers on X86.
+	 */
+	union {
+		__u16	sample_simd_regs_enabled;
+		__u16	sample_simd_pred_reg_qwords;
+	};
+	__u32	sample_simd_pred_reg_intr;
+	__u32	sample_simd_pred_reg_user;
+	__u16	sample_simd_vec_reg_qwords;
+	__u64	sample_simd_vec_reg_intr;
+	__u64	sample_simd_vec_reg_user;
+	__u32	__reserved_4;
 };
@@ -1016,7 +1037,15 @@ enum perf_event_type {
* } && PERF_SAMPLE_BRANCH_STACK
*
* { u64 abi; # enum perf_sample_regs_abi
- * u64 regs[weight(mask)]; } && PERF_SAMPLE_REGS_USER
+ * u64 regs[weight(mask)];
+ * struct {
+ * u16 nr_vectors;
+ * u16 vector_qwords;
+ * u16 nr_pred;
+ * u16 pred_qwords;
+ * u64 data[nr_vectors * vector_qwords + nr_pred * pred_qwords];
+ * } && (abi & PERF_SAMPLE_REGS_ABI_SIMD)
+ * } && PERF_SAMPLE_REGS_USER
*
* { u64 size;
* char data[size];
@@ -1043,7 +1072,15 @@ enum perf_event_type {
* { u64 data_src; } && PERF_SAMPLE_DATA_SRC
* { u64 transaction; } && PERF_SAMPLE_TRANSACTION
* { u64 abi; # enum perf_sample_regs_abi
- * u64 regs[weight(mask)]; } && PERF_SAMPLE_REGS_INTR
+ * u64 regs[weight(mask)];
+ * struct {
+ * u16 nr_vectors;
+ * u16 vector_qwords;
+ * u16 nr_pred;
+ * u16 pred_qwords;
+ * u64 data[nr_vectors * vector_qwords + nr_pred * pred_qwords];
+ * } && (abi & PERF_SAMPLE_REGS_ABI_SIMD)
+ * } && PERF_SAMPLE_REGS_INTR
* { u64 phys_addr;} && PERF_SAMPLE_PHYS_ADDR
* { u64 cgroup;} && PERF_SAMPLE_CGROUP
* { u64 data_page_size;} && PERF_SAMPLE_DATA_PAGE_SIZE
Since there is only one vector qwords field, the tool must set the qwords
of the widest requested vector register. For example, if the end user
wants XMM0 and YMM1, the vector qwords should be 4 and the vector mask
should be 0x3. The kernel then dumps YMM0 and YMM1 to userspace, and it's
the tool's responsibility to present XMM0 (the low half of YMM0) and YMM1
to the end user.
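As a concrete illustration of that example, a tool would fill the proposed
attr fields roughly as below. The sample_simd_* fields do not exist in
current uapi headers, so the snippet only mocks the relevant tail of
perf_event_attr; the names and types follow the hunk above and are for
illustration only.

#include <stdint.h>
#include <stdio.h>

/* Mock of the proposed fields; not part of any released uapi header. */
struct simd_attr_fields {
	uint16_t sample_simd_vec_reg_qwords;	/* qwords per vector register */
	uint64_t sample_simd_vec_reg_user;	/* mask of vector registers (user) */
};

int main(void)
{
	struct simd_attr_fields f = { 0 };

	/*
	 * The widest requested register is YMM (256 bits), so the single
	 * qwords field is 256 / 64 = 4.  The mask selects vector registers
	 * 0 and 1; the kernel dumps YMM0 and YMM1, and the tool extracts
	 * XMM0 from the low 2 qwords of YMM0.
	 */
	f.sample_simd_vec_reg_qwords = 256 / 64;
	f.sample_simd_vec_reg_user = (1ULL << 0) | (1ULL << 1);

	printf("qwords=%u mask=%#llx\n",
	       (unsigned)f.sample_simd_vec_reg_qwords,
	       (unsigned long long)f.sample_simd_vec_reg_user);
	return 0;
}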
I have a POC perf tool patch for testing purposes, which is not included
in this RFC series. I will send a complete patch set (including both the
kernel and perf tool changes) once the interface is accepted and there is
no NAK for the solution.
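For reference, consuming the appended SIMD block on the tool side boils
down to walking the four u16 header fields documented above, after the
regs[weight(mask)] array and only when the abi word has
PERF_SAMPLE_REGS_ABI_SIMD set. The sketch below is self-contained and uses
names local to the example (not the proposed uapi); it fabricates a buffer
just to show the walk.

#include <stdint.h>
#include <stdio.h>

/* Local mirror of the documented header: four u16 fields, i.e. one u64. */
struct simd_regs_header {
	uint16_t nr_vectors;	/* number of vector registers dumped */
	uint16_t vector_qwords;	/* qwords per vector register */
	uint16_t nr_pred;	/* number of predicate (OPMASK) registers */
	uint16_t pred_qwords;	/* qwords per predicate register */
};

static const uint64_t *parse_simd_regs(const uint64_t *p)
{
	const struct simd_regs_header *hdr = (const struct simd_regs_header *)p;
	const uint64_t *data = p + 1;	/* header occupies one u64 */
	unsigned int i, q;

	for (i = 0; i < hdr->nr_vectors; i++) {
		printf("vec%u :", i);
		for (q = 0; q < hdr->vector_qwords; q++)
			printf(" %016llx", (unsigned long long)*data++);
		printf("\n");
	}
	for (i = 0; i < hdr->nr_pred; i++) {
		printf("pred%u:", i);
		for (q = 0; q < hdr->pred_qwords; q++)
			printf(" %016llx", (unsigned long long)*data++);
		printf("\n");
	}
	return data;	/* first u64 after the SIMD block */
}

int main(void)
{
	/* Fabricated payload: 2 vectors x 4 qwords (e.g. YMM0/YMM1), no predicates. */
	uint64_t buf[1 + 2 * 4] = { 0 };
	struct simd_regs_header *hdr = (struct simd_regs_header *)buf;

	hdr->nr_vectors = 2;
	hdr->vector_qwords = 4;
	parse_simd_regs(buf);
	return 0;
}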
Kan Liang (13):
perf/x86: Use x86_perf_regs in the x86 nmi handler
perf/x86: Setup the regs data
x86/fpu/xstate: Add xsaves_nmi
perf: Move has_extended_regs() to header file
perf/x86: Support XMM register for non-PEBS and REGS_USER
perf: Support SIMD registers
perf/x86: Move XMM to sample_simd_vec_regs
perf/x86: Add YMM into sample_simd_vec_regs
perf/x86: Add ZMM into sample_simd_vec_regs
perf/x86: Add OPMASK into sample_simd_pred_reg
perf/x86: Add eGPRs into sample_regs
perf/x86: Add SSP into sample_regs
perf/x86/intel: Enable PERF_PMU_CAP_SIMD_REGS
arch/x86/events/core.c | 281 ++++++++++++++++++++++++--
arch/x86/events/intel/core.c | 73 ++++++-
arch/x86/events/intel/ds.c | 12 +-
arch/x86/events/perf_event.h | 32 +++
arch/x86/include/asm/fpu/xstate.h | 3 +
arch/x86/include/asm/perf_event.h | 30 ++-
arch/x86/include/uapi/asm/perf_regs.h | 44 +++-
arch/x86/kernel/fpu/xstate.c | 32 ++-
arch/x86/kernel/perf_regs.c | 105 ++++++++--
include/linux/perf_event.h | 21 ++
include/linux/perf_regs.h | 5 +
include/uapi/linux/perf_event.h | 46 ++++-
kernel/events/core.c | 97 ++++++++-
13 files changed, 731 insertions(+), 50 deletions(-)
--
2.38.1