Message-ID: <174413909831.31282.3118773367374848387.tip-bot2@tip-bot2>
Date: Tue, 08 Apr 2025 19:04:58 -0000
From: "tip-bot2 for Kan Liang" <tip-bot2@...utronix.de>
To: linux-tip-commits@...r.kernel.org
Cc: Kan Liang <kan.liang@...ux.intel.com>,
"Peter Zijlstra (Intel)" <peterz@...radead.org>,
Thomas Falcon <thomas.falcon@...el.com>, x86@...nel.org,
linux-kernel@...r.kernel.org
Subject: [tip: perf/core] perf: Extend the bit width of the arch-specific flag

The following commit has been merged into the perf/core branch of tip:

Commit-ID: c9449c8506a5df5052ef4d17867699517b10b55a
Gitweb: https://git.kernel.org/tip/c9449c8506a5df5052ef4d17867699517b10b55a
Author: Kan Liang <kan.liang@...ux.intel.com>
AuthorDate: Thu, 27 Mar 2025 12:52:15 -07:00
Committer: Peter Zijlstra <peterz@...radead.org>
CommitterDate: Tue, 08 Apr 2025 20:55:49 +02:00

perf: Extend the bit width of the arch-specific flag

The auto counter reload feature requires an event flag to indicate an
auto counter reload group, which can only be scheduled on the specific
counters enumerated in CPUID. However, hw_perf_event.flags has run out
of bits on X86.
Two solutions were considered to address the issue.
- Currently, 20 bits are reserved for the architecture-specific flags.
  Only bit 31 is used for the generic flag, so there is still plenty of
  space left. Reserve 8 more bits for the arch-specific flags.
- Add a new X86 specific hw_perf_event.flags1 to support more flags.
The former is implemented, since enough room is still left for the
global generic flags.
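
[Editor's note: as an illustration, not part of the patch, the
flag-space arithmetic can be checked in a standalone C snippet. The
mask values and the ACR bit are taken from the diff below; the program
itself is hypothetical.]

  /* Standalone sketch of the flag-space math in this patch. */
  #include <assert.h>
  #include <stdio.h>

  #define OLD_FLAG_ARCH       0x000fffffu  /* 20 arch-specific bits */
  #define NEW_FLAG_ARCH       0x0fffffffu  /* 28 arch-specific bits */
  #define FLAG_USER_READ_CNT  0x80000000u  /* generic flag, bit 31 */

  int main(void)
  {
          unsigned int acr = 0x0100000u;  /* the new ACR flag, bit 20 */

          /* The widened arch mask still avoids the generic bit 31. */
          assert((NEW_FLAG_ARCH & FLAG_USER_READ_CNT) == 0);

          /* ACR overflows the old 20-bit mask but fits the new one. */
          printf("old mask fits ACR: %d\n", (acr & OLD_FLAG_ARCH) == acr);  /* 0 */
          printf("new mask fits ACR: %d\n", (acr & NEW_FLAG_ARCH) == acr);  /* 1 */
          return 0;
  }
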
Signed-off-by: Kan Liang <kan.liang@...ux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Tested-by: Thomas Falcon <thomas.falcon@...el.com>
Link: https://lkml.kernel.org/r/20250327195217.2683619-4-kan.liang@linux.intel.com
---
arch/x86/events/perf_event_flags.h | 41 ++++++++++++++---------------
include/linux/perf_event.h         |  2 +-
2 files changed, 22 insertions(+), 21 deletions(-)

diff --git a/arch/x86/events/perf_event_flags.h b/arch/x86/events/perf_event_flags.h
index 1d9e385..7007833 100644
--- a/arch/x86/events/perf_event_flags.h
+++ b/arch/x86/events/perf_event_flags.h
@@ -2,23 +2,24 @@
/*
* struct hw_perf_event.flags flags
*/
-PERF_ARCH(PEBS_LDLAT, 0x00001) /* ld+ldlat data address sampling */
-PERF_ARCH(PEBS_ST, 0x00002) /* st data address sampling */
-PERF_ARCH(PEBS_ST_HSW, 0x00004) /* haswell style datala, store */
-PERF_ARCH(PEBS_LD_HSW, 0x00008) /* haswell style datala, load */
-PERF_ARCH(PEBS_NA_HSW, 0x00010) /* haswell style datala, unknown */
-PERF_ARCH(EXCL, 0x00020) /* HT exclusivity on counter */
-PERF_ARCH(DYNAMIC, 0x00040) /* dynamic alloc'd constraint */
-PERF_ARCH(PEBS_CNTR, 0x00080) /* PEBS counters snapshot */
-PERF_ARCH(EXCL_ACCT, 0x00100) /* accounted EXCL event */
-PERF_ARCH(AUTO_RELOAD, 0x00200) /* use PEBS auto-reload */
-PERF_ARCH(LARGE_PEBS, 0x00400) /* use large PEBS */
-PERF_ARCH(PEBS_VIA_PT, 0x00800) /* use PT buffer for PEBS */
-PERF_ARCH(PAIR, 0x01000) /* Large Increment per Cycle */
-PERF_ARCH(LBR_SELECT, 0x02000) /* Save/Restore MSR_LBR_SELECT */
-PERF_ARCH(TOPDOWN, 0x04000) /* Count Topdown slots/metrics events */
-PERF_ARCH(PEBS_STLAT, 0x08000) /* st+stlat data address sampling */
-PERF_ARCH(AMD_BRS, 0x10000) /* AMD Branch Sampling */
-PERF_ARCH(PEBS_LAT_HYBRID, 0x20000) /* ld and st lat for hybrid */
-PERF_ARCH(NEEDS_BRANCH_STACK, 0x40000) /* require branch stack setup */
-PERF_ARCH(BRANCH_COUNTERS, 0x80000) /* logs the counters in the extra space of each branch */
+PERF_ARCH(PEBS_LDLAT, 0x0000001) /* ld+ldlat data address sampling */
+PERF_ARCH(PEBS_ST, 0x0000002) /* st data address sampling */
+PERF_ARCH(PEBS_ST_HSW, 0x0000004) /* haswell style datala, store */
+PERF_ARCH(PEBS_LD_HSW, 0x0000008) /* haswell style datala, load */
+PERF_ARCH(PEBS_NA_HSW, 0x0000010) /* haswell style datala, unknown */
+PERF_ARCH(EXCL, 0x0000020) /* HT exclusivity on counter */
+PERF_ARCH(DYNAMIC, 0x0000040) /* dynamic alloc'd constraint */
+PERF_ARCH(PEBS_CNTR, 0x0000080) /* PEBS counters snapshot */
+PERF_ARCH(EXCL_ACCT, 0x0000100) /* accounted EXCL event */
+PERF_ARCH(AUTO_RELOAD, 0x0000200) /* use PEBS auto-reload */
+PERF_ARCH(LARGE_PEBS, 0x0000400) /* use large PEBS */
+PERF_ARCH(PEBS_VIA_PT, 0x0000800) /* use PT buffer for PEBS */
+PERF_ARCH(PAIR, 0x0001000) /* Large Increment per Cycle */
+PERF_ARCH(LBR_SELECT, 0x0002000) /* Save/Restore MSR_LBR_SELECT */
+PERF_ARCH(TOPDOWN, 0x0004000) /* Count Topdown slots/metrics events */
+PERF_ARCH(PEBS_STLAT, 0x0008000) /* st+stlat data address sampling */
+PERF_ARCH(AMD_BRS, 0x0010000) /* AMD Branch Sampling */
+PERF_ARCH(PEBS_LAT_HYBRID, 0x0020000) /* ld and st lat for hybrid */
+PERF_ARCH(NEEDS_BRANCH_STACK, 0x0040000) /* require branch stack setup */
+PERF_ARCH(BRANCH_COUNTERS, 0x0080000) /* logs the counters in the extra space of each branch */
+PERF_ARCH(ACR, 0x0100000) /* Auto counter reload */
diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index 54dad17..5c54732 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -144,7 +144,7 @@ struct hw_perf_event_extra {
* PERF_EVENT_FLAG_ARCH bits are reserved for architecture-specific
* usage.
*/
-#define PERF_EVENT_FLAG_ARCH 0x000fffff
+#define PERF_EVENT_FLAG_ARCH 0x0fffffff
#define PERF_EVENT_FLAG_USER_READ_CNT 0x80000000

static_assert((PERF_EVENT_FLAG_USER_READ_CNT & PERF_EVENT_FLAG_ARCH) == 0);
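
[Editor's note: for context, perf_event_flags.h is an x-macro list.
Below is a hedged sketch of how such a list is typically consumed to
generate the PERF_X86_EVENT_* constants plus a per-flag sanity check;
the kernel's actual consumer lives in arch/x86/events/perf_event.h and
may differ in detail.]

  /* Sketch: consume the PERF_ARCH() list twice via macro redefinition. */
  #define PERF_EVENT_FLAG_ARCH 0x0fffffff

  #define PERF_ARCH(name, val) PERF_X86_EVENT_##name = val,
  enum {
          /* In the kernel this would be #include "perf_event_flags.h". */
          PERF_ARCH(PEBS_LDLAT, 0x0000001)
          PERF_ARCH(ACR,        0x0100000)
  };
  #undef PERF_ARCH

  /* Every arch flag must fit inside the arch-reserved mask. */
  #define PERF_ARCH(name, val) \
          _Static_assert((PERF_X86_EVENT_##name & PERF_EVENT_FLAG_ARCH) == val, \
                         #name " overflows PERF_EVENT_FLAG_ARCH");
  PERF_ARCH(PEBS_LDLAT, 0x0000001)
  PERF_ARCH(ACR,        0x0100000)
  #undef PERF_ARCH
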