Message-ID: <20250723210426.590974-3-ysk@kzalloc.com>
Date: Wed, 23 Jul 2025 21:04:28 +0000
From: Yunseong Kim <ysk@...lloc.com>
To: Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>,
Arnaldo Carvalho de Melo <acme@...nel.org>,
Namhyung Kim <namhyung@...nel.org>,
Mark Rutland <mark.rutland@....com>,
Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
Jiri Olsa <jolsa@...nel.org>,
Ian Rogers <irogers@...gle.com>,
Adrian Hunter <adrian.hunter@...el.com>,
"Liang, Kan" <kan.liang@...ux.intel.com>
Cc: Will Deacon <will@...nel.org>,
Yeoreum Yun <yeoreum.yun@....com>,
Austin Kim <austindh.kim@...il.com>,
Michelle Jin <shjy180909@...il.com>,
linux-perf-users@...r.kernel.org,
linux-kernel@...r.kernel.org,
Yunseong Kim <ysk@...lloc.com>,
syzkaller@...glegroups.com
Subject: [PATCH] perf/core: Prevent UBSAN negative-idx shift in throttle/unthrottle group
perf_event_throttle_group() and perf_event_unthrottle_group() could invoke
pmu->start()/pmu->stop() on events still in the OFF state. Such events have
event->hw.idx == -1, which triggers UBSAN shift-out-of-bounds warnings
(negative shift exponent) in the PMU enable/disable paths.

Check 'event->state > PERF_EVENT_STATE_OFF' for both the group leader and
each sibling, so that only started events with a valid hw.idx are passed
to the PMU, avoiding the negative-shift undefined behavior.
The issue is reproducible with the syzlang and C reproducers available at:
Link: https://lore.kernel.org/lkml/14fb716a-dedf-482a-8518-e5cc26165e97@kzalloc.com/
Fixes: 9734e25fbf5a ("perf: Fix the throttle logic for a group")
Signed-off-by: Yunseong Kim <ysk@...lloc.com>
Tested-by: Yunseong Kim <ysk@...lloc.com>
Cc: Mark Rutland <mark.rutland@....com>
Cc: Yeoreum Yun <yeoreum.yun@....com>
Cc: syzkaller@...glegroups.com
---
kernel/events/core.c | 22 ++++++++++++++++------
1 file changed, 16 insertions(+), 6 deletions(-)
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 22fdf0c187cd..e5cec61be545 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -2684,18 +2684,28 @@ static void perf_event_unthrottle_group(struct perf_event *event, bool skip_star
 {
 	struct perf_event *sibling, *leader = event->group_leader;
 
-	perf_event_unthrottle(leader, skip_start_event ? leader != event : true);
-	for_each_sibling_event(sibling, leader)
-		perf_event_unthrottle(sibling, skip_start_event ? sibling != event : true);
+	if (leader->state > PERF_EVENT_STATE_OFF)
+		perf_event_unthrottle(leader,
+				      skip_start_event ? leader != event : true);
+
+	for_each_sibling_event(sibling, leader) {
+		if (sibling->state > PERF_EVENT_STATE_OFF)
+			perf_event_unthrottle(sibling,
+					      skip_start_event ? sibling != event : true);
+	}
 }
 
 static void perf_event_throttle_group(struct perf_event *event)
 {
 	struct perf_event *sibling, *leader = event->group_leader;
 
-	perf_event_throttle(leader);
-	for_each_sibling_event(sibling, leader)
-		perf_event_throttle(sibling);
+	if (leader->state > PERF_EVENT_STATE_OFF)
+		perf_event_throttle(leader);
+
+	for_each_sibling_event(sibling, leader) {
+		if (sibling->state > PERF_EVENT_STATE_OFF)
+			perf_event_throttle(sibling);
+	}
 }
 
 static int
--
2.50.0