Message-ID: <20240111204348.669673-2-namhyung@kernel.org>
Date: Thu, 11 Jan 2024 12:43:48 -0800
From: namhyung@...nel.org
To: Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...nel.org>
Cc: Mark Rutland <mark.rutland@....com>,
Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
Arnaldo Carvalho de Melo <acme@...nel.org>,
LKML <linux-kernel@...r.kernel.org>,
Mingwei Zhang <mizhang@...gle.com>,
Namhyung Kim <namhyung@...nel.org>,
Ian Rogers <irogers@...gle.com>,
Kan Liang <kan.liang@...ux.intel.com>
Subject: [PATCH v2 2/2] perf/core: Reduce PMU access to adjust sample freq
From: Namhyung Kim <namhyung@...nel.org>
For throttled events, it first starts the event and then stops it again
unnecessarily just to adjust the sampling frequency.  As the event is
already stopped while throttled, it can adjust the frequency directly
and then start the event once at the end.
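
Not part of the change itself, but for illustration: with the patch applied,
the per-event loop body in perf_adjust_freq_unthr_events() roughly ends up
looking like the simplified excerpt below (paraphrased from the diff that
follows; the surrounding state and filter checks are omitted):

	if (hwc->interrupts == MAX_INTERRUPTS) {
		/* unthrottle: log it, but don't touch the PMU yet */
		hwc->interrupts = 0;
		perf_log_throttle(event, 1);

		if (!event->attr.freq || !event->attr.sample_freq) {
			delta = 0;
			goto next;	/* non-freq event: just restart it */
		}

		if (event->hw.state & PERF_HES_STOPPED)
			goto adjust;	/* already stopped: skip pmu->stop() */
	} else if (!event->attr.freq || !event->attr.sample_freq)
		continue;

	/* stop the event and update event->count */
	event->pmu->stop(event, PERF_EF_UPDATE);

adjust:
	now = local64_read(&event->count);
	delta = now - hwc->freq_count_stamp;
	hwc->freq_count_stamp = now;

	if (delta > 0)
		perf_adjust_period(event, period, delta, false);

next:
	event->pmu->start(event, delta > 0 ? PERF_EF_RELOAD : 0);
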
Reviewed-by: Ian Rogers <irogers@...gle.com>
Reviewed-by: Kan Liang <kan.liang@...ux.intel.com>
Signed-off-by: Namhyung Kim <namhyung@...nel.org>
---
kernel/events/core.c | 13 ++++++++++---
1 file changed, 10 insertions(+), 3 deletions(-)
diff --git a/kernel/events/core.c b/kernel/events/core.c
index e9ce79c8c145..243cd09ba7fe 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -4121,10 +4121,15 @@ static void perf_adjust_freq_unthr_events(struct list_head *event_list)
 		if (hwc->interrupts == MAX_INTERRUPTS) {
 			hwc->interrupts = 0;
 			perf_log_throttle(event, 1);
-			event->pmu->start(event, 0);
-		}
 
-		if (!event->attr.freq || !event->attr.sample_freq)
+			if (!event->attr.freq || !event->attr.sample_freq) {
+				delta = 0;
+				goto next;
+			}
+
+			if (event->hw.state & PERF_HES_STOPPED)
+				goto adjust;
+		} else if (!event->attr.freq || !event->attr.sample_freq)
 			continue;
 
 		/*
@@ -4132,6 +4137,7 @@ static void perf_adjust_freq_unthr_events(struct list_head *event_list)
 		 */
 		event->pmu->stop(event, PERF_EF_UPDATE);
 
+adjust:
 		now = local64_read(&event->count);
 		delta = now - hwc->freq_count_stamp;
 		hwc->freq_count_stamp = now;
@@ -4146,6 +4152,7 @@ static void perf_adjust_freq_unthr_events(struct list_head *event_list)
 		if (delta > 0)
 			perf_adjust_period(event, period, delta, false);
 
+next:
 		event->pmu->start(event, delta > 0 ? PERF_EF_RELOAD : 0);
 	}
 }
--
2.43.0.275.g3460e3d667-goog