Message-ID: <20250115154949.3147-1-ravi.bangoria@amd.com>
Date: Wed, 15 Jan 2025 15:49:49 +0000
From: Ravi Bangoria <ravi.bangoria@....com>
To: <peterz@...radead.org>, <kan.liang@...ux.intel.com>
CC: <ravi.bangoria@....com>, <mingo@...hat.com>, <acme@...nel.org>,
<namhyung@...nel.org>, <eranian@...gle.com>, <irogers@...gle.com>,
<bp@...en8.de>, <x86@...nel.org>, <linux-perf-users@...r.kernel.org>,
<linux-kernel@...r.kernel.org>, <santosh.shukla@....com>,
<ananth.narayan@....com>, <sandipan.das@....com>
Subject: [UNTESTED][PATCH] perf/x86: Fix limit_period() for 'freq mode events'
For freq mode events, event->attr.sample_period contains the sampling
frequency, not the sample period. So use the actual sample period
(event->hw.sample_period) when calling limit_period().

In freq mode, the kernel dynamically adjusts the event's sample period
after every sample to meet the desired sampling frequency. For this, the
kernel starts with sample period = 1 and gradually increases it. Instead
of simply returning an error when the initial period is below the PMU
minimum, start calibrating the frequency with the minimum sample period
provided by limit_period().

Similarly, the value provided along with ioctl(PERF_EVENT_IOC_PERIOD)
contains the new frequency, not the sample period. Avoid calling
limit_period() for freq mode events in the ioctl() code path.
Signed-off-by: Ravi Bangoria <ravi.bangoria@....com>
---
UNTESTED: limit_period() is mostly defined by Intel PMUs and I don't have
any of those test machines.
arch/x86/events/core.c | 23 +++++++++++++++++++----
1 file changed, 19 insertions(+), 4 deletions(-)
diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
index c75c482d4c52..924aa35676d3 100644
--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -629,10 +629,22 @@ int x86_pmu_hw_config(struct perf_event *event)
event->hw.config |= x86_pmu_get_event_config(event);
if (event->attr.sample_period && x86_pmu.limit_period) {
- s64 left = event->attr.sample_period;
- x86_pmu.limit_period(event, &left);
- if (left > event->attr.sample_period)
- return -EINVAL;
+ if (event->attr.freq) {
+ s64 left = event->hw.sample_period;
+
+ x86_pmu.limit_period(event, &left);
+ if (left != event->hw.sample_period) {
+ event->hw.sample_period = left;
+ event->hw.last_period = left;
+ local64_set(&event->hw.period_left, left);
+ }
+ } else {
+ s64 left = event->attr.sample_period;
+
+ x86_pmu.limit_period(event, &left);
+ if (left > event->attr.sample_period)
+ return -EINVAL;
+ }
}
/* sample_regs_user never support XMM registers */
@@ -2648,6 +2660,9 @@ static int x86_pmu_check_period(struct perf_event *event, u64 value)
if (x86_pmu.check_period && x86_pmu.check_period(event, value))
return -EINVAL;
+ if (event->attr.freq)
+ return 0;
+
if (value && x86_pmu.limit_period) {
s64 left = value;
x86_pmu.limit_period(event, &left);
--
2.34.1