Message-ID: <62c13793-f4b4-4e2e-b6bc-0de2427ea93e@linux.intel.com>
Date: Wed, 15 Jan 2025 14:13:51 -0500
From: "Liang, Kan" <kan.liang@...ux.intel.com>
To: Ravi Bangoria <ravi.bangoria@....com>, peterz@...radead.org
Cc: mingo@...hat.com, acme@...nel.org, namhyung@...nel.org,
 eranian@...gle.com, irogers@...gle.com, bp@...en8.de, x86@...nel.org,
 linux-perf-users@...r.kernel.org, linux-kernel@...r.kernel.org,
 santosh.shukla@....com, ananth.narayan@....com, sandipan.das@....com
Subject: Re: [UNTESTED][PATCH] perf/x86: Fix limit_period() for 'freq mode events'



On 2025-01-15 10:49 a.m., Ravi Bangoria wrote:
> For the freq mode event ...
> 
> event->attr.sample_period contains the sampling freq, not the sample period.
> So, use the actual sample period (event->hw.sample_period) when calling
> limit_period().
> 
> The kernel dynamically adjusts the event sample period after every sample to
> meet the desired sampling freq. For this, the kernel starts with sample
> period = 1 and gradually increases it. Instead of simply returning an error,
> start calibrating the freq with the minimum sample period provided by
> limit_period().
> 
> Similarly, the value provided with ioctl(PERF_EVENT_IOC_PERIOD) contains the
> new freq, not the sample period. Avoid calling limit_period() for a freq mode
> event in the ioctl() code path.
> 
> Signed-off-by: Ravi Bangoria <ravi.bangoria@....com>
> ---
> UNTESTED: limit_period() is mostly defined by Intel PMUs, and I don't have
> any of those machines to test on.
> 
>  arch/x86/events/core.c | 23 +++++++++++++++++++----
>  1 file changed, 19 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
> index c75c482d4c52..924aa35676d3 100644
> --- a/arch/x86/events/core.c
> +++ b/arch/x86/events/core.c
> @@ -629,10 +629,22 @@ int x86_pmu_hw_config(struct perf_event *event)
>  		event->hw.config |= x86_pmu_get_event_config(event);
>  
>  	if (event->attr.sample_period && x86_pmu.limit_period) {
> -		s64 left = event->attr.sample_period;
> -		x86_pmu.limit_period(event, &left);
> -		if (left > event->attr.sample_period)
> -			return -EINVAL;
> +		if (event->attr.freq) {
> +			s64 left = event->hw.sample_period;
> +
> +			x86_pmu.limit_period(event, &left);
> +			if (left != event->hw.sample_period) {
> +				event->hw.sample_period = left;
> +				event->hw.last_period = left;
> +				local64_set(&event->hw.period_left, left);
> +			}

For a better start period, I'd prefer the below patch.
https://lore.kernel.org/lkml/20241022130414.2493923-1-kan.liang@linux.intel.com/

The limit_period() check was introduced in commit c46e665f0377 ("perf/x86:
Add INST_RETIRED.ALL workarounds"). To my understanding, it's there to check
the !freq case. If so, I'm thinking of something like the below.

diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
index 79a4aad5a0a3..6467ecc65486 100644
--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -630,7 +630,7 @@ int x86_pmu_hw_config(struct perf_event *event)
 	if (event->attr.type == event->pmu->type)
 		event->hw.config |= x86_pmu_get_event_config(event);

-	if (event->attr.sample_period && x86_pmu.limit_period) {
+	if (!event->attr.freq && x86_pmu.limit_period) {
 		s64 left = event->attr.sample_period;
 		x86_pmu.limit_period(event, &left);
 		if (left > event->attr.sample_period)
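
Just to illustrate the freq vs. !freq distinction this check depends on (an
untested userspace sketch, not part of either patch; the open_counter() helper
name is only for illustration): attr.freq selects whether the union in
perf_event_attr carries a sampling frequency or a fixed sample period.

#include <linux/perf_event.h>
#include <sys/syscall.h>
#include <string.h>
#include <unistd.h>

/* Open a CPU-cycles sampling event on the calling thread, any CPU. */
static int open_counter(int freq_mode)
{
	struct perf_event_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.type = PERF_TYPE_HARDWARE;
	attr.config = PERF_COUNT_HW_CPU_CYCLES;

	if (freq_mode) {
		attr.freq = 1;
		attr.sample_freq = 4000;	/* desired samples per second */
	} else {
		attr.sample_period = 100000;	/* events between samples */
	}

	/* pid = 0 (self), cpu = -1 (any), no group, no flags */
	return syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
}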


> +		} else {
> +			s64 left = event->attr.sample_period;
> +
> +			x86_pmu.limit_period(event, &left);
> +			if (left > event->attr.sample_period)
> +				return -EINVAL;
> +		}
>  	}
>  
>  	/* sample_regs_user never support XMM registers */
> @@ -2648,6 +2660,9 @@ static int x86_pmu_check_period(struct perf_event *event, u64 value)
>  	if (x86_pmu.check_period && x86_pmu.check_period(event, value))
>  		return -EINVAL;
>  
> +	if (event->attr.freq)
> +		return 0;
> +

The ioctl(PERF_EVENT_IOC_PERIOD) can be used to set either a freq or a
period. But according to the implementation, yes, perf_event_check_period()
should only apply to the !freq mode.

If so, we may change the generic code.

diff --git a/kernel/events/core.c b/kernel/events/core.c
index f91ba29048ce..a9a04d4f3619 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -5960,14 +5960,15 @@ static int _perf_event_period(struct perf_event *event, u64 value)
 	if (!value)
 		return -EINVAL;

-	if (event->attr.freq && value > sysctl_perf_event_sample_rate)
-		return -EINVAL;
-
-	if (perf_event_check_period(event, value))
-		return -EINVAL;
-
-	if (!event->attr.freq && (value & (1ULL << 63)))
-		return -EINVAL;
+	if (event->attr.freq) {
+		if (value > sysctl_perf_event_sample_rate)
+			return -EINVAL;
+	} else {
+		if (perf_event_check_period(event, value))
+			return -EINVAL;
+		if (value & (1ULL << 63))
+			return -EINVAL;
+	}

 	event_function_call(event, __perf_event_period, &value);
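
For reference, the value here arrives from userspace via something like the
below (again an untested sketch; update_period() is just an illustrative
helper). Whether the value is interpreted as a new freq or a new period
depends on how the event was opened.

#include <linux/perf_event.h>
#include <sys/ioctl.h>

/* Ask the kernel to adopt a new sampling freq (freq-mode event) or period. */
static int update_period(int fd, unsigned long long value)
{
	return ioctl(fd, PERF_EVENT_IOC_PERIOD, &value);
}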

Thanks,
Kan

>  	if (value && x86_pmu.limit_period) {
>  		s64 left = value;
>  		x86_pmu.limit_period(event, &left);

