Message-ID: <20250418085743.GN38216@noisy.programming.kicks-ass.net>
Date: Fri, 18 Apr 2025 10:57:43 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Qing Wong <wangqing7171@...il.com>
Cc: mingo@...hat.com, acme@...nel.org, namhyung@...nel.org,
	mark.rutland@....com, alexander.shishkin@...ux.intel.com,
	jolsa@...nel.org, irogers@...gle.com, adrian.hunter@...el.com,
	kan.liang@...ux.intel.com, linux-perf-users@...r.kernel.org,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/2] Revert "perf/core: Fix hardlockup failure caused by
 perf throttle"

On Sat, Apr 05, 2025 at 10:16:34PM +0800, Qing Wong wrote:
> From: Qing Wang <wangqing7171@...il.com>
> 
> This reverts commit 15def34e2635ab7e0e96f1bc32e1b69609f14942.
> 
> The hardlockup failure does not exist because:
> 1. The hardlockup's watchdog event is a pinned event, which exclusively
> occupies a dedicated PMC (Performance Monitoring Counter) and is unaffected
> by PMC scheduling.
> 2. The hardware event throttling mechanism only disables the specific PMC
> where throttling occurs, without impacting other PMCs. Consequently, the
> hardlockup event's dedicated PMC remains entirely unaffected.
> 
> Signed-off-by: Qing Wang <wangqing7171@...il.com>
> ---
>  kernel/events/core.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/kernel/events/core.c b/kernel/events/core.c
> index 0bb21659e252..29cdb240e104 100644
> --- a/kernel/events/core.c
> +++ b/kernel/events/core.c
> @@ -10049,8 +10049,8 @@ __perf_event_account_interrupt(struct perf_event *event, int throttle)
>  		hwc->interrupts = 1;
>  	} else {
>  		hwc->interrupts++;
> -		if (unlikely(throttle &&
> -			     hwc->interrupts > max_samples_per_tick)) {
> +		if (unlikely(throttle
> +			     && hwc->interrupts >= max_samples_per_tick)) {

Well, it restores bad coding style. The referenced commit also states that
max_samples_per_tick can be 1, at which point we'll always throttle the
thing, since we've just incremented the counter.

That is, the part of the old commit that argued about e050e3f0a71bf having
flipped the compare and the increment is still true. So even though it
might not be related to the hardlockup problem, I still don't think the
patch was wrong.
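
To illustrate that point, here is a toy user-space sketch of just the
restored check (not the real __perf_event_account_interrupt() path), with
max_samples_per_tick assumed to be 1 for the sake of the example:

#include <stdio.h>

int main(void)
{
	unsigned int max_samples_per_tick = 1;	/* assumed sysctl-derived limit */
	unsigned int interrupts = 1;		/* first sample of this tick */

	for (int sample = 2; sample <= 4; sample++) {
		interrupts++;
		/* restored ">=" check: the counter was just incremented,
		 * so with max_samples_per_tick == 1 this always fires */
		if (interrupts >= max_samples_per_tick)
			printf("sample %d: throttled (interrupts=%u)\n",
			       sample, interrupts);
		else
			printf("sample %d: allowed (interrupts=%u)\n",
			       sample, interrupts);
	}
	return 0;
}

Every sample after the first one of the tick comes out throttled, which is
the always-throttle behaviour described above.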
