Message-ID: <86o6sv6n94.wl-maz@kernel.org>
Date: Mon, 04 Aug 2025 17:54:31 +0100
From: Marc Zyngier <maz@...nel.org>
To: "Rafael J. Wysocki" <rjw@...ysocki.net>
Cc: Linux PM <linux-pm@...r.kernel.org>,
	LKML <linux-kernel@...r.kernel.org>,
	Daniel Lezcano <daniel.lezcano@...aro.org>,
	Christian Loehle <christian.loehle@....com>,
	Artem Bityutskiy <artem.bityutskiy@...ux.intel.com>,
	Aboorva Devarajan <aboorvad@...ux.ibm.com>,
	Thomas Gleixner <tglx@...utronix.de>,
	Mark Rutland <mark.rutland@....com>
Subject: Re: [RFT][PATCH v1 5/5] cpuidle: menu: Avoid discarding useful information

[+ Thomas, Mark]

On Thu, 06 Feb 2025 14:29:05 +0000,
"Rafael J. Wysocki" <rjw@...ysocki.net> wrote:
> 
> From: Rafael J. Wysocki <rafael.j.wysocki@...el.com>
> 
> When giving up on making a high-confidence prediction,
> get_typical_interval() always returns UINT_MAX which means that the
> next idle interval prediction will be based entirely on the time till
> the next timer.  However, the information represented by the most
> recent intervals may not be completely useless in those cases.
> 
> Namely, the largest recent idle interval is an upper bound on the
> recently observed idle duration, so it is reasonable to assume that
> the next idle duration is unlikely to exceed it.  Moreover, this is
> still true after eliminating the suspected outliers if the sample
> set still under consideration is at least as large as 50% of the
> maximum sample set size.
> 
> Accordingly, make get_typical_interval() return the current maximum
> recent interval value in that case instead of UINT_MAX.
> 
> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@...el.com>
> ---
>  drivers/cpuidle/governors/menu.c |   13 ++++++++++++-
>  1 file changed, 12 insertions(+), 1 deletion(-)
> 
> --- a/drivers/cpuidle/governors/menu.c
> +++ b/drivers/cpuidle/governors/menu.c
> @@ -190,8 +190,19 @@
>  	 * This can deal with workloads that have long pauses interspersed
>  	 * with sporadic activity with a bunch of short pauses.
>  	 */
> -	if ((divisor * 4) <= INTERVALS * 3)
> +	if (divisor * 4 <= INTERVALS * 3) {
> +		/*
> +		 * If there are sufficiently many data points still under
> +		 * consideration after the outliers have been eliminated,
> +		 * returning without a prediction would be a mistake because it
> +		 * is likely that the next interval will not exceed the current
> +		 * maximum, so return the latter in that case.
> +		 */
> +		if (divisor >= INTERVALS / 2)
> +			return max;
> +
>  		return UINT_MAX;
> +	}
>  
>  	/* Update the thresholds for the next round. */
>  	if (avg - min > max - avg)

It appears that this patch, which made it in 6.15, results in *a lot*
of extra interrupts on one of my arm64 test machines.

* Without this patch:

maz@...-leg-emma:~$ vmstat -y 1
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 1  0      0 65370828  29244 106088    0    0     0     0   66   26  0  0 100  0  0
 1  0      0 65370828  29244 106088    0    0     0     0  103   66  0  0 100  0  0
 1  0      0 65370828  29244 106088    0    0     0     0   34   12  0  0 100  0  0
 1  0      0 65370828  29244 106088    0    0     0     0   25   12  0  0 100  0  0
 1  0      0 65370828  29244 106088    0    0     0     0   28   14  0  0 100  0  0

we're idling at only a few interrupts per second, which isn't bad for
a 24 CPU toy.

* With this patch:

maz@...-leg-emma:~$ vmstat -y 1
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 1  0      0 65361024  28420 105388    0    0     0     0 3710   27  0  0 100  0  0
 1  0      0 65361024  28420 105388    0    0     0     0 3399   20  0  0 100  0  0
 1  0      0 65361024  28420 105388    0    0     0     0 4439   78  0  0 100  0  0
 1  0      0 65361024  28420 105388    0    0     0     0 5634   14  0  0 100  0  0
 1  0      0 65361024  28420 105388    0    0     0     0 5575   14  0  0 100  0  0

we're idling at anywhere between 3k and 6k interrupts per second. Not
exactly what you want. This appears to be caused by the broadcast
timer IPI.

Reverting this patch on top of 6.16 restores sanity on this machine.

I suspect that we're entering some deep idle state in a much more
aggressive way, leading to a global timer firing as a wake-up
mechanism, and the broadcast IPI being used to kick everybody else
back. This is further confirmed by seeing the broadcast IPI almost
disappearing completely if I load the system a bit.

Daniel, you should be able to reproduce this on a Synquacer box (this
is what I used here).

I'm happy to test things that could help restore some sanity.

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.
