Message-ID: <aMQbIu5QNvPoAsSF@dragon>
Date: Fri, 12 Sep 2025 21:07:46 +0800
From: Shawn Guo <shawnguo2@...h.net>
To: "Rafael J. Wysocki" <rafael@...nel.org>
Cc: Viresh Kumar <viresh.kumar@...aro.org>,
	Qais Yousef <qyousef@...alina.io>, linux-pm@...r.kernel.org,
	linux-kernel@...r.kernel.org, Shawn Guo <shawnguo@...nel.org>,
	stable@...r.kernel.org
Subject: Re: [PATCH] cpufreq: cap the default transition delay at 10 ms

On Fri, Sep 12, 2025 at 12:41:14PM +0200, Rafael J. Wysocki wrote:
> On Wed, Sep 10, 2025 at 8:53 AM Shawn Guo <shawnguo2@...h.net> wrote:
> >
> > From: Shawn Guo <shawnguo@...nel.org>
> >
> > A regression is seen with the 6.6 -> 6.12 kernel upgrade on platforms
> > where the cpufreq-dt driver sets cpuinfo.transition_latency to
> > CPUFREQ_ETERNAL (-1) because the platform's DT doesn't provide the
> > optional property 'clock-latency-ns'.  The dbs sampling_rate was
> > 10000 us on 6.6 and suddenly becomes 6442450 us (4294967295 / 1000 *
> > 1.5) on 6.12 for these platforms, because the 10 ms cap on
> > transition_delay_us was accidentally dropped by the commits below.
> 
> IIRC, this was not accidental.

I could be wrong, but my understanding is that the intention of Qais's
commits was to drop 10 ms (and LATENCY_MULTIPLIER) as the *minimum*
limit on transition_delay_us, so that a much smaller
transition_delay_us becomes possible on platforms like the M1 Mac
mini, where the transition latency is just tens of us.  But it breaks
platforms where 10 ms used to be the *maximum* limit.

Even if removing 10 ms as both the minimum and the maximum limit was
intentional, breaking some platforms surely wasn't, I guess :)

> Why do you want to address the issue in the cpufreq core instead of
> doing that in the cpufreq-dt driver?

My intuition was to fix the regression where it was introduced, by
restoring the previous code behavior.

> CPUFREQ_ETERNAL doesn't appear to be a reasonable default for
> cpuinfo.transition_latency.  Maybe just change the default there to 10
> ms?

I think cpufreq-dt is doing what it's asked to do, no?

 /*
  * Maximum transition latency is in nanoseconds - if it's unknown,
  * CPUFREQ_ETERNAL shall be used.
  */
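
For reference, this is roughly how cpufreq-dt ends up there when the DT
carries no 'clock-latency-ns' (paraphrased from memory of cpufreq-dt.c,
so the details may be off):

	/* no clock-latency-ns anywhere -> OPP reports 0 -> "unknown" */
	transition_latency = dev_pm_opp_get_max_transition_latency(cpu_dev);
	if (!transition_latency)
		transition_latency = CPUFREQ_ETERNAL;

	policy->cpuinfo.transition_latency = transition_latency;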

Also, 10 ms will then be turned into 15 ms by:

	/* Give a 50% breathing room between updates */
	return latency + (latency >> 1);
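
For the CPUFREQ_ETERNAL case that is exactly the number quoted above
(assuming the ns-to-us conversion is applied first, as I read the code):

	4294967295 ns / 1000        = 4294967 us
	4294967 + (4294967 >> 1)    = 6442450 us (~6.4 s)

versus, with a restored 10 ms cap:

	10000 + (10000 >> 1)        = 15000 us (15 ms)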

Shawn

