Message-ID: <20240205120147.ui5zab2b2j4looex@airbuntu>
Date: Mon, 5 Feb 2024 12:01:47 +0000
From: Qais Yousef <qyousef@...alina.io>
To: Christian Loehle <christian.loehle@....com>
Cc: "Rafael J. Wysocki" <rafael@...nel.org>,
	Viresh Kumar <viresh.kumar@...aro.org>,
	linux-kernel@...r.kernel.org, linux-pm@...r.kernel.org,
	Ingo Molnar <mingo@...nel.org>,
	Peter Zijlstra <peterz@...radead.org>,
	Vincent Guittot <vincent.guittot@...aro.org>,
	Dietmar Eggemann <dietmar.eggemann@....com>
Subject: Re: [PATCH] cpufreq: Change default transition delay to 2ms

Hi Christian

On 02/05/24 09:17, Christian Loehle wrote:
> On 05/02/2024 02:25, Qais Yousef wrote:
> > 10ms is too high for today's hardware, even low-end ones. This default
> > ends up being used a lot on Arm machines at least. Pine64, Mac mini and
> > Pixel 6 all end up with a 10ms rate_limit_us when using schedutil, and
> > it's too high for all of them.
> > 
> > Change the default to 2ms, which should be 'pessimistic' enough for the
> > worst-case scenario, but not too high for platforms with fast DVFS hardware.
> > 
> > Signed-off-by: Qais Yousef <qyousef@...alina.io>
> > ---
> >  drivers/cpufreq/cpufreq.c | 4 ++--
> >  1 file changed, 2 insertions(+), 2 deletions(-)
> > 
> > diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
> > index 44db4f59c4cc..8207f7294cb6 100644
> > --- a/drivers/cpufreq/cpufreq.c
> > +++ b/drivers/cpufreq/cpufreq.c
> > @@ -582,11 +582,11 @@ unsigned int cpufreq_policy_transition_delay_us(struct cpufreq_policy *policy)
> >  		 * for platforms where transition_latency is in milliseconds, it
> >  		 * ends up giving unrealistic values.
> >  		 *
> > -		 * Cap the default transition delay to 10 ms, which seems to be
> > +		 * Cap the default transition delay to 2 ms, which seems to be
> >  		 * a reasonable amount of time after which we should reevaluate
> >  		 * the frequency.
> >  		 */
> > -		return min(latency * LATENCY_MULTIPLIER, (unsigned int)10000);
> > +		return min(latency * LATENCY_MULTIPLIER, (unsigned int)(2*MSEC_PER_SEC));
> >  	}
> >  
> >  	return LATENCY_MULTIPLIER;
> 
> Hi Qais,
> as previously mentioned, I'm working on improving iowait boost, and while
> I'm not against this patch per se, it does make iowait boosting more
> aggressive (the boost doubling is limited by rate_limit_us). Since the boost
> is often applied when it isn't useful (e.g. for periodic f2fs writebacks on
> Android), this might have some side effects. Please give me a couple of days
> to verify any impact, or did you do that already?
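(For reference, a toy C sketch of the point being made here: the iowait boost
roughly doubles on each frequency update triggered by an iowait wakeup, and
updates are throttled by rate_limit_us, so a lower rate_limit_us lets the
boost reach maximum sooner. This is not the schedutil code; the constants and
function names below are made up purely for illustration.)

#include <stdio.h>

#define BOOST_MIN	128	/* hypothetical minimum boost value */
#define BOOST_MAX	1024	/* hypothetical full-scale boost */

/* Time (in us) for the boost to saturate, one doubling per rate-limit window. */
static unsigned int boost_ramp_time_us(unsigned int rate_limit_us)
{
	unsigned int boost = BOOST_MIN;
	unsigned int t = 0;

	while (boost < BOOST_MAX) {
		boost = (boost * 2 > BOOST_MAX) ? BOOST_MAX : boost * 2;
		t += rate_limit_us;
	}
	return t;
}

int main(void)
{
	printf("rate_limit_us=10000 -> boost saturates after ~%u us\n",
	       boost_ramp_time_us(10000));
	printf("rate_limit_us=2000  -> boost saturates after ~%u us\n",
	       boost_ramp_time_us(2000));
	return 0;
}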

I don't understand the concern, could you elaborate more please?

Products already ship with 500us and 1ms, both of which are lower than this 2ms.

On my AMD desktop it is already 1ms, and I think I've seen Intel systems
defaulting to 500us or something similarly low too. Ideally cpufreq drivers
should set policy->transition_delay_us; this fallback path is only taken when
the driver didn't populate it, which seems to be more common than I'd like, tbh.
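(To illustrate that path: when a driver fills in policy->transition_delay_us,
cpufreq_policy_transition_delay_us() returns it directly and the
LATENCY_MULTIPLIER fallback in the hunk above is never used. A minimal,
hypothetical driver sketch follows; "foo" and the 500us value are invented,
only transition_delay_us and the ->init() hook are the real cpufreq API.)

#include <linux/cpufreq.h>

#define FOO_DVFS_TRANSITION_DELAY_US	500	/* hypothetical: fast DVFS hardware */

static int foo_cpufreq_init(struct cpufreq_policy *policy)
{
	/* ... set up policy->min/max, cpuinfo, clocks, etc. ... */

	/* Tell the governor how often re-evaluating the frequency is worthwhile. */
	policy->transition_delay_us = FOO_DVFS_TRANSITION_DELAY_US;

	return 0;
}

static struct cpufreq_driver foo_cpufreq_driver = {
	.name	= "foo-cpufreq",
	.init	= foo_cpufreq_init,
	/* .verify, .target_index / .fast_switch, etc. omitted from this sketch */
};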

I never run with 10ms; it's too slow. But I have posted several tests against
2ms in the past, for the margins and removal-of-uclamp-max-aggregation series.
Anyway, I ran PCMark storage on Pixel 6 (running a mainline-ish kernel) and
I see:

10ms: 27600
2ms: 29750

HTH

Cheers

--
Qais Yousef
