Message-ID: <6ec54a8f-a602-4f33-96ce-0204f07046e1@nvidia.com>
Date: Wed, 14 Feb 2024 17:12:13 +0000
From: Jon Hunter <jonathanh@...dia.com>
To: Vincent Guittot <vincent.guittot@...aro.org>, mingo@...hat.com,
peterz@...radead.org, juri.lelli@...hat.com, dietmar.eggemann@....com,
rostedt@...dmis.org, bsegall@...gle.com, mgorman@...e.de,
bristot@...hat.com, vschneid@...hat.com, wkarny@...il.com,
torvalds@...ux-foundation.org, qyousef@...alina.io, tglx@...utronix.de,
rafael@...nel.org, viresh.kumar@...aro.org, linux-kernel@...r.kernel.org,
linux-pm@...r.kernel.org,
"linux-tegra@...r.kernel.org" <linux-tegra@...r.kernel.org>,
Thierry Reding <treding@...dia.com>, Sasha Levin <sashal@...dia.com>,
Laxman Dewangan <ldewangan@...dia.com>,
Shardar Mohammed <smohammed@...dia.com>
Subject: Re: [PATCH] sched/fair: Fix frequency selection for non invariant
case
Hi Vincent,
On 14/01/2024 18:36, Vincent Guittot wrote:
> When frequency invariance is not enabled, get_capacity_ref_freq(policy)
> returns the current frequency, and the performance margin applied by
> map_util_perf() enabled the utilization to go above the maximum compute
> capacity and thus to select a higher frequency than the current one.
>
> The performance margin is now applied earlier in the path to take into
> account some utilization clamping, so we can't get a utilization higher
> than the maximum compute capacity.
>
> We must use a frequency above the current one to get a chance to
> select a higher OPP when the current one becomes fully used. Apply
> the same margin and return a frequency 25% higher than the current one in
> order to switch to the next OPP before we fully use the CPU at the current
> one.
>
> Reported-by: Linus Torvalds <torvalds@...ux-foundation.org>
> Closes: https://lore.kernel.org/lkml/CAHk-=wgWcYX2oXKtgvNN2LLDXP7kXkbo-xTfumEjmPbjSer2RQ@mail.gmail.com/
> Reported-by: Wyes Karny <wkarny@...il.com>
> Closes: https://lore.kernel.org/lkml/20240114091240.xzdvqk75ifgfj5yx@wyes-pc/
> Fixes: 9c0b4bb7f630 ("sched/cpufreq: Rework schedutil governor performance estimation")
> Signed-off-by: Vincent Guittot <vincent.guittot@...aro.org>
> Tested-by: Wyes Karny <wkarny@...il.com>
> ---
> kernel/sched/cpufreq_schedutil.c | 6 +++++-
> 1 file changed, 5 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
> index 95c3c097083e..d12e95d30e2e 100644
> --- a/kernel/sched/cpufreq_schedutil.c
> +++ b/kernel/sched/cpufreq_schedutil.c
> @@ -133,7 +133,11 @@ unsigned long get_capacity_ref_freq(struct cpufreq_policy *policy)
> if (arch_scale_freq_invariant())
> return policy->cpuinfo.max_freq;
>
> - return policy->cur;
> + /*
> + * Apply a 25% margin so that we select a higher frequency than
> + * the current one before the CPU is full busy
> + */
> + return policy->cur + (policy->cur >> 2);
> }
>
> /**
We have also observed a performance degradation on our Tegra platforms
with v6.8-rc1. Unfortunately, the above change does not fix the problem
for us and we are still seeing a performance issue with v6.8-rc4. For
example, running Dhrystone on Tegra234 I am seeing the following ...
Linux v6.7:
[ 2216.301949] CPU0: Dhrystones per Second: 31976326 (18199 DMIPS)
[ 2220.993877] CPU1: Dhrystones per Second: 49568123 (28211 DMIPS)
[ 2225.685280] CPU2: Dhrystones per Second: 49568123 (28211 DMIPS)
[ 2230.364423] CPU3: Dhrystones per Second: 49632220 (28248 DMIPS)
Linux v6.8-rc4:
[ 44.661686] CPU0: Dhrystones per Second: 16068483 (9145 DMIPS)
[ 51.895107] CPU1: Dhrystones per Second: 16077457 (9150 DMIPS)
[ 59.105410] CPU2: Dhrystones per Second: 16095436 (9160 DMIPS)
[ 66.333297] CPU3: Dhrystones per Second: 16064000 (9142 DMIPS)
If I revert this change and the following ...
b3edde44e5d4 ("cpufreq/schedutil: Use a fixed reference frequency")
f12560779f9d ("sched/cpufreq: Rework iowait boost")
9c0b4bb7f630 ("sched/cpufreq: Rework schedutil governor performance
estimation")
... then the perf is similar to where it was ...
Linux v6.8-rc4 plus reverts:
[ 31.768189] CPU0: Dhrystones per Second: 48421678 (27559 DMIPS)
[ 36.556838] CPU1: Dhrystones per Second: 48401324 (27547 DMIPS)
[ 41.343343] CPU2: Dhrystones per Second: 48421678 (27559 DMIPS)
[ 46.163347] CPU3: Dhrystones per Second: 47679814 (27137 DMIPS)
All CPUs are running with the schedutil CPUFREQ governor. We have not
looked any further but wanted to report this in case you have any more
thoughts or suggestions for us to try.
Thanks
Jon
--
nvpublic