Message-ID: <CAKfTPtB5hqAeJ-0T00azkfC-oYET20aJ8Gq69OrS+1caCgErtg@mail.gmail.com>
Date: Tue, 16 Jan 2024 11:04:10 +0100
From: Vincent Guittot <vincent.guittot@...aro.org>
To: Ingo Molnar <mingo@...nel.org>
Cc: mingo@...hat.com, peterz@...radead.org, juri.lelli@...hat.com,
dietmar.eggemann@....com, rostedt@...dmis.org, bsegall@...gle.com,
mgorman@...e.de, bristot@...hat.com, vschneid@...hat.com, wkarny@...il.com,
torvalds@...ux-foundation.org, qyousef@...alina.io, tglx@...utronix.de,
rafael@...nel.org, viresh.kumar@...aro.org, linux-kernel@...r.kernel.org,
linux-pm@...r.kernel.org
Subject: Re: [PATCH] sched/fair: Fix frequency selection for non invariant case
On Tue, 16 Jan 2024 at 10:59, Ingo Molnar <mingo@...nel.org> wrote:
>
>
> * Vincent Guittot <vincent.guittot@...aro.org> wrote:
>
> > When frequency invariance is not enabled, get_capacity_ref_freq(policy)
> > returns the current frequency and the performance margin applied by
> > map_util_perf(), enabling the utilization to go above the maximum compute
> > capacity and to select a higher frequency than the current one.
> >
> > The performance margin is now applied earlier in the path to take into
> > account some utilization clampings, and we can't get a utilization higher
> > than the maximum compute capacity.
> >
> > We must use a frequency above the current frequency to get a chance to
> > select a higher OPP when the current one becomes fully used. Apply
> > the same margin and return a frequency 25% higher than the current one in
> > order to switch to the next OPP before we fully use the CPU at the current
> > one.
> >
> > Reported-by: Linus Torvalds <torvalds@...ux-foundation.org>
> > Closes: https://lore.kernel.org/lkml/CAHk-=wgWcYX2oXKtgvNN2LLDXP7kXkbo-xTfumEjmPbjSer2RQ@mail.gmail.com/
> > Reported-by: Wyes Karny <wkarny@...il.com>
> > Closes: https://lore.kernel.org/lkml/20240114091240.xzdvqk75ifgfj5yx@wyes-pc/
> > Fixes: 9c0b4bb7f630 ("sched/cpufreq: Rework schedutil governor performance estimation")
> > Signed-off-by: Vincent Guittot <vincent.guittot@...aro.org>
> > Tested-by: Wyes Karny <wkarny@...il.com>
> > ---
> > kernel/sched/cpufreq_schedutil.c | 6 +++++-
> > 1 file changed, 5 insertions(+), 1 deletion(-)
> >
> > diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
> > index 95c3c097083e..d12e95d30e2e 100644
> > --- a/kernel/sched/cpufreq_schedutil.c
> > +++ b/kernel/sched/cpufreq_schedutil.c
> > @@ -133,7 +133,11 @@ unsigned long get_capacity_ref_freq(struct cpufreq_policy *policy)
> > if (arch_scale_freq_invariant())
> > return policy->cpuinfo.max_freq;
> >
> > - return policy->cur;
> > + /*
> > + * Apply a 25% margin so that we select a higher frequency than
> > + * the current one before the CPU is fully busy
> > + */
> > + return policy->cur + (policy->cur >> 2);
> > }
>
> I've updated the changelog to better express what was broken and how we
> fixed it. Ack?
Looks good
Thanks
>
> Ingo
>
> ==========================>
> From: Vincent Guittot <vincent.guittot@...aro.org>
> Date: Sun, 14 Jan 2024 19:36:00 +0100
> Subject: [PATCH] sched/fair: Fix frequency selection for non-invariant case
>
> Linus reported a ~50% performance regression on single-threaded
> workloads on his AMD Ryzen system, and bisected it to:
>
> 9c0b4bb7f630 ("sched/cpufreq: Rework schedutil governor performance estimation")
>
> When frequency invariance is not enabled, get_capacity_ref_freq(policy)
> is supposed to return the current frequency and the performance margin
> applied by map_util_perf(), enabling the utilization to go above the
> maximum compute capacity and to select a higher frequency than the current one.
>
> After the changes in 9c0b4bb7f630, the performance margin was applied
> earlier in the path to take into account utilization clampings and
> we couldn't get a utilization higher than the maximum compute capacity,
> and the CPU remained 'stuck' at lower frequencies.
>
> To fix this, we must use a frequency above the current frequency to
> get a chance to select a higher OPP when the current one becomes fully used.
> Apply the same margin and return a frequency 25% higher than the current
> one in order to switch to the next OPP before we fully use the CPU
> at the current one.
>
> [ mingo: Clarified the changelog. ]
>
> Fixes: 9c0b4bb7f630 ("sched/cpufreq: Rework schedutil governor performance estimation")
> Reported-by: Linus Torvalds <torvalds@...ux-foundation.org>
> Bisected-by: Linus Torvalds <torvalds@...ux-foundation.org>
> Reported-by: Wyes Karny <wkarny@...il.com>
> Signed-off-by: Vincent Guittot <vincent.guittot@...aro.org>
> Signed-off-by: Ingo Molnar <mingo@...nel.org>
> Tested-by: Wyes Karny <wkarny@...il.com>
> Link: https://lore.kernel.org/r/20240114183600.135316-1-vincent.guittot@linaro.org
> ---
> kernel/sched/cpufreq_schedutil.c | 6 +++++-
> 1 file changed, 5 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
> index 95c3c097083e..eece6244f9d2 100644
> --- a/kernel/sched/cpufreq_schedutil.c
> +++ b/kernel/sched/cpufreq_schedutil.c
> @@ -133,7 +133,11 @@ unsigned long get_capacity_ref_freq(struct cpufreq_policy *policy)
> if (arch_scale_freq_invariant())
> return policy->cpuinfo.max_freq;
>
> - return policy->cur;
> + /*
> + * Apply a 25% margin so that we select a higher frequency than
> + * the current one before the CPU is fully busy:
> + */
> + return policy->cur + (policy->cur >> 2);
> }
>
> /**
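For reference, here is a minimal standalone sketch (not kernel code) of the
arithmetic above. It assumes a simplified map_util_freq() with the usual
freq * util / cap semantics and made-up kHz numbers: with the reference
frequency equal to policy->cur, a fully busy CPU (util == max) can never
request more than the current OPP, while the 25% headroom makes it request
~1.25 * cur so the next OPP gets selected.

#include <stdio.h>

/* Simplified stand-in for the kernel's map_util_freq(): freq * util / cap. */
static unsigned long map_util_freq(unsigned long util, unsigned long freq,
				   unsigned long cap)
{
	return freq * util / cap;
}

int main(void)
{
	unsigned long cur = 2000000;	/* current frequency in kHz (illustrative) */
	unsigned long max_cap = 1024;	/* maximum compute capacity */
	unsigned long util = 1024;	/* CPU fully busy at the current OPP */

	/* Old behaviour: ref_freq == policy->cur, so util == max only ever
	 * asks for 'cur' again and the CPU stays stuck at the current OPP. */
	printf("without margin: %lu kHz\n", map_util_freq(util, cur, max_cap));

	/* Fixed behaviour: ref_freq == cur + cur/4 (25% headroom), so a
	 * fully used OPP requests ~1.25 * cur and the next OPP is picked. */
	printf("with margin:    %lu kHz\n",
	       map_util_freq(util, cur + (cur >> 2), max_cap));

	return 0;
}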