Message-ID: <CAKfTPtB72Oa8XvH_xRFiKy+bfyEKUMdndKphFAD3dixePBUohQ@mail.gmail.com>
Date: Mon, 15 Jan 2024 16:26:52 +0100
From: Vincent Guittot <vincent.guittot@...aro.org>
To: Dietmar Eggemann <dietmar.eggemann@....com>
Cc: Qais Yousef <qyousef@...alina.io>, Wyes Karny <wkarny@...il.com>,
Linus Torvalds <torvalds@...ux-foundation.org>, Ingo Molnar <mingo@...nel.org>,
linux-kernel@...r.kernel.org, Peter Zijlstra <peterz@...radead.org>,
Thomas Gleixner <tglx@...utronix.de>, Juri Lelli <juri.lelli@...hat.com>,
Steven Rostedt <rostedt@...dmis.org>, Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
Daniel Bristot de Oliveira <bristot@...hat.com>, Valentin Schneider <vschneid@...hat.com>
Subject: Re: [GIT PULL] Scheduler changes for v6.8
On Mon, 15 Jan 2024 at 15:03, Dietmar Eggemann <dietmar.eggemann@....com> wrote:
>
> On 15/01/2024 14:26, Vincent Guittot wrote:
> > On Mon, 15 Jan 2024 at 13:09, Qais Yousef <qyousef@...alina.io> wrote:
> >>
> >> On 01/15/24 09:21, Vincent Guittot wrote:
> >>
> >>>> Or I've done the math wrong :-) But the two don't behave the same for the same
> >>>> kernel with and without CPPC.
> >>>
> >>> They will never behave the same, because they can't:
> >>> - with invariance, the utilization is the utilization at max capacity,
> >>> so we can easily jump several OPPs to go directly to the right one
> >>> - without invariance, the utilization is the utilization at the current
> >>> OPP, so we can only jump to a limited number of OPPs
> >>
> >> I am probably missing some subtlety, but the behavior looks more sensible to
> >> me when we divide by the current capacity instead of the max one.
> >>
> >> It seems what you're saying is that the capacity range for each OPP is 0-1024.
> >
> > Yes that's the case when you don't have frequency invariance
> >
> >> And that's when we know that we saturated the current capacity level we decide
> >> to move on.
> >
> > yes
> >
> >>
> >> As I am trying to remove the hardcoded headroom values, I am wary of adding
> >> another one. But it seems this is a bandaid scenario anyway; so maybe I
> >> shouldn't worry too much about it.
>
> I still don't fully understand this fix.
>
> We had:
>
> sugov_update_single_freq()
>
> sugov_update_single_common()
>
> next_f = get_next_freq()
>
> freq = arch_scale_freq_invariant() ?
> policy->cpuinfo.max_freq : policy->cur (**) <- (2) !freq_inv
>
>
> util = map_util_perf(util); <- (1) util *= 1.25
>
> freq = map_util_freq(util, freq, max); <- (3)
> }
>
>
>
> And now there is:
>
> sugov_update_single_freq()
>
> sugov_update_single_common()
>
> sugov_get_util()
>
> sg_cpu->util = sugov_effective_cpu_perf()
>
> /* Add dvfs headroom to actual utilization */
> actual = map_util_perf(actual) <- (1) util *= 1.25
>
> next_f = get_next_freq()
>
> freq = get_capacity_ref_freq()
>
> return policy->cur (*) <- (2) !freq_inv
>
> freq = map_util_freq(util, freq, max) <- (3)
>
> Still not clear to me why we need this extra 'policy->cur *= 1.25' here
> (*) and not here (**)
Now, util can't be higher than max, to handle the clamping use cases,
whereas it could be the case before. The jump to the next OPP was
previously done with util being higher than max; it's now done with
freq being higher than policy->cur.
>
>