Message-ID: <CAKfTPtDVGcvmR5BoJpyoOBE19PcWZP+6NjSD7MnJyBAc7VMnmg@mail.gmail.com>
Date: Wed, 1 Mar 2023 11:39:06 +0100
From: Vincent Guittot <vincent.guittot@...aro.org>
To: Qais Yousef <qyousef@...alina.io>
Cc: Dietmar Eggemann <dietmar.eggemann@....com>,
Peter Zijlstra <peterz@...radead.org>,
Kajetan Puchalski <kajetan.puchalski@....com>,
Jian-Min Liu <jian-min.liu@...iatek.com>,
Ingo Molnar <mingo@...nel.org>,
Morten Rasmussen <morten.rasmussen@....com>,
Vincent Donnefort <vdonnefort@...gle.com>,
Quentin Perret <qperret@...gle.com>,
Patrick Bellasi <patrick.bellasi@...bug.net>,
Abhijeet Dharmapurikar <adharmap@...cinc.com>,
Qais Yousef <qais.yousef@....com>,
linux-kernel@...r.kernel.org,
Jonathan JMChen <jonathan.jmchen@...iatek.com>
Subject: Re: [RFC PATCH 0/1] sched/pelt: Change PELT halflife at runtime
On Thu, 23 Feb 2023 at 16:37, Qais Yousef <qyousef@...alina.io> wrote:
>
> On 02/09/23 17:16, Vincent Guittot wrote:
>
> > I don't see how util_est_faster can help this 1ms task here? It will
> > most probably never be preempted during this 1ms. For such a short
> > Android Graphics Pipeline task, isn't uclamp_min what has been
> > designed for, and a better solution?
>
> uclamp_min is being used in UI and helping there. But your mileage might
> vary, as adoption is still ongoing.
>
> The major motivation behind this is to help things like gaming, as in the
> original thread. It can help UI and other use cases too. The Android
> framework has a lot of context about the type of workload, which can help
> it decide when this is beneficial. And OEMs get the chance to tune and
> apply it based on the characteristics of their device.
>
> > IIUC how util_est_faster works, it removes the waiting time when
> > sharing cpu time with other tasks. So as long as there is no (runnable
> > but not running) time, the result is the same as current util_est.
> > util_est_faster makes a difference only when the task alternates
> > between runnable and running slices.
> > Have you considered using the runnable_avg metric in the increase of
> > cpu freq? It takes into account the runnable slices and not only the
> > running time, and increases faster than util_avg when tasks compete
> > for the same CPU.
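
To illustrate that last point, a minimal user-space sketch of the PELT-style
accumulation (my own illustration, not kernel code): two tasks share one CPU,
so each is runnable the whole time but running only half of it. util_avg,
which only counts running time, settles around 50%, while runnable_avg keeps
climbing towards 100%.

#include <stdio.h>
#include <math.h>

int main(void)
{
	double y = pow(0.5, 1.0 / 32.0);	/* per-ms decay for a 32ms halflife */
	double util = 0.0, runnable = 0.0;
	int ms;

	for (ms = 1; ms <= 64; ms++) {
		int running = ms & 1;		/* runs every other millisecond */

		/* util_avg: only running time contributes */
		util = util * y + (running ? 1024.0 * (1.0 - y) : 0.0);
		/* runnable_avg: running + waiting for the CPU contributes */
		runnable = runnable * y + 1024.0 * (1.0 - y);

		if (ms % 16 == 0)
			printf("%2dms: util=%4.0f runnable=%4.0f\n",
			       ms, util, runnable);
	}
	return 0;
}

After 64ms, runnable is roughly twice util, which is the divergence the freq
side could exploit under contention.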
>
> Just to understand why we're heading into this direction now.
>
> AFAIU the desired outcome is to have a faster rampup time (and, on HMP,
> faster up migration), both of which are tied to the utilization signal.
>
> Wouldn't making the util response time faster help not just rampup, but
> rampdown too?
>
> If we improve the util response time, couldn't this mean we can remove
> util_est, or am I missing something?
Not sure, because you still have a ramping step, whereas util_est directly
gives you the final target.
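
A back-of-the-envelope illustration of what I mean (continuous approximation,
my own numbers, not kernel code): even with a shorter halflife, util_avg of a
task that starts from zero still climbs step by step, whereas util_est hands
the previous activation's utilization to schedutil right at enqueue.

#include <stdio.h>
#include <math.h>

/* Continuous approximation of util_avg for a task running from zero. */
static double pelt_util(double run_ms, double halflife_ms)
{
	return 1024.0 * (1.0 - pow(2.0, -run_ms / halflife_ms));
}

int main(void)
{
	double t;

	/* util_est, by contrast, is available immediately at enqueue. */
	for (t = 4.0; t <= 32.0; t += 4.0)
		printf("t=%2.0fms  util(hl=32)=%4.0f  util(hl=8)=%4.0f\n",
		       t, pelt_util(t, 32.0), pelt_util(t, 8.0));
	return 0;
}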
>
> Currently we have the util response, which is tweaked by util_est and then
> tweaked further by schedutil with that 25% margin when mapping util to
> frequency.
The 25% is not related to the ramping time but to the fact that you always
need some margin to cover unexpected events and estimation errors.
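
Roughly, the mapping looks like the sketch below (illustration only, not a
copy of get_next_freq() in kernel/sched/cpufreq_schedutil.c, and assuming
frequency-invariant utilization): next_freq = 1.25 * max_freq * util / max,
so the selected frequency keeps util at about 80% of the capacity it
provides.

/* Illustration of the 25% headroom; the helper name is made up. */
static unsigned long map_util_to_freq(unsigned long util,
				      unsigned long max_freq,
				      unsigned long max_cap)
{
	/* 1.25 implemented as freq + freq/4 */
	return (max_freq + (max_freq >> 2)) * util / max_cap;
}

With max_cap = 1024, a util of ~819 (80%) already maps to max_freq; the
margin is there to absorb estimation error, not to speed up ramping.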
>
> I think if we can improve the general util response time by tweaking the
> PELT halflife, we can potentially remove util_est, and potentially that
> magic 25% margin too.
>
> Why is the approach of further tweaking util_est better?
Note that in this case it doesn't really tweak util_est; Dietmar has taken
runnable_avg into account to increase the freq in case of contention.

Also, IIUC Dietmar's results, the problem seems more linked to the selection
of a higher freq than to increasing the utilization; the runnable_avg tests
give perf results similar to a shorter halflife, with better power
consumption.
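
Very roughly, the direction is something like the sketch below (my own
sketch, not Dietmar's actual patch; it ignores util_est and uclamp): let the
frequency selection see contention by taking the max of the rq's util_avg
and runnable_avg, the latter also counting runnable-but-not-running time.

/* Kernel-context sketch only; cpu_util_for_freq() is a made-up name. */
static unsigned long cpu_util_for_freq(struct rq *rq)
{
	unsigned long util = READ_ONCE(rq->cfs.avg.util_avg);
	unsigned long runnable = READ_ONCE(rq->cfs.avg.runnable_avg);

	return max(util, runnable);
}

Under contention runnable_avg rises above util_avg, so the CPU gets a higher
frequency without touching the PELT halflife; when there is no waiting time
the two signals are the same and nothing changes.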
>
> Recently Phoronix reported that schedutil behavior is suboptimal, and
> I wonder if the response time is contributing to that:
>
> https://www.phoronix.com/review/schedutil-quirky-2023
>
>
> Cheers
>
> --
> Qais Yousef