Message-ID: <CAJZ5v0hz8nuwAtExnRvQ7uk46dHRhdYwNZUaYuGsZBu8vv7V=Q@mail.gmail.com>
Date: Wed, 7 Feb 2018 10:19:03 +0100
From: "Rafael J. Wysocki" <rafael@...nel.org>
To: Patrick Bellasi <patrick.bellasi@....com>
Cc: Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Linux PM <linux-pm@...r.kernel.org>,
Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
"Rafael J . Wysocki" <rafael.j.wysocki@...el.com>,
Viresh Kumar <viresh.kumar@...aro.org>,
Vincent Guittot <vincent.guittot@...aro.org>,
Paul Turner <pjt@...gle.com>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Morten Rasmussen <morten.rasmussen@....com>,
Juri Lelli <juri.lelli@...hat.com>,
Todd Kjos <tkjos@...roid.com>,
Joel Fernandes <joelaf@...gle.com>,
Steve Muckle <smuckle@...gle.com>
Subject: Re: [PATCH v4 3/3] sched/cpufreq_schedutil: use util_est for OPP selection
On Tue, Feb 6, 2018 at 3:41 PM, Patrick Bellasi <patrick.bellasi@....com> wrote:
> When schedutil looks at the CPU utilization, the current PELT value for
> that CPU is returned straight away. In certain scenarios this can have
> undesired side effects and delay frequency selection.
>
> For example, since a task's utilization is decayed at wakeup time, a
> big task newly enqueued after a long sleep does not immediately add a
> significant contribution to the target CPU. This introduces some latency
> before schedutil is able to detect the best frequency required by
> that task.
>
> Moreover, the PELT signal's build-up time is a function of the current
> frequency, because of the scale-invariant load tracking support. Thus,
> when starting from a lower frequency, the utilization build-up time
> increases even more, further delaying the selection of the frequency
> which best serves the task's requirements.
>
> To reduce these latencies, integrate the CPU's estimated utilization
> into the sugov_get_util() function.
> This allows schedutil to properly consider the expected utilization of a
> CPU which, for example, has just had a big task enqueued after a long
> sleep period. Ultimately, this allows selecting the best frequency at
> which to run a task right after its wake-up.
>
> Signed-off-by: Patrick Bellasi <patrick.bellasi@....com>
> Reviewed-by: Dietmar Eggemann <dietmar.eggemann@....com>
> Cc: Ingo Molnar <mingo@...hat.com>
> Cc: Peter Zijlstra <peterz@...radead.org>
> Cc: Rafael J. Wysocki <rafael.j.wysocki@...el.com>
> Cc: Viresh Kumar <viresh.kumar@...aro.org>
> Cc: Paul Turner <pjt@...gle.com>
> Cc: Vincent Guittot <vincent.guittot@...aro.org>
> Cc: Morten Rasmussen <morten.rasmussen@....com>
> Cc: Dietmar Eggemann <dietmar.eggemann@....com>
> Cc: linux-kernel@...r.kernel.org
> Cc: linux-pm@...r.kernel.org
Acked-by: Rafael J. Wysocki <rafael.j.wysocki@...el.com>
> ---
> Changes in v4:
> - rebased on today's tip/sched/core (commit 460e8c3340a2)
> - use util_est.enqueued for cfs_rq's util_est (Joel)
> - simplify cpu_util_cfs() integration (Dietmar)
>
> Changes in v3:
> - rebase on today's tip/sched/core (commit 07881166a892)
> - moved into Juri's cpu_util_cfs(), which should also
> address Rafael's suggestion to use a local variable.
>
> Changes in v2:
> - rebase on top of v4.15-rc2
> - tested that overhauled PELT code does not affect the util_est
>
> Change-Id: I62c01ed90d8ad45b06383be03d39fcf8c9041646
> ---
> kernel/sched/sched.h | 7 ++++++-
> 1 file changed, 6 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> index 2e95505e23c6..f3c7b6a83ef4 100644
> --- a/kernel/sched/sched.h
> +++ b/kernel/sched/sched.h
> @@ -2127,7 +2127,12 @@ static inline unsigned long cpu_util_dl(struct rq *rq)
>
> static inline unsigned long cpu_util_cfs(struct rq *rq)
> {
> - return rq->cfs.avg.util_avg;
> + if (!sched_feat(UTIL_EST))
> + return rq->cfs.avg.util_avg;
> +
> + return max_t(unsigned long,
> + rq->cfs.avg.util_avg,
> + rq->cfs.avg.util_est.enqueued);
> }
>
> #endif
> --
> 2.15.1
>