Message-ID: <ea9ce34f-0eab-438c-ab80-45e92a654bdc@arm.com>
Date: Wed, 18 Sep 2024 11:40:05 +0100
From: Christian Loehle <christian.loehle@....com>
To: Qais Yousef <qyousef@...alina.io>, Ingo Molnar <mingo@...nel.org>,
Peter Zijlstra <peterz@...radead.org>,
Vincent Guittot <vincent.guittot@...aro.org>,
"Rafael J. Wysocki" <rafael@...nel.org>,
Viresh Kumar <viresh.kumar@...aro.org>
Cc: Juri Lelli <juri.lelli@...hat.com>, Steven Rostedt <rostedt@...dmis.org>,
Dietmar Eggemann <dietmar.eggemann@....com>, John Stultz
<jstultz@...gle.com>, linux-pm@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH 14/16] sched/schedutil: Ignore dvfs headroom when util
is decaying
On 8/20/24 17:35, Qais Yousef wrote:
> It means we're idling or doing less work and are already running
> at a higher value. No need to apply any dvfs headroom in this case.
>
> Signed-off-by: Qais Yousef <qyousef@...alina.io>
> ---
> kernel/sched/cpufreq_schedutil.c | 11 ++++++++---
> 1 file changed, 8 insertions(+), 3 deletions(-)
>
> diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
> index 318b09bc4ab1..4a1a8b353d51 100644
> --- a/kernel/sched/cpufreq_schedutil.c
> +++ b/kernel/sched/cpufreq_schedutil.c
> @@ -9,6 +9,7 @@
> #define IOWAIT_BOOST_MIN (SCHED_CAPACITY_SCALE / 8)
>
> DEFINE_PER_CPU_READ_MOSTLY(unsigned long, response_time_mult);
> +DEFINE_PER_CPU(unsigned long, last_update_util);
>
> struct sugov_tunables {
> struct gov_attr_set attr_set;
> @@ -262,15 +263,19 @@ static unsigned int get_next_freq(struct sugov_policy *sg_policy,
>  * Also take into account how long tasks have been waiting in runnable but
> * !running state. If it is high, it means we need higher DVFS headroom to
> * reduce it.
> - *
> - * XXX: Should we provide headroom when the util is decaying?
> */
> static inline unsigned long sugov_apply_dvfs_headroom(unsigned long util, int cpu)
> {
> - unsigned long update_headroom, waiting_headroom;
> + unsigned long update_headroom, waiting_headroom, prev_util;
> struct rq *rq = cpu_rq(cpu);
> u64 delay;
>
> + prev_util = per_cpu(last_update_util, cpu);
> + per_cpu(last_update_util, cpu) = util;
> +
> + if (util < prev_util)
> + return util;
> +
> /*
> * What is the possible worst case scenario for updating util_avg, ctx
> * switch or TICK?
Kind of in the same vein as Sultan here: -/+1 util really doesn't tell us
much, so I would be wary of basing any special behavior on it.
This goes for here but also for the 'periodic'-task detection in
[RFC PATCH 09/16] sched/fair: util_est: Take into account periodic tasks.
In my experience, as soon as we leave the world of rt-app workloads behind,
these signals aren't stable enough at that granularity.
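If the early return is kept, comparing against a noise margin instead of a
strict less-than would at least filter out the single-unit wiggles I'm
worried about. A rough sketch (UTIL_DECAY_MARGIN is an invented placeholder
here, not a tuned or measured value):

	/*
	 * Hypothetical variant: only treat util as decaying when it has
	 * dropped by more than a noise margin since the last update.
	 */
	#define UTIL_DECAY_MARGIN	(SCHED_CAPACITY_SCALE / 64)

	prev_util = per_cpu(last_update_util, cpu);
	per_cpu(last_update_util, cpu) = util;

	/* Small downward wiggles are noise, not real decay. */
	if (util + UTIL_DECAY_MARGIN < prev_util)
		return util;

That still wouldn't answer the more fundamental question of whether a lower
instantaneous util is a reliable signal at all, but it would at least stop
reacting to every one-unit fluctuation.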