Message-ID: <CAKfTPtDh_aQn15to7E9JypVXarFVcEL+jiWJMV6J7-Gijj9SyQ@mail.gmail.com>
Date: Wed, 3 May 2023 18:08:54 +0200
From: Vincent Guittot <vincent.guittot@...aro.org>
To: Dietmar Eggemann <dietmar.eggemann@....com>
Cc: Ingo Molnar <mingo@...nel.org>,
Peter Zijlstra <peterz@...radead.org>,
Qais Yousef <qyousef@...alina.io>,
Kajetan Puchalski <kajetan.puchalski@....com>,
Morten Rasmussen <morten.rasmussen@....com>,
Vincent Donnefort <vdonnefort@...gle.com>,
Quentin Perret <qperret@...gle.com>,
Abhijeet Dharmapurikar <adharmap@...cinc.com>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/1] sched: Consider CPU contention in frequency &
load-balance busiest CPU selection
On Thu, 6 Apr 2023 at 17:50, Dietmar Eggemann <dietmar.eggemann@....com> wrote:
>
> Use new cpu_boosted_util_cfs() instead of cpu_util_cfs().
>
> The former returns max(util_avg, runnable_avg) capped by max CPU
> capacity. CPU contention is thereby considered through runnable_avg.
>
> The change in load-balance only affects migration type `migrate_util`.
It would be good to get some figures showing the benefit.
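
To make the expected effect concrete, here is a toy user-space
rendering of the max/min arithmetic the patch adds (made-up PELT
numbers, plain C, not kernel code; the real cpu_util_cfs() also
folds in util_est, which is ignored here):

#include <stdio.h>

/* Toy stand-ins for the PELT signals, which range over 0..1024. */
static unsigned long boosted_util(unsigned long util_avg,
				  unsigned long runnable_avg,
				  unsigned long capacity_orig)
{
	unsigned long util = util_avg < capacity_orig ? util_avg : capacity_orig;
	unsigned long runnable = runnable_avg < capacity_orig ? runnable_avg : capacity_orig;

	return util > runnable ? util : runnable;
}

int main(void)
{
	/* No contention: runnable_avg == util_avg, no boost. */
	printf("%lu\n", boosted_util(400, 400, 1024));	/* 400 */
	/* Contention: tasks waiting on the rq inflate runnable_avg. */
	printf("%lu\n", boosted_util(400, 900, 1024));	/* 900 */
	/* Overload: the boost is capped at capacity_orig. */
	printf("%lu\n", boosted_util(700, 1600, 1024));	/* 1024 */
	return 0;
}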
>
> Suggested-by: Vincent Guittot <vincent.guittot@...aro.org>
> Signed-off-by: Dietmar Eggemann <dietmar.eggemann@....com>
> ---
>  kernel/sched/cpufreq_schedutil.c |  3 ++-
>  kernel/sched/fair.c              |  2 +-
>  kernel/sched/sched.h             | 19 +++++++++++++++++++
> 3 files changed, 22 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
> index e3211455b203..728b186cd367 100644
> --- a/kernel/sched/cpufreq_schedutil.c
> +++ b/kernel/sched/cpufreq_schedutil.c
> @@ -158,7 +158,8 @@ static void sugov_get_util(struct sugov_cpu *sg_cpu)
>  	struct rq *rq = cpu_rq(sg_cpu->cpu);
>
>  	sg_cpu->bw_dl = cpu_bw_dl(rq);
> -	sg_cpu->util = effective_cpu_util(sg_cpu->cpu, cpu_util_cfs(sg_cpu->cpu),
> +	sg_cpu->util = effective_cpu_util(sg_cpu->cpu,
> +					  cpu_boosted_util_cfs(sg_cpu->cpu),
Shouldn't we make a similar change in feec() so that it correctly
estimates which OPP/frequency schedutil will select?
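Something along these lines, maybe (completely untested sketch against
the current eenv_pd_max_util(); how the boost should interact with the
waking task's contribution from cpu_util_next() is hand-waved here):

static inline unsigned long
eenv_pd_max_util(struct energy_env *eenv, struct cpumask *pd_cpus,
		 struct task_struct *p, int dst_cpu)
{
	unsigned long max_util = 0;
	int cpu;

	for_each_cpu(cpu, pd_cpus) {
		struct task_struct *tsk = (cpu == dst_cpu) ? p : NULL;
		unsigned long util = cpu_util_next(cpu, p, dst_cpu);

		/*
		 * Boost with runnable_avg here as well so that the
		 * energy model predicts the OPP that schedutil will
		 * actually request from the boosted utilization.
		 */
		util = max(util, READ_ONCE(cpu_rq(cpu)->cfs.avg.runnable_avg));

		max_util = max(max_util, effective_cpu_util(cpu, util,
							    FREQUENCY_UTIL, tsk));
	}

	return min(max_util, eenv->cpu_cap);
}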
>  					  FREQUENCY_UTIL, NULL);
>  }
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index bc358dc4faeb..5ae36224a1c2 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -10481,7 +10481,7 @@ static struct rq *find_busiest_queue(struct lb_env *env,
>  			break;
>
>  		case migrate_util:
> -			util = cpu_util_cfs(i);
> +			util = cpu_boosted_util_cfs(i);
>
>  			/*
>  			 * Don't try to pull utilization from a CPU with one
> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> index 060616944d7a..f42c859579d9 100644
> --- a/kernel/sched/sched.h
> +++ b/kernel/sched/sched.h
> @@ -2994,6 +2994,25 @@ static inline unsigned long cpu_util_cfs(int cpu)
>  	return min(util, capacity_orig_of(cpu));
>  }
>
> +/*
> + * cpu_boosted_util_cfs() - Estimates the amount of CPU capacity used by
> + * CFS tasks.
> + *
> + * Similar to cpu_util_cfs() but also takes possible CPU contention into
> + * consideration.
> + */
> +static inline unsigned long cpu_boosted_util_cfs(int cpu)
> +{
> +	unsigned long runnable;
> +	struct cfs_rq *cfs_rq;
> +
> +	cfs_rq = &cpu_rq(cpu)->cfs;
> +	runnable = READ_ONCE(cfs_rq->avg.runnable_avg);
> +	runnable = min(runnable, capacity_orig_of(cpu));
> +
> +	return max(cpu_util_cfs(cpu), runnable);
> +}
> +
>  static inline unsigned long cpu_util_rt(struct rq *rq)
>  {
>  	return READ_ONCE(rq->avg_rt.util_avg);
> --
> 2.25.1
>
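
As a reminder of what is at stake for the OPP selection: schedutil
maps the (now boosted) utilization to a frequency request roughly as
1.25 * max_freq * util / max_capacity (see get_next_freq() and
map_util_freq()/map_util_perf()), so the runnable boost translates
directly into a higher requested OPP. Toy arithmetic with made-up
platform numbers:

#include <stdio.h>

/* Mirrors the effective get_next_freq() arithmetic:
 * 1.25 * max_freq * util / cap, in integer form. */
static unsigned long toy_next_freq(unsigned long util, unsigned long max_freq,
				   unsigned long cap)
{
	return (max_freq + (max_freq >> 2)) * util / cap;
}

int main(void)
{
	unsigned long max_freq = 2000000;	/* kHz, made up */
	unsigned long cap = 1024;

	/* Plain util_avg vs. runnable-boosted util on a contended CPU;
	 * in practice the result is clamped to the policy limits. */
	printf("plain:   %lu kHz\n", toy_next_freq(400, max_freq, cap));
	printf("boosted: %lu kHz\n", toy_next_freq(900, max_freq, cap));
	return 0;
}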