Message-ID: <20230711154718.gudn32sru5opwvlw@airbuntu>
Date: Tue, 11 Jul 2023 16:47:18 +0100
From: Qais Yousef <qyousef@...alina.io>
To: Vincent Guittot <vincent.guittot@...aro.org>
Cc: mingo@...hat.com, peterz@...radead.org, juri.lelli@...hat.com,
dietmar.eggemann@....com, rostedt@...dmis.org, bsegall@...gle.com,
mgorman@...e.de, bristot@...hat.com, vschneid@...hat.com,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] sched/fair: remove util_est boosting
On 07/06/23 15:51, Vincent Guittot wrote:
> There is no need to use runnable_avg when estimating util_est; doing so
> even generates wrong behavior, because runnable_avg includes blocked
> tasks whereas util_est doesn't. This can lead to accounting the waking
> task p twice: once via its blocked contribution in runnable_avg, and
> again when adding its util_est.
>
> The cpu's runnable_avg is already used when computing util_avg, which is
> then compared with util_est.
>
> In some situations, feec() will not select prev_cpu but another CPU in
> the same performance domain, because of a higher max_util.
>
> Fixes: 7d0583cf9ec7 ("sched/fair, cpufreq: Introduce 'runnable boosting'")
> Signed-off-by: Vincent Guittot <vincent.guittot@...aro.org>
> ---
Can we verify the numbers that introduced this magic boost are still valid
please?
Otherwise LGTM.
Thanks!
--
Qais Yousef
> kernel/sched/fair.c | 3 ---
> 1 file changed, 3 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index a80a73909dc2..77c9f5816c31 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -7289,9 +7289,6 @@ cpu_util(int cpu, struct task_struct *p, int dst_cpu, int boost)
>
> util_est = READ_ONCE(cfs_rq->avg.util_est.enqueued);
>
> - if (boost)
> - util_est = max(util_est, runnable);
> -
> /*
> * During wake-up @p isn't enqueued yet and doesn't contribute
> * to any cpu_rq(cpu)->cfs.avg.util_est.enqueued.
> --
> 2.34.1
>