Message-ID: <875yrj8acq.mognet@arm.com>
Date: Mon, 20 Dec 2021 17:17:09 +0000
From: Valentin Schneider <valentin.schneider@....com>
To: Vincent Donnefort <vincent.donnefort@....com>,
peterz@...radead.org, mingo@...hat.com, vincent.guittot@...aro.org
Cc: linux-kernel@...r.kernel.org, dietmar.eggemann@....com,
morten.rasmussen@....com, qperret@...gle.com,
Vincent Donnefort <vincent.donnefort@....com>
Subject: Re: [PATCH 1/3] sched/fair: Make cpu_overutilized() EAS dependent
On 20/12/21 12:43, Vincent Donnefort wrote:
> On a system with Energy Aware Scheduling (EAS), tasks are placed according
> to their estimated energy consumption, and load balancing is disabled so as
> not to break that energy-biased placement. If the system becomes
> overutilized, i.e. one of the CPUs has too much utilization, energy-aware
> placement is then disabled in favor of Capacity-Aware Scheduling (CAS),
> including load balancing. This is the sole user of rd->overutilized; hence,
> there is no need to raise it on !EAS systems.
>
> Fixes: 2802bf3cd936 ("sched/fair: Add over-utilization/tipping point indicator")
I'm not sure a Fixes: is warranted; this does not fix any misbehaviour or
performance regression (even if this might gain us a few extra IPS by no
longer writing 1's to rd->overutilized on SMP systems, note that this still
gives us writes of 0's).
Regardless:
Reviewed-by: Valentin Schneider <valentin.schneider@....com>
> Signed-off-by: Vincent Donnefort <vincent.donnefort@....com>
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 095b0aa378df..e2f6fa14e5e7 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -5511,7 +5511,8 @@ static inline void hrtick_update(struct rq *rq)
> #ifdef CONFIG_SMP
> static inline bool cpu_overutilized(int cpu)
> {
> - return !fits_capacity(cpu_util_cfs(cpu), capacity_of(cpu));
> + return sched_energy_enabled() &&
> + !fits_capacity(cpu_util_cfs(cpu), capacity_of(cpu));
> }
>
> static inline void update_overutilized_status(struct rq *rq)
> --
> 2.25.1