Message-ID: <CAKfTPtCq+-U34WSUHjs3CkqQM769_Q+FN-5Y+uK=AzdB0YNiLQ@mail.gmail.com>
Date: Tue, 29 Aug 2023 16:10:46 +0200
From: Vincent Guittot <vincent.guittot@...aro.org>
To: Qais Yousef <qyousef@...alina.io>
Cc: Ingo Molnar <mingo@...nel.org>,
Peter Zijlstra <peterz@...radead.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
linux-kernel@...r.kernel.org, Qais Yousef <qais.yousef@....com>
Subject: Re: [PATCH] sched/fair: Check a task has a fitting cpu when updating misfit
On Sun, 20 Aug 2023 at 22:34, Qais Yousef <qyousef@...alina.io> wrote:
>
> From: Qais Yousef <qais.yousef@....com>
>
> If a misfit task is affined to a subset of the possible cpus, we need to
> verify that one of these cpus can fit it. Otherwise the load balancer
> code will continuously trigger needlessly, causing balance_interval to
> grow in return, and we eventually end up in a situation where real
> imbalances take a long time to address because of this impossible
> imbalance situation.
>
> This can happen in the Android world, where it's common for background
> tasks to be restricted to little cores.
>
> Similarly, if the task can't fit on the biggest core, triggering misfit
> is pointless as that is the best it can ever get on this system.
>
> To speed the search up, don't call task_fits_cpu(), which would
> repeatedly call uclamp_eff_value() for the same task; call
> util_fits_cpu() instead. And only do so when we see a cpu with a higher
> capacity level than that of the passed cpu_of(rq).
>
> Signed-off-by: Qais Yousef <qais.yousef@....com>
> Signed-off-by: Qais Yousef (Google) <qyousef@...alina.io>
> ---
> kernel/sched/fair.c | 50 ++++++++++++++++++++++++++++++++++++++-------
> 1 file changed, 43 insertions(+), 7 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 0b7445cd5af9..f08c5f3bf895 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -4853,17 +4853,50 @@ static inline int task_fits_cpu(struct task_struct *p, int cpu)
>
>  static inline void update_misfit_status(struct task_struct *p, struct rq *rq)
>  {
> +        unsigned long uclamp_min, uclamp_max;
> +        unsigned long util, cap_level;
> +        bool has_fitting_cpu = false;
> +        int cpu = cpu_of(rq);
> +
>          if (!sched_asym_cpucap_active())
>                  return;
>
> -        if (!p || p->nr_cpus_allowed == 1) {
> -                rq->misfit_task_load = 0;
> -                return;
> -        }
> +        if (!p || p->nr_cpus_allowed == 1)
> +                goto out;
>
> -        if (task_fits_cpu(p, cpu_of(rq))) {
> -                rq->misfit_task_load = 0;
> -                return;
> +        uclamp_min = uclamp_eff_value(p, UCLAMP_MIN);
> +        uclamp_max = uclamp_eff_value(p, UCLAMP_MAX);
> +        util = task_util_est(p);
> +
> +        if (util_fits_cpu(util, uclamp_min, uclamp_max, cpu) > 0)
> +                goto out;
> +
> +        cap_level = capacity_orig_of(cpu);
> +
> +        /* If we can't fit on the biggest CPU, that's the best we can ever get. */
> +        if (cap_level == SCHED_CAPACITY_SCALE)
> +                goto out;
> +
> +        /*
> +         * If the task's affinity is not set to default, make sure it is not
> +         * restricted to a subset where no CPU can ever fit it. Triggering
> +         * misfit in this case is pointless as it has nowhere better to move
> +         * to. And it can cause balance_interval to grow too high as we'll
> +         * continuously fail to move it anywhere.
> +         */
> +        if (!cpumask_equal(p->cpus_ptr, cpu_possible_mask)) {
> +                for_each_cpu(cpu, p->cpus_ptr) {
I haven't looked at the problem in detail or at other possibilities so
far, but for_each_cpu() doesn't scale, and update_misfit_status() is
called in pick_next_task_fair(), so you must find another way to detect
this (a rough idea is sketched at the bottom of this mail).
> +                        if (cap_level < capacity_orig_of(cpu)) {
> +                                cap_level = capacity_orig_of(cpu);
> +                                if (util_fits_cpu(util, uclamp_min, uclamp_max, cpu) > 0) {
> +                                        has_fitting_cpu = true;
> +                                        break;
> +                                }
> +                        }
> +                }
> +
> +                if (!has_fitting_cpu)
> +                        goto out;
>          }
>
>          /*
> @@ -4871,6 +4904,9 @@ static inline void update_misfit_status(struct task_struct *p, struct rq *rq)
>           * task_h_load() returns 0.
>           */
>          rq->misfit_task_load = max_t(unsigned long, task_h_load(p), 1);
> +        return;
> +out:
> +        rq->misfit_task_load = 0;
>  }
>
>  #else /* CONFIG_SMP */
> --
> 2.34.1
>
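To illustrate the kind of direction I mean, here is a rough and
completely untested sketch, just to show the idea; max_allowed_capacity
is a made-up new task_struct field and the helper name is invented too.
The highest capacity among the CPUs in p->cpus_ptr only changes when
the affinity mask changes, so it can be computed once in that cold path
and update_misfit_status() is left with an O(1) comparison:

        /* Cold path: recompute whenever the affinity mask changes. */
        static void update_max_allowed_capacity(struct task_struct *p)
        {
                unsigned long max_cap = 0;
                int cpu;

                for_each_cpu(cpu, p->cpus_ptr)
                        max_cap = max(max_cap, capacity_orig_of(cpu));

                /* hypothetical new field in struct task_struct */
                p->max_allowed_capacity = max_cap;
        }

        /* Hot path, in update_misfit_status(): no cpumask walk. */
        if (capacity_orig_of(cpu_of(rq)) >= p->max_allowed_capacity)
                goto out;       /* no allowed CPU has more capacity */

This is coarser than the loop in your patch (it ignores the uclamp
interaction that util_fits_cpu() would catch on intermediate CPUs), but
it keeps the cost out of the pick_next_task_fair() path.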