Date:   Thu, 26 Mar 2020 17:03:10 +0100
From:   Vincent Guittot <vincent.guittot@...aro.org>
To:     Aubrey Li <aubrey.li@...el.com>
Cc:     Ingo Molnar <mingo@...hat.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Juri Lelli <juri.lelli@...hat.com>,
        Dietmar Eggemann <dietmar.eggemann@....com>,
        Steven Rostedt <rostedt@...dmis.org>,
        Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
        linux-kernel <linux-kernel@...r.kernel.org>,
        Tim Chen <tim.c.chen@...ux.intel.com>,
        Vineeth Pillai <vpillai@...italocean.com>,
        Joel Fernandes <joel@...lfernandes.org>,
        Aubrey Li <aubrey.li@...ux.intel.com>,
        Phil Auld <pauld@...hat.com>
Subject: Re: [PATCH] sched/fair: Fix negative imbalance in imbalance calculation

On Thu, 26 Mar 2020 at 06:53, Aubrey Li <aubrey.li@...el.com> wrote:
>
> A negative imbalance value was observed after the imbalance
> calculation. This happens when the local sched group type is
> group_fully_busy and the average load of the local group is greater
> than that of the selected busiest group. Fix this by comparing the
> average loads of the local and busiest groups before applying the
> imbalance calculation formula.
>
> Suggested-by: Vincent Guittot <vincent.guittot@...aro.org>
> Signed-off-by: Aubrey Li <aubrey.li@...ux.intel.com>
> Cc: Phil Auld <pauld@...hat.com>

Reviewed-by: Vincent Guittot <vincent.guittot@...aro.org>

> ---
>  kernel/sched/fair.c | 8 ++++++++
>  1 file changed, 8 insertions(+)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index c1217bf..4a2ba3f 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -8761,6 +8761,14 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
>
>                 sds->avg_load = (sds->total_load * SCHED_CAPACITY_SCALE) /
>                                 sds->total_capacity;
> +               /*
> +                * If the local group is more loaded than the selected
> +                * busiest group, don't try to pull any tasks.
> +                */
> +               if (local->avg_load >= busiest->avg_load) {
> +                       env->imbalance = 0;
> +                       return;
> +               }
>         }
>
>         /*
> --
> 2.7.4
>
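
To make the failure mode concrete, below is a minimal userspace sketch
(not kernel code) of the arithmetic involved. The struct, the
compute_imbalance() helper, and the sample load values are all
hypothetical; only the min()-of-two-terms formula mirrors the
fully-busy path of calculate_imbalance() in kernel/sched/fair.c. When
the local group's average load exceeds the busiest group's, both terms
go negative and so does the resulting imbalance; the added guard
returns 0 instead.

#include <stdio.h>

#define SCHED_CAPACITY_SCALE 1024L

struct group_stats {
	long avg_load;        /* scaled by SCHED_CAPACITY_SCALE */
	long group_capacity;
};

/* Mirrors the min() of the pull/push terms used for fully busy groups. */
static long compute_imbalance(const struct group_stats *local,
			      const struct group_stats *busiest,
			      long sds_avg_load, int guarded)
{
	long pull, push;

	/* The patch's guard: local is already more loaded, pull nothing. */
	if (guarded && local->avg_load >= busiest->avg_load)
		return 0;

	pull = (busiest->avg_load - sds_avg_load) * busiest->group_capacity;
	push = (sds_avg_load - local->avg_load) * local->group_capacity;

	return (pull < push ? pull : push) / SCHED_CAPACITY_SCALE;
}

int main(void)
{
	/* Hypothetical loads: local is *more* loaded than "busiest". */
	struct group_stats local   = { .avg_load = 1200, .group_capacity = 1024 };
	struct group_stats busiest = { .avg_load = 1100, .group_capacity = 1024 };
	long sds_avg_load = 1150;	/* domain average sits in between */

	printf("without guard: imbalance = %ld\n",
	       compute_imbalance(&local, &busiest, sds_avg_load, 0));
	printf("with guard:    imbalance = %ld\n",
	       compute_imbalance(&local, &busiest, sds_avg_load, 1));
	return 0;
}

With these assumed numbers, the unguarded path yields an imbalance of
-50, which downstream code would treat as work to migrate; with the
guard it is 0 and the load balancer correctly pulls nothing.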
