Message-ID: <CAKfTPtB2=_O=dJbTH=e_Hg80_rcSvBgwUP+ZMehfyG4sG5W6iQ@mail.gmail.com>
Date: Fri, 13 Sep 2024 18:14:26 +0200
From: Vincent Guittot <vincent.guittot@...aro.org>
To: Pierre Gondois <pierre.gondois@....com>
Cc: linux-kernel@...r.kernel.org, stable@...r.kernel.org, 
	Ingo Molnar <mingo@...hat.com>, Peter Zijlstra <peterz@...radead.org>, 
	Juri Lelli <juri.lelli@...hat.com>, Dietmar Eggemann <dietmar.eggemann@....com>, 
	Steven Rostedt <rostedt@...dmis.org>, Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>, 
	Valentin Schneider <vschneid@...hat.com>
Subject: Re: [PATCH] sched/fair: Fix integer underflow

Hi Pierre

On Fri, 13 Sept 2024 at 10:58, Pierre Gondois <pierre.gondois@....com> wrote:
>
> (struct sg_lb_stats).idle_cpus is of type 'unsigned int'.
> (local->idle_cpus - busiest->idle_cpus) can underflow (to UINT_MAX,
> for instance), and max_t(long, 0, UINT_MAX) will return UINT_MAX
> rather than 0.
>
> Use lsub_positive() instead of max_t().
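
For illustration, a minimal user-space sketch of the scenario the changelog
describes (a hypothetical stand-in for lsub_positive(), not kernel code): on a
64-bit build, long is wider than unsigned int, so a max_t(long, 0, ...)-style
clamp does not catch the wrapped value:

#include <stdio.h>

/* Stand-in with the same clamp-at-zero semantics as the kernel helper. */
static void lsub_positive_sketch(long *ptr, long val)
{
	*ptr -= (*ptr < val) ? *ptr : val;
}

int main(void)
{
	unsigned int local_idle = 2, busiest_idle = 3;

	/* Unsigned subtraction wraps: 2u - 3u == UINT_MAX. */
	unsigned int diff = local_idle - busiest_idle;
	printf("raw diff:       %u\n", diff);

	/* max_t(long, 0, UINT_MAX) style: UINT_MAX is positive as a
	 * 64-bit long, so the clamp does nothing. */
	long clamped = (long)diff > 0 ? (long)diff : 0;
	printf("max_t() style:  %ld\n", clamped);

	/* lsub_positive()-style clamp stops at zero instead. */
	long imbalance = local_idle;
	lsub_positive_sketch(&imbalance, busiest_idle);
	printf("lsub_positive:  %ld\n", imbalance);

	return 0;
}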

Have you actually hit this problem, or is the patch based on code review?

We have the check below in sched_balance_find_src_group(), which should
ensure that local->idle_cpus > busiest->idle_cpus:

if (busiest->group_weight > 1 &&
    local->idle_cpus <= (busiest->idle_cpus + 1)) {
        /*
         * If the busiest group is not overloaded
         * and there is no imbalance between this and busiest
         * group wrt idle CPUs, it is balanced. The imbalance
         * becomes significant if the diff is greater than 1
         * otherwise we might end up to just move the imbalance
         * on another group. Of course this applies only if
         * there is more than 1 CPU per group.
         */
        goto out_balanced;
}
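
To make that argument concrete, a simplified, hypothetical rendering of the
guard (the real sched_balance_find_src_group() performs several other checks
before this one):

#include <assert.h>
#include <stdbool.h>

/* Simplified stand-in for the quoted check, not the actual kernel code. */
static bool group_is_balanced(unsigned int busiest_weight,
			      unsigned int local_idle,
			      unsigned int busiest_idle)
{
	return busiest_weight > 1 && local_idle <= busiest_idle + 1;
}

int main(void)
{
	/* A difference of one idle CPU is still treated as balanced... */
	assert(group_is_balanced(2, 3, 2));
	/* ...so, for groups wider than one CPU, the imbalance path should
	 * only run with local_idle >= busiest_idle + 2, where the unsigned
	 * subtraction local_idle - busiest_idle does not wrap. */
	assert(!group_is_balanced(2, 4, 2));
	return 0;
}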

>
> Fixes: 0b0695f2b34a ("sched/fair: Rework load_balance()")
> cc: stable@...r.kernel.org
> Signed-off-by: Pierre Gondois <pierre.gondois@....com>
> ---
>  kernel/sched/fair.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 9057584ec06d..6d9124499f52 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -10775,8 +10775,8 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
>                          * idle CPUs.
>                          */
>                         env->migration_type = migrate_task;
> -                       env->imbalance = max_t(long, 0,
> -                                              (local->idle_cpus - busiest->idle_cpus));
> +                       env->imbalance = local->idle_cpus;
> +                       lsub_positive(&env->imbalance, busiest->idle_cpus);
>                 }
>
>  #ifdef CONFIG_NUMA
> --
> 2.25.1
>
