Date:   Tue, 23 Apr 2019 20:44:58 +0200
From:   Peter Zijlstra <peterz@...radead.org>
To:     Xie XiuQi <xiexiuqi@...wei.com>
Cc:     mingo@...hat.com, linux-kernel@...r.kernel.org,
        cj.chengjian@...wei.com
Subject: Re: [PATCH] sched: fix a potential divide error

On Sat, Apr 20, 2019 at 04:34:16PM +0800, Xie XiuQi wrote:
> We hit a divide error on a 3.10.0 kernel; the error message is below:

That is a _realllllllyyyy_ old kernel. I would urge you to upgrade.

> [499992.287996] divide error: 0000 [#1] SMP

> sched_clock_cpu() may not be consistent between cpus. If a task migrates
> to another cpu, its se.exec_start is set to that cpu's rq_clock_task
> by update_stats_curr_start(), which may not be monotonic.
> 
> update_stats_curr_start
>   <- set_next_entity
>      <- set_curr_task_fair
>         <- sched_move_task

That is not in fact a cross-cpu migration path. But I see the point.
Also many migration paths do in fact preserve monotonicity, even when
the clock is busted, but you're right, not all of them.

> So, if now - p->last_task_numa_placement is -1, then (*period + 1) is
> 0, and a divide error is triggered at the div operation:
>   task_numa_placement:
>     runtime = numa_get_avg_runtime(p, &period);
>     f_weight = div64_u64(runtime << 16, period + 1);  // divide error here
> 
> In this patch, we make sure *period is never negative, to avoid this
> divide error.
> 
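
To make the wraparound concrete, here is a minimal userspace sketch
(hypothetical timestamp values, not kernel code) of how an unsigned
period of -1 turns the divisor into zero:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* Hypothetical timestamps: the new cpu's clock reads just
         * behind the stamp saved on the old cpu. */
        uint64_t last_task_numa_placement = 1000;
        uint64_t now = 999;

        /* Unsigned subtraction wraps around to 2^64 - 1, i.e. (u64)-1. */
        uint64_t period = now - last_task_numa_placement;

        printf("period + 1 = %llu\n",
               (unsigned long long)(period + 1));    /* prints 0 */

        /* div64_u64(runtime << 16, period + 1) would divide by zero. */
        return 0;
    }
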
> Signed-off-by: Xie XiuQi <xiexiuqi@...wei.com>
> Cc: stable@...r.kernel.org
> ---
>  kernel/sched/fair.c | 4 ++++
>  1 file changed, 4 insertions(+)
> 
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 40bd1e27b1b7..f2abb258fc85 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -2007,6 +2007,10 @@ static u64 numa_get_avg_runtime(struct task_struct *p, u64 *period)
>  	if (p->last_task_numa_placement) {
>  		delta = runtime - p->last_sum_exec_runtime;
>  		*period = now - p->last_task_numa_placement;
> +
> +		/* Avoid a backward period; prevent a potential divide error */
> +		if ((s64)*period < 0)
> +			*period = 0;
>  	} else {
>  		delta = p->se.avg.load_sum;
>  		*period = LOAD_AVG_MAX;

Yeah, I suppose that is indeed correct.

I'll try and come up with a better Changelog tomorrow.
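
As a sanity check on the fix's arithmetic, a small userspace sketch
(with a stand-in for the kernel's div64_u64() and hypothetical values):
once a negative *period is clamped to 0, the worst case divides by 1:

    #include <stdio.h>
    #include <stdint.h>

    /* Userspace stand-in for the kernel's div64_u64(). */
    static uint64_t div64_u64(uint64_t dividend, uint64_t divisor)
    {
        return dividend / divisor;
    }

    int main(void)
    {
        uint64_t runtime = 5000000;         /* hypothetical runtime delta, ns */
        uint64_t now = 999, last = 1000;    /* clock went backwards by 1ns */

        uint64_t period = now - last;       /* wraps to (u64)-1 */
        if ((int64_t)period < 0)            /* the patch's clamp */
            period = 0;

        /* period + 1 is now 1, so the division is safe. */
        printf("f_weight = %llu\n",
               (unsigned long long)div64_u64(runtime << 16, period + 1));
        return 0;
    }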
