Message-ID: <xm26fvr2souk.fsf@sword-of-the-dawn.mtv.corp.google.com>
Date: Mon, 11 Nov 2013 11:46:59 -0800
From: bsegall@...gle.com
To: Michal Nazarewicz <mpn@...gle.com>
Cc: Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
linux-kernel@...r.kernel.org, Michal Nazarewicz <mina86@...a86.com>
Subject: Re: [PATCH] sched: fair: avoid integer overflow
Michal Nazarewicz <mpn@...gle.com> writes:
> From: Michal Nazarewicz <mina86@...a86.com>
>
> sa->runnable_avg_sum is of type u32, so the shift by NICE_0_SHIFT is
> performed in 32-bit arithmetic; the result is only widened to u64
> afterwards, when it is passed to div_u64().  By then any bits shifted
> out of the low 32 bits are already lost.  Casting sa->runnable_avg_sum
> to u64 before shifting fixes this problem.
>
> Signed-off-by: Michal Nazarewicz <mina86@...a86.com>
Reviewed-by: Ben Segall <bsegall@...gle.com>
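
For anyone curious, here is a stand-alone sketch of the C arithmetic in
question (not kernel code; it assumes NICE_0_SHIFT is 10 and uses made-up
field values chosen only to make the 32-bit wraparound visible):

#include <stdint.h>
#include <stdio.h>

#define NICE_0_SHIFT 10		/* assumed value, for illustration only */

int main(void)
{
	/* Illustrative values, picked so the 32-bit shift wraps. */
	uint32_t runnable_avg_sum = 50000000;
	uint32_t runnable_avg_period = 47742;

	/* Old code: the shift wraps in 32 bits, then widens to 64 bits. */
	uint64_t contrib_old = (uint64_t)(runnable_avg_sum << NICE_0_SHIFT)
			/ (runnable_avg_period + 1);

	/* Patched code: cast first, so the whole shift happens in 64 bits. */
	uint64_t contrib_new = ((uint64_t)runnable_avg_sum << NICE_0_SHIFT)
			/ (runnable_avg_period + 1);

	printf("shift in 32 bits: %llu\n", (unsigned long long)contrib_old);
	printf("shift in 64 bits: %llu\n", (unsigned long long)contrib_new);
	return 0;
}

With these made-up numbers the first division only ever sees the low 32
bits of the shifted value, so the two results differ substantially.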
> ---
> kernel/sched/fair.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index df77c60..50f1e170 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -2153,7 +2153,7 @@ static inline void __update_tg_runnable_avg(struct sched_avg *sa,
> long contrib;
>
> /* The fraction of a cpu used by this cfs_rq */
> - contrib = div_u64(sa->runnable_avg_sum << NICE_0_SHIFT,
> + contrib = div_u64((u64)sa->runnable_avg_sum << NICE_0_SHIFT,
> sa->runnable_avg_period + 1);
> contrib -= cfs_rq->tg_runnable_contrib;