Message-Id: <20200111094911.801043901@linuxfoundation.org>
Date: Sat, 11 Jan 2020 10:50:50 +0100
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org, Wen Yang <wenyang@...ux.alibaba.com>,
Kevin Darbyshire-Bryant <ldir@...byshire-bryant.me.uk>,
Toke Høiland-Jørgensen <toke@...hat.com>,
"David S. Miller" <davem@...emloft.net>,
Cong Wang <xiyou.wangcong@...il.com>,
cake@...ts.bufferbloat.net, netdev@...r.kernel.org,
Toke Høiland-Jørgensen <toke@...e.dk>
Subject: [PATCH 4.19 73/84] sch_cake: avoid possible divide by zero in cake_enqueue()
From: Wen Yang <wenyang@...ux.alibaba.com>
[ Upstream commit 68aab823c223646fab311f8a6581994facee66a0 ]
The variable 'window_interval' is a u64, but do_div() truncates the
divisor to 32 bits, so the value can test as non-zero and still be
truncated to zero for the division. window_interval is measured in
nanoseconds, so it easily exceeds what fits in its lower 32 bits.
Fix this issue by using div64_u64() instead.
Fixes: 7298de9cd725 ("sch_cake: Add ingress mode")
Signed-off-by: Wen Yang <wenyang@...ux.alibaba.com>
Cc: Kevin Darbyshire-Bryant <ldir@...byshire-bryant.me.uk>
Cc: Toke Høiland-Jørgensen <toke@...hat.com>
Cc: David S. Miller <davem@...emloft.net>
Cc: Cong Wang <xiyou.wangcong@...il.com>
Cc: cake@...ts.bufferbloat.net
Cc: netdev@...r.kernel.org
Cc: linux-kernel@...r.kernel.org
Acked-by: Toke Høiland-Jørgensen <toke@...e.dk>
Signed-off-by: David S. Miller <davem@...emloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
net/sched/sch_cake.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--- a/net/sched/sch_cake.c
+++ b/net/sched/sch_cake.c
@@ -1758,7 +1758,7 @@ static s32 cake_enqueue(struct sk_buff *
q->avg_window_begin));
u64 b = q->avg_window_bytes * (u64)NSEC_PER_SEC;
- do_div(b, window_interval);
+ b = div64_u64(b, window_interval);
q->avg_peak_bandwidth =
cake_ewma(q->avg_peak_bandwidth, b,
b > q->avg_peak_bandwidth ? 2 : 8);
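
For illustration only, here is a minimal userspace sketch (not part of
the patch) of why the truncation matters. The helpers below merely
mimic the semantics of the kernel's do_div() (u32 divisor) and
div64_u64() (u64 divisor); the example window_interval value is
hypothetical, chosen so its lower 32 bits are exactly zero.

#include <stdint.h>
#include <stdio.h>

/* Mimics do_div(): the divisor is narrowed to 32 bits, as do_div()'s
 * 'base' parameter is a u32 in the kernel. */
static uint64_t div_with_u32_divisor(uint64_t n, uint64_t divisor)
{
	uint32_t base = (uint32_t)divisor;	/* truncation happens here */
	return n / base;			/* divide-by-zero if the low 32 bits are 0 */
}

/* Mimics div64_u64(): the full 64-bit divisor is used, no truncation. */
static uint64_t div_with_u64_divisor(uint64_t n, uint64_t divisor)
{
	return n / divisor;
}

int main(void)
{
	/* ~8.6 seconds in nanoseconds: non-zero as a u64, but its lower
	 * 32 bits are exactly zero (2 * 2^32). */
	uint64_t window_interval = 2ULL << 32;
	uint64_t bytes = 1000000ULL * 1000000000ULL; /* bytes * NSEC_PER_SEC */

	printf("div64_u64-style result: %llu\n",
	       (unsigned long long)div_with_u64_divisor(bytes, window_interval));

	/* The call below would divide by zero, just as do_div() would
	 * in cake_enqueue() before this fix: */
	/* div_with_u32_divisor(bytes, window_interval); */
	return 0;
}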