Date:   Fri, 16 Oct 2020 12:10:07 +0000
From:   Aleksandr Nogikh <a.nogikh@...dex.ru>
To:     stephen@...workplumber.org, jhs@...atatu.com,
        xiyou.wangcong@...il.com, jiri@...nulli.us, davem@...emloft.net,
        kuba@...nel.org
Cc:     andreyknvl@...gle.com, dvyukov@...gle.com, elver@...gle.com,
        rdunlap@...radead.org, dave.taht@...il.com, edumazet@...gle.com,
        netdev@...r.kernel.org, linux-kernel@...r.kernel.org,
        Aleksandr Nogikh <nogikh@...gle.com>,
        syzbot+ec762a6342ad0d3c0d8f@...kaller.appspotmail.com
Subject: [PATCH] netem: prevent division by zero in tabledist

From: Aleksandr Nogikh <nogikh@...gle.com>

Currently it is possible to craft a special netlink RTM_NEWQDISC
command that results in the jitter being equal to 0x80000000. It is
enough to set the 32 bit jitter to 0x02000000 (it is later multiplied
by 2^6) or to set the 64 bit jitter via TCA_NETEM_JITTER64. This
causes an overflow during the generation of uniformly distributed
numbers in tabledist(), which in turn leads to a division by zero
(sigma != 0, but sigma * 2 is 0).
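
For illustration only - a user-space sketch of the arithmetic, not
code from the kernel tree: with sigma = 0x80000000 the 32 bit product
2 * sigma wraps around to 0, so the modulus in tabledist becomes a
division by zero.

	#include <stdint.h>
	#include <stdio.h>

	int main(void)
	{
		/* sigma value reachable via RTM_NEWQDISC as described above */
		uint32_t sigma = 0x80000000u;
		/* stand-in for a get_crandom() result */
		uint32_t rnd = 12345;

		/* 2 * sigma wraps to 0 in 32 bit arithmetic */
		printf("2 * sigma = %u\n", 2 * sigma);

		/* the equivalent of "rnd % (2 * sigma)" from tabledist would
		 * therefore divide by zero, so it is left commented out
		 */
		/* uint32_t r = rnd % (2 * sigma); */
		(void)rnd;
		return 0;
	}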

The affected fragment of code needs 32 bit division - see commit
9b0ed89 ("netem: remove unnecessary 64 bit modulus") - so simply
switching to 64 bit arithmetic is not an option.

Fix the issue by preventing 32 bit integer overflows in tabledist().
Also, instead of truncating the s64 jitter to s32, truncate it to u32,
as a negative standard deviation does not make sense anyway.
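
A rough user-space sketch of the intended clamping semantics - the
helper name and types here are illustrative, not taken from the patch
below:

	#include <stdint.h>

	/* Clamp a signed 64 bit jitter to the u32 range that the 32 bit
	 * division in tabledist can handle.
	 */
	static uint32_t clamp_jitter(int64_t jitter)
	{
		if (jitter <= 0)
			return 0;		/* negative deviation is meaningless */
		if (jitter > (int64_t)UINT32_MAX)
			return UINT32_MAX;	/* widest representable deviation */
		return (uint32_t)jitter;
	}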

Reported-by: syzbot+ec762a6342ad0d3c0d8f@...kaller.appspotmail.com
Signed-off-by: Aleksandr Nogikh <nogikh@...gle.com>
---
 net/sched/sch_netem.c | 15 ++++++++++++---
 1 file changed, 12 insertions(+), 3 deletions(-)

diff --git a/net/sched/sch_netem.c b/net/sched/sch_netem.c
index 84f82771cdf5..d8b0bf1b5346 100644
--- a/net/sched/sch_netem.c
+++ b/net/sched/sch_netem.c
@@ -315,7 +315,7 @@ static bool loss_event(struct netem_sched_data *q)
  * std deviation sigma.  Uses table lookup to approximate the desired
  * distribution, and a uniformly-distributed pseudo-random source.
  */
-static s64 tabledist(s64 mu, s32 sigma,
+static s64 tabledist(s64 mu, u32 sigma,
 		     struct crndstate *state,
 		     const struct disttable *dist)
 {
@@ -329,8 +329,14 @@ static s64 tabledist(s64 mu, s32 sigma,
 	rnd = get_crandom(state);
 
 	/* default uniform distribution */
-	if (dist == NULL)
+	if (!dist) {
+		/* Sigma is too big to perform 32 bit division.
+		 * Use the widest possible deviation.
+		 */
+		if ((u64)sigma * 2ULL >= U32_MAX)
+			return mu + rnd - U32_MAX / 2;
 		return ((rnd % (2 * sigma)) + mu) - sigma;
+	}
 
 	t = dist->table[rnd % dist->size];
 	x = (sigma % NETEM_DIST_SCALE) * t;
@@ -533,7 +539,10 @@ static int netem_enqueue(struct sk_buff *skb, struct Qdisc *sch,
 		u64 now;
 		s64 delay;
 
-		delay = tabledist(q->latency, q->jitter,
+		/* tabledist() cannot handle 64 bit jitter yet, clamp it first */
+		u32 constrained_jitter = q->jitter > 0 ? min_t(u64, q->jitter, U32_MAX) : 0;
+
+		delay = tabledist(q->latency, constrained_jitter,
 				  &q->delay_cor, q->delay_dist);
 
 		now = ktime_get_ns();

base-commit: bbf5c979011a099af5dc76498918ed7df445635b
-- 
2.29.0.rc1.297.gfa9743e501-goog
