Date: Fri, 16 Oct 2020 17:53:13 +0200
From: Eric Dumazet <edumazet@...gle.com>
To: Aleksandr Nogikh <a.nogikh@...dex.ru>
Cc: Stephen Hemminger <stephen@...workplumber.org>,
	Jamal Hadi Salim <jhs@...atatu.com>,
	Cong Wang <xiyou.wangcong@...il.com>,
	Jiri Pirko <jiri@...nulli.us>,
	David Miller <davem@...emloft.net>,
	Jakub Kicinski <kuba@...nel.org>,
	Andrey Konovalov <andreyknvl@...gle.com>,
	Dmitry Vyukov <dvyukov@...gle.com>,
	Marco Elver <elver@...gle.com>,
	Randy Dunlap <rdunlap@...radead.org>,
	Dave Taht <dave.taht@...il.com>,
	netdev <netdev@...r.kernel.org>,
	LKML <linux-kernel@...r.kernel.org>,
	Aleksandr Nogikh <nogikh@...gle.com>,
	syzbot+ec762a6342ad0d3c0d8f@...kaller.appspotmail.com
Subject: Re: [PATCH] netem: prevent division by zero in tabledist

On Fri, Oct 16, 2020 at 2:10 PM Aleksandr Nogikh <a.nogikh@...dex.ru> wrote:
>
> From: Aleksandr Nogikh <nogikh@...gle.com>
>
> Currently it is possible to craft a special netlink RTM_NEWQDISC
> command that results in jitter being equal to 0x80000000. It is enough
> to set the 32 bit jitter to 0x02000000 (it will later be multiplied by
> 2^6) or to set the 64 bit jitter via TCA_NETEM_JITTER64. This causes an
> overflow during the generation of uniformly distributed numbers in
> tabledist(), which in turn leads to division by zero (sigma != 0, but
> sigma * 2 is 0).
>
> The related fragment of code needs 32-bit division - see commit
> 9b0ed89 ("netem: remove unnecessary 64 bit modulus"), so
> switching to 64 bit is not an option.
>
> Fix the issue by preventing 32 bit integer overflows in
> tabledist(). Also, instead of truncating the s64 integer to s32,
> truncate it to u32, as a negative standard deviation does not make
> sense anyway.
>
> Reported-by: syzbot+ec762a6342ad0d3c0d8f@...kaller.appspotmail.com
> Signed-off-by: Aleksandr Nogikh <nogikh@...gle.com>
> ---
>  net/sched/sch_netem.c | 15 ++++++++++++---
>  1 file changed, 12 insertions(+), 3 deletions(-)
>
> diff --git a/net/sched/sch_netem.c b/net/sched/sch_netem.c
> index 84f82771cdf5..d8b0bf1b5346 100644
> --- a/net/sched/sch_netem.c
> +++ b/net/sched/sch_netem.c
> @@ -315,7 +315,7 @@ static bool loss_event(struct netem_sched_data *q)
>   * std deviation sigma. Uses table lookup to approximate the desired
>   * distribution, and a uniformly-distributed pseudo-random source.
>   */
> -static s64 tabledist(s64 mu, s32 sigma,
> +static s64 tabledist(s64 mu, u32 sigma,
>  		     struct crndstate *state,
>  		     const struct disttable *dist)
>  {
> @@ -329,8 +329,14 @@ static s64 tabledist(s64 mu, s32 sigma,
>  	rnd = get_crandom(state);
>
>  	/* default uniform distribution */
> -	if (dist == NULL)
> +	if (!dist) {
> +		/* Sigma is too big to perform 32 bit division.
> +		 * Use the widest possible deviation.
> +		 */
> +		if ((u64)sigma * 2ULL >= U32_MAX)

Or simply

	if (sigma >= U32_MAX / 2)

> +			return mu + rnd - U32_MAX / 2;

Since only syzbot can possibly use these silly parameters, what about
capping the parameters in the control path, when netem is set up or
changed, instead of adding a test in the fast path?
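For illustration only (not part of the thread or of the kernel patch), a
minimal userspace C sketch of the arithmetic the commit message describes:
with sigma = 0x80000000 the 32-bit product 2 * sigma wraps to 0, so the
modulus tabledist() takes would become a division by zero. The rnd constant
below is a stand-in for get_crandom() output and mu is assumed to be 0.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t sigma = 0x02000000u << 6;	/* 0x80000000 after the 2^6 scaling */
	uint32_t rnd   = 0x12345678u;		/* stand-in for get_crandom() output */
	uint32_t span  = 2 * sigma;		/* wraps to 0 in 32-bit arithmetic */

	printf("sigma = %#x, 2 * sigma = %#x\n", sigma, span);

	if (span == 0)
		printf("rnd %% (2 * sigma) would divide by zero\n");
	else
		printf("uniform sample around mu = 0: %lld\n",
		       (long long)(rnd % span) - sigma);

	return 0;
}

The guard in the patch, (u64)sigma * 2ULL >= U32_MAX, and Eric's simpler
form, sigma >= U32_MAX / 2, both reject exactly the sigma values for which
span above would wrap to zero.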