Message-ID: <1509720364.2849.56.camel@edumazet-glaptop3.roam.corp.google.com>
Date: Fri, 03 Nov 2017 07:46:04 -0700
From: Eric Dumazet <eric.dumazet@...il.com>
To: Dave Jones <davej@...emonkey.org.uk>
Cc: netdev@...r.kernel.org
Subject: Re: [4.14rc6] __tcp_select_window divide by zero.
On Fri, 2017-11-03 at 09:37 -0400, Dave Jones wrote:
> On Tue, Oct 24, 2017 at 09:00:30AM -0400, Dave Jones wrote:
> > divide error: 0000 [#1] SMP KASAN
> > CPU: 0 PID: 31140 Comm: trinity-c12 Not tainted 4.14.0-rc6-think+ #1
> > RIP: 0010:__tcp_select_window+0x21f/0x400
> > Call Trace:
> > tcp_cleanup_rbuf+0x27d/0x2a0
> > tcp_recvmsg+0x7a9/0x1430
> > inet_recvmsg+0x10b/0x360
> > sock_read_iter+0x19d/0x240
> > do_iter_readv_writev+0x2e4/0x320
> > do_iter_read+0x149/0x280
> > vfs_readv+0x107/0x180
> > do_readv+0xc0/0x1b0
> > do_syscall_64+0x182/0x400
> > entry_SYSCALL64_slow_path+0x25/0x25
> > Code: 41 5e 41 5f c3 48 8d bb 48 09 00 00 e8 4b 2b 30 ff 8b 83 48 09 00 00 89 ea 44 29 f2 39 c2 7d 08 39 c5 0f 8d 86 01 00 00 89 e8 99 <41> f7 fe 89 e8 29 d0 eb 8c 41 f7 df 48 89 c7 44 89 f9 d3 fd e8
> > RIP: __tcp_select_window+0x21f/0x400 RSP: ffff8803df54f418
> >
> >
> > if (window <= free_space - mss || window > free_space)
> > window = rounddown(free_space, mss);
>
> I'm still hitting this fairly often, so I threw in a debug patch, and
> when this happens..
>
> [53182.361210] window: 0 free_space: 0 mss: 0
>
> Any suggestions on what we should default the window size to be in
> this situation to avoid the rounddown ?
Last time we had to deal with such an issue, we fixed the root cause.
https://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next.git/commit/?id=2dda640040876cd8ae646408b69eea40c24f9ae9
If __tcp_select_window() is called while mss is 0, then we have a bug
elsewhere.
We want to keep the crash here so that we can fix the root cause.
If we work around the bug here, we will still have fundamental issues.
Do you have a C repro?
You might get one with syzkaller instead of trinity.
Thanks