Message-ID: <23c64fbdcdb14517a01afaecc370a914@ll.mit.edu>
Date: Tue, 16 Jul 2019 15:13:15 +0000
From: "Prout, Andrew - LLSC - MITLL" <aprout@...mit.edu>
To: Eric Dumazet <eric.dumazet@...il.com>,
Christoph Paasch <christoph.paasch@...il.com>
CC: "David S . Miller" <davem@...emloft.net>,
netdev <netdev@...r.kernel.org>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Jonathan Looney <jtl@...flix.com>,
Neal Cardwell <ncardwell@...gle.com>,
Tyler Hicks <tyhicks@...onical.com>,
Yuchung Cheng <ycheng@...gle.com>,
Bruce Curtis <brucec@...flix.com>,
Jonathan Lemon <jonathan.lemon@...il.com>,
"Dustin Marquess" <dmarquess@...le.com>
Subject: RE: [PATCH net 2/4] tcp: tcp_fragment() should apply sane memory
limits
On 7/11/2019 11:15 PM, Prout, Andrew - LLSC - MITLL wrote:
> I in no way intended to imply that I had confirmed the small SO_SNDBUF was a prerequisite to our problem. With my synthetic test, I was looking for a concise reproducer and trying things to stress the system.
I've looked into this some more: I am now able to reproduce it consistently (on 4.14.132) with my synthetic test, seemingly regardless of the SO_SNDBUF size (tested 128KiB, 512KiB, 1MiB and 10MiB). The key change to make it reproducible was to use sendfile() instead of send() - this is what samba was doing in our real failure case. Looking at the code, it appears that sendfile() calls into the zerocopy functions, which only check SO_SNDBUF for the memory they allocate themselves (the skb structure) - they'll happily "overfill" the send buffer with zerocopy'd pages.
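For reference, a rough sketch of the kind of synthetic test I'm describing - set a small SO_SNDBUF on a TCP socket, then push a large file through sendfile(). The destination address, port and file name below are just placeholders, not the exact harness I ran:

/*
 * Sketch of a sendfile()-based sender: request a small SO_SNDBUF,
 * connect, then push a large file through sendfile() so the zerocopy
 * path queues pages beyond the configured buffer size. Address, port
 * and file path are illustrative placeholders.
 */
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>
#include <errno.h>
#include <sys/socket.h>
#include <sys/sendfile.h>
#include <sys/stat.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(int argc, char **argv)
{
    const char *path = argc > 1 ? argv[1] : "bigfile.dat"; /* file to send */
    int sndbuf = 128 * 1024;                               /* one of the tested sizes */

    int fd = open(path, O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    int s = socket(AF_INET, SOCK_STREAM, 0);
    if (s < 0) { perror("socket"); return 1; }

    /* Request a small send buffer before connecting. */
    if (setsockopt(s, SOL_SOCKET, SO_SNDBUF, &sndbuf, sizeof(sndbuf)) < 0)
        perror("setsockopt(SO_SNDBUF)");

    struct sockaddr_in dst = { 0 };
    dst.sin_family = AF_INET;
    dst.sin_port = htons(5001);                     /* placeholder port */
    inet_pton(AF_INET, "192.0.2.1", &dst.sin_addr); /* placeholder peer */

    if (connect(s, (struct sockaddr *)&dst, sizeof(dst)) < 0) {
        perror("connect");
        return 1;
    }

    /* Push the whole file via sendfile(); a stall shows up as the
     * transfer making no further progress while data sits queued. */
    off_t off = 0;
    while (off < st.st_size) {
        ssize_t n = sendfile(s, fd, &off, st.st_size - off);
        if (n < 0) {
            if (errno == EINTR)
                continue;
            perror("sendfile");
            break;
        }
        if (n == 0)
            break;
    }

    close(s);
    close(fd);
    return 0;
}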
In my tests I can see the send-queue usage reported by netstat growing to values much larger than the SO_SNDBUF I set (with a 10MiB SO_SNDBUF, it got up to ~52MiB on a stalled connection). This of course easily causes the CVE-2019-11478 fix to trigger a false alarm and the TCP connection to stall.
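The same numbers netstat shows can also be read from inside the sending process; a small sketch along these lines (SIOCOUTQ reports the bytes still sitting in the socket send queue, and getsockopt() returns the kernel's doubled SO_SNDBUF value):

/*
 * Compare the effective SO_SNDBUF with the bytes currently queued on
 * the socket. "s" is assumed to be the connected TCP socket from the
 * sketch above.
 */
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <linux/sockios.h>   /* SIOCOUTQ */

static void report_sendq(int s)
{
    int sndbuf = 0;
    socklen_t len = sizeof(sndbuf);
    int queued = 0;

    /* Effective send buffer (the kernel doubles the requested value). */
    if (getsockopt(s, SOL_SOCKET, SO_SNDBUF, &sndbuf, &len) < 0)
        perror("getsockopt(SO_SNDBUF)");

    /* Bytes still queued (unsent/unacked) on the socket. */
    if (ioctl(s, SIOCOUTQ, &queued) < 0)
        perror("ioctl(SIOCOUTQ)");

    printf("SO_SNDBUF=%d bytes, send queue=%d bytes\n", sndbuf, queued);
}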