Message-ID: <CADVnQynojNNJNszX5pV5EsOt+-pKnbH-Z4uEuJUyRX2aCPg8gQ@mail.gmail.com>
Date:   Mon, 8 Jul 2019 16:11:20 -0400
From:   Neal Cardwell <ncardwell@...gle.com>
To:     Carlo Wood <carlo@...noe.com>
Cc:     David Miller <davem@...emloft.net>, Netdev <netdev@...r.kernel.org>
Subject: Re: Kernel BUG: epoll_wait() (and epoll_pwait) stall for 206 ms per
 call on sockets with a small-ish snd/rcv buffer.

On Sat, Jul 6, 2019 at 2:19 PM Carlo Wood <carlo@...noe.com> wrote:
>
> While investigating this further, I read on
> http://www.masterraghu.com/subjects/np/introduction/unix_network_programming_v1.3/ch07lev1sec5.html
> under "SO_RCVBUF and SO_SNDBUF Socket Options":
>
>     When setting the size of the TCP socket receive buffer, the
>     ordering of the function calls is important. This is because of
>     TCP's window scale option (Section 2.6), which is exchanged with
>     the peer on the SYN segments when the connection is established.
>     For a client, this means the SO_RCVBUF socket option must be set
>     before calling connect. For a server, this means the socket option
>     must be set for the listening socket before calling listen. Setting
>     this option for the connected socket will have no effect whatsoever
>     on the possible window scale option because accept does not return
>     with the connected socket until TCP's three-way handshake is
>     complete. That is why this option must be set for the listening
>     socket. (The sizes of the socket buffers are always inherited from
>     the listening socket by the newly created connected socket: pp.
>     462–463 of TCPv2.)
>
> As mentioned in a previous post, I had already discovered about
> needing to set the socket buffers before connect, but I didn't know
> about setting them before the call to listen() in order to get the
> buffer sizes inherited by the accepted sockets.
>
> After fixing this in my test program, all problems disappeared when
> keeping the send and receive buffers the same on both sides.
>
> However, when only setting the send and receive buffers on the client
> socket (not on the accepted or listening socket), epoll_wait() still
> stalls 43 ms per call when SO_SNDBUF is smaller than 33182 bytes.
>
> Here is the latest version of my test program:
>
> https://github.com/CarloWood/ai-evio-testsuite/blob/master/src/epoll_bug.c
>
> I have to retract most of my "bug" report; it might not really be
> a bug at all. Nevertheless, what remains strange is the fact that
> setting the socket buffer sizes on the accepted sockets can have
> such a crippling effect, while the quoted website states:
>
>     Setting this option for the connected socket will have no effect
>     whatsoever on the possible window scale option because accept does
>     not return with the connected socket until TCP's three-way
>     handshake is complete.
>
> And when the socket buffer sizes are set only on the client socket
> (which I use to send back received data, so it is now the sending
> side), why does epoll_wait() stall 43 ms per call when the
> receiving side is using the default (much larger) socket buffer sizes?
>
> That 43 ms is STILL crippling, slowing the transmission of the
> data to a trickle compared to what it should be.

Based on the magic numbers you cite, a guess is possible: this test
program seems to send traffic over a loopback device (presumably
MTU=65536), epoll_wait() stalls for 43 ms (slightly longer than the
typical Linux delayed ACK timer), and the problem only happens if
SO_SNDBUF is smaller than 33182 bytes (roughly half the MTU).

The guess would be that when you artificially make the SO_SNDBUF that
low, you are asking the kernel to allow your sending sockets to buffer
less than a single MTU of data. That means the packets the sender
sends are smaller than the MTU, so the receiver may tend to eventually
(after the initial quick ACKs expire) delay its ACKs in hopes of
receiving a full MSS of data (see __tcp_ack_snd_check()). And since
the SO_SNDBUF is so small in this case, the sender cannot write() or
transmit anything else until the receiver sends a delayed ACK for the
data ~40ms later, allowing the sending socket to free the previously
sent data and trigger the sender's next EPOLLOUT and write().

You could try grabbing a packet capture of the traffic over your
loopback device during the test to see if it matches that theory:
  tcpdump -i lo -s 100 -w /tmp/test.pcap

cheers,
neal
