Message-ID: <CAHjP37H-bwDTkynV_pxWtaAF45VQV2mTfykAsYLigVz29_Zn4A@mail.gmail.com>
Date: Wed, 8 Nov 2017 11:04:14 -0500
From: Vitaly Davidovich <vitalyd@...il.com>
To: Eric Dumazet <eric.dumazet@...il.com>
Cc: netdev <netdev@...r.kernel.org>
Subject: Re: TCP connection closed without FIN or RST
So this issue is somehow related to setting SO_RCVBUF *after*
connecting the socket (from the client). The system is configured
such that the default rcvbuf size is 1MB, but the code was shrinking
this down to 75KB right after connect(). I think that explains why
the window size advertised by the client was much larger than
expected. I see that the kernel does not want to shrink the
previously advertised window without advancement in the sequence
space. So my guess is that the client runs out of buffer and starts
dropping packets. Not sure how to further debug this from userspace
(systemtap? bpf?) - any tips on that front would be appreciated.
Thanks again for the help.
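
For concreteness, here is a minimal C sketch of the client-side pattern
described above; the server address/port and the exact 75KB value are
placeholders for illustration, not the real application's values:

/* Minimal sketch of setting SO_RCVBUF *after* connect(), as described
 * above. Server address/port and the 75KB figure are illustrative only. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(12345);                     /* hypothetical port */
    inet_pton(AF_INET, "192.0.2.1", &addr.sin_addr);  /* hypothetical server */

    /* At connect() time the socket still has the system default rcvbuf
     * (1MB in the setup described above), so the window scale and the
     * initially advertised window are chosen based on that size. */
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("connect");
        return 1;
    }

    /* Shrinking SO_RCVBUF only after connect(): the handshake has already
     * advertised a window sized for the 1MB default, and the kernel will
     * not shrink a previously advertised window, so the peer may send far
     * more data than the 75KB buffer can actually hold. */
    int rcvbuf = 75 * 1024;
    if (setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &rcvbuf, sizeof(rcvbuf)) < 0)
        perror("setsockopt(SO_RCVBUF)");

    /* The kernel doubles the value passed in to leave room for overhead;
     * getsockopt() reports the doubled value. */
    socklen_t len = sizeof(rcvbuf);
    getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &rcvbuf, &len);
    printf("effective SO_RCVBUF: %d\n", rcvbuf);

    close(fd);
    return 0;
}

If the setsockopt() were moved before connect(), the window scale and
initial window would instead be negotiated against the smaller buffer.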
On Fri, Nov 3, 2017 at 5:33 PM, Eric Dumazet <eric.dumazet@...il.com> wrote:
> On Fri, 2017-11-03 at 14:28 -0400, Vitaly Davidovich wrote:
>
>> So Eric, while I still have your interest here (although I know it's
>> waning :)), any code pointers to where I might look to see if a
>> specific small-ish rcv buf size may interact poorly with the rest of
>> the stack? Is it possible some buffer was starved in the client stack
>> which prevented it from sending any segments to the server? Maybe the
>> incoming retrans were actually dropped somewhere in the ingress pkt
>> processing and so the stack doesn't know it needs to react to
>> something? Grasping at straws here, but clearly the recv buf size, and
>> a somewhat small one at that, plays a role.
>>
>> I checked dmesg (just in case something would pop up there) but didn't
>> observe any warnings or anything interesting.
>
> I believe you could reproduce the issue with packetdrill.
>
> If you can provide a packetdrill file demonstrating the issue, that
> would be awesome ;)
>
>
>