Message-ID: <be35a6ae-ec41-ef6f-9244-44f061376949@juniper.net>
Date: Thu, 7 Apr 2022 21:10:11 -0400
From: Erin MacNeil <emacneil@...iper.net>
To: Eric Dumazet <eric.dumazet@...il.com>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>
Subject: Re: TCP stack gets into state of continually advertising “silly window” size of 1
On 2022-04-07 4:31 p.m., Eric Dumazet wrote:
>
> On 4/7/22 10:57, Erin MacNeil wrote:
>>
>>> On 4/6/22 10:40, Eric Dumazet wrote:
>>>> On 4/6/22 07:19, Erin MacNeil wrote:
>>>> This issue has been observed with the 4.8.28 kernel; I am wondering
>>>> whether it may be a known issue with an available fix.
>>>>
...
>>
>>> Presumably 16k buffers with an MTU of 9000 is not correct.
>>>
>>> The kernel has some logic to enforce a minimum value, based on
>>> standard MTU sizes.
>>>
>>>
>>> Have you tried not using setsockopt() SO_RCVBUF & SO_SNDBUF?
>> Yes, a temporary workaround for the issue is to increase the value of
>> SO_SNDBUF, which reduces the likelihood of device A’s receive window
>> dropping to 0 and hence of device B sending problematic TCP window probes.
>>
>
> Not sure how 'temporary' it is.
>
> For ABI reasons, and because setsockopt() can be performed _before_
> connect() or accept() is done (thus before the MTU is known), we
> cannot increase the buffers once the MTU is known, as that might
> break applications expecting getsockopt() to return a stable value
> (if a prior setsockopt() has set one).
>
> If we chose to increase minimal limits, I think some users might complain.
>
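Fair enough. To make sure I am reading that right, the behaviour would
look roughly like the sketch below (the 16k figure is just what our
application requests; the doubling to 32768 is the documented
SO_SNDBUF bookkeeping from socket(7), not something I have re-verified
on this kernel):

    #include <stdio.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        int val = 16 * 1024;            /* what the application asks for */
        socklen_t len = sizeof(val);

        /* set before connect()/accept(), i.e. before the 9000-byte
         * path MTU can be known to the kernel */
        setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &val, sizeof(val));

        getsockopt(fd, SOL_SOCKET, SO_SNDBUF, &val, &len);
        /* prints 32768: the kernel doubles the requested value
         * (socket(7)) and, per the above, does not raise it again
         * once the MTU is known */
        printf("SO_SNDBUF = %d\n", val);
        return 0;
    }
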
Is this not a TCP bug though? The stream was initially working “ok”
until the window closed. There is no data in the socket queue, so
shouldn't the window re-open to where it had been?
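
For what it is worth, this is one way to line those two observations up
on a live connection: ss(8) to confirm the receive queue on device A
really is empty, and a tcpdump filter on the raw TCP window field to
see the size-0/size-1 advertisements (the interface name is just an
example, and the raw field is pre-scaling on non-SYN segments):

    # on device A: Recv-Q should show 0 for the affected connection
    ss -tn

    # segments advertising a raw window of 0 or 1
    tcpdump -nn -i eth0 'tcp[14:2] <= 1'
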
Thanks
-Erin