Message-ID: <20140104082245.GA23837@1wt.eu>
Date: Sat, 4 Jan 2014 09:22:45 +0100
From: Willy Tarreau <w@....eu>
To: David Miller <davem@...emloft.net>
Cc: eric.dumazet@...il.com, netdev@...r.kernel.org
Subject: Re: [PATCH net-next] tcp: do not increase the rcv window when the FIN has been received

Hi David,

On Fri, Jan 03, 2014 at 07:58:10PM -0500, David Miller wrote:
> From: Willy Tarreau <w@....eu>
> Date: Thu, 2 Jan 2014 23:40:21 +0100
>
> > In HTTP performance tests it appeared that my client was always sending
> > an ACK immediately after receiving the FIN from the server and that the
> > sole purpose of this ACK was to advertise a larger window.
>
> I guess the question is what behavior do we want here.
>
> Frankly, I think we should always immediately ACK a FIN _unless_ we
> already have data pending on the send queue on which to piggyback that
> ACK.
>
> The reason is that since we know there will be no more data, delaying
> the ACK has none of its usual benefits. In fact, sending the ACK
> immediately will allow the closing side to release the data in its
> retransmit queue, and thus reclaim memory, more quickly.

Yes, but on the other hand, when we receive a FIN there is most often
an immediate local action: either data are already pending and will
carry the ACK, or the local endpoint will close right away. Currently,
if the application doesn't react fast enough, the ACK is emitted after
40 ms anyway, so a properly designed application has a 40 ms window in
which to react and save this pure ACK. Today this only works when the
data accompanying the FIN are less than 536 bytes; my point was to
extend the same opportunity to larger responses.
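
To make the trade-off concrete, here is a minimal userspace sketch of
the receiver-side decision (not the kernel code: the function and its
parameters are made up for illustration, and the 40 ms value is the
figure above, which I believe corresponds to TCP_DELACK_MIN on a
HZ=1000 kernel):

#include <stdbool.h>
#include <stdio.h>

#define DELACK_TIMEOUT_MS 40  /* delayed-ACK timeout, the 40 ms above */

/* Hypothetical decision once a FIN has been received: the ACK can
 * piggyback on already-queued data, piggyback on the application's own
 * close()/write() if it reacts within the delack window, or go out
 * alone as a pure window update when the delack timer fires. */
static const char *on_fin_received(bool data_pending,
                                   unsigned int app_reaction_ms)
{
    if (data_pending)
        return "ACK piggybacks on already-queued data";
    if (app_reaction_ms <= DELACK_TIMEOUT_MS)
        return "app reacted in time: ACK piggybacks, pure ACK saved";
    return "delack timer fired: pure window-update ACK is emitted";
}

int main(void)
{
    printf("%s\n", on_fin_received(true, 0));
    printf("%s\n", on_fin_received(false, 5));   /* fast application */
    printf("%s\n", on_fin_received(false, 100)); /* slow application */
    return 0;
}

Here the goal is simply to let responses larger than 536 bytes reach
the second case too, instead of triggering an immediate window-update
ACK.
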
> I'm not so sure about this change, so I'm marking it deferred.

OK, no problem.

Thanks,
Willy