Message-ID: <58CACBC9.40900@5t9.de>
Date: Thu, 16 Mar 2017 18:30:49 +0100
From: Lutz Vieweg <lvml@....de>
To: Neal Cardwell <ncardwell@...gle.com>
CC: Willy Tarreau <w@....eu>, David Miller <davem@...emloft.net>,
Soheil Hassas Yeganeh <soheil.kdev@...il.com>,
Netdev <netdev@...r.kernel.org>,
Soheil Hassas Yeganeh <soheil@...gle.com>,
Eric Dumazet <edumazet@...gle.com>,
Yuchung Cheng <ycheng@...gle.com>,
Florian Westphal <fw@...len.de>
Subject: Re: [PATCH net-next 1/2] tcp: remove per-destination timestamp cache
On 03/16/2017 04:40 PM, Neal Cardwell wrote:
>> I currently wonder: What is the correct advice to an operator who needs
>> to run one server instance that is meant to accept thousands of new,
>> short-lived TCP connections per minute?
>
> Note that for this to be a problem there would have to be thousands of
> new, short-lived TCP connections per minute from a single source IP
> address to a single destination IP address. Normal client software
> should not be doing this. AFAIK this is pretty rare, unless someone is
> running a load test or has an overly-aggressive monitoring system.
Indeed, I have meanwhile found that a load/regression test scenario
was the rationale for the tcp_tw_recycle = 1 setting: when a
recorded log of hundreds of thousands of connections (each placing
one or a few requests) was replayed, the replay failed due to an
excessive number of connections in TIME_WAIT state.
Do I understand correctly that "tcp_tw_recycle = 1" is fine
in such a scenario, since one can be sure that both client and
server are at fixed, non-NATed IP addresses?
I wonder whether it would be possible to limit the effect of
"tcp_tw_recycle = 1" to a certain address range or listen-port range?
If not, I guess our best option at this time is to advise
enabling "tcp_tw_recycle = 1" only while explicitly performing
local load/regression tests, and disabling it otherwise.
(This, however, means that running both automated continuous
integration tests and any services for remote clients on the same
system would not mix well, as the setting could be "right" for only
one of them.)
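For such a setup, the test harness itself could enable the setting
only for the duration of a run. A minimal C sketch (my own
illustration, not from this thread), assuming the usual procfs path
on Linux and root privileges:

  #include <stdio.h>

  /* Write net.ipv4.tcp_tw_recycle via procfs (requires root).
   * Call set_tw_recycle(1) before a local load/regression test run
   * and set_tw_recycle(0) afterwards, so the setting never stays
   * active while serving remote (possibly NATed) clients. */
  static int set_tw_recycle(int on)
  {
      FILE *f = fopen("/proc/sys/net/ipv4/tcp_tw_recycle", "w");
      if (!f)
          return -1;
      if (fprintf(f, "%d\n", on) < 0) {
          fclose(f);
          return -1;
      }
      return fclose(f) == 0 ? 0 : -1;
  }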
> (1) use longer connections from the client side
Sure, in cases where that is under our control, we do exactly that.
> (2) have the client do the close(), so the client is the side to carry the
> TIME_WAIT state
In the load/regression test scenario, we are both server and client,
so I guess this would not help.
> (3) have the server use SO_LINGER with a timeout of 0, so that
> the connection is closed with a RST and the server carries no
> TIME_WAIT state
Potentially losing the end of a conversation is not really
an option for most protocols and use cases.
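For reference, the abortive close from (3) would look roughly like
this (a minimal sketch of my own, error handling aside):

  #include <sys/socket.h>
  #include <unistd.h>

  /* Close a connected socket abortively: with l_onoff = 1 and
   * l_linger = 0, close() discards any unsent data and sends a RST
   * instead of a FIN, so this side enters no TIME_WAIT state;
   * that is exactly why data still in flight can be lost. */
  static int abortive_close(int fd)
  {
      struct linger lin = { .l_onoff = 1, .l_linger = 0 };

      if (setsockopt(fd, SOL_SOCKET, SO_LINGER, &lin, sizeof(lin)) < 0)
          return -1;
      return close(fd);
  }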
Regards,
Lutz Vieweg