Date: Fri, 28 Sep 2012 14:33:07 +0800
From: Cong Wang <amwang@...hat.com>
To: Neil Horman <nhorman@...driver.com>
Cc: netdev@...r.kernel.org, "David S. Miller" <davem@...emloft.net>, Alexey Kuznetsov <kuznet@....inr.ac.ru>, Patrick McHardy <kaber@...sh.net>, Eric Dumazet <edumazet@...gle.com>
Subject: Re: [RFC PATCH net-next] tcp: introduce tcp_tw_interval to specifiy the time of TIME-WAIT

On Thu, 2012-09-27 at 10:23 -0400, Neil Horman wrote:
> On Thu, Sep 27, 2012 at 04:41:01PM +0800, Cong Wang wrote:
> > A customer has requested this feature, stating:
> >
> > "This parameter is necessary, especially for software that continually
> > creates many ephemeral processes which open sockets, to avoid socket
> > exhaustion. In many cases, the risk of exhaustion can be reduced by
> > tuning the reuse interval so that sockets become reusable earlier.
> >
> > In commercial Unix systems, parameters of this kind, such as
> > tcp_timewait in AIX and tcp_time_wait_interval in HP-UX, have
> > long been available. Their implementations allow users to tune
> > how long a TCP connection is kept in the TIME-WAIT state, on a
> > millisecond time scale."
> >
> > We do have "tcp_tw_reuse" and "tcp_tw_recycle", but those tunables
> > are not equivalent: they cannot be tuned directly on a time scale,
> > nor safely, since some combinations of settings can still cause
> > problems behind NAT. I also think second granularity is enough; we
> > don't need to support a millisecond time scale.
> >
> I have some difficulty seeing how this does anything other than
> pay lip service to actually having sockets spend time in TIME_WAIT state.
> That is to say, I can see users using this just to make the pain stop. If
> we wait less time than it takes to be sure that a connection isn't being
> reused (either by waiting two segment lifetimes, or by checking
> timestamps), then you might as well not wait at all. I see how it's
> tempting to be able to say "Just don't wait as long", but there seems to
> be no difference between waiting half as long as the RFC mandates and
> waiting no time at all. Neither is a good idea.

I don't think reducing TIME_WAIT is a good idea either, but there must be
some reason behind it, since several UNIX variants provide a
millisecond-scale tuning interface. Or maybe, in non-recycle mode, their
RTO is much less than 2*MSL?

> Given the problem you're trying to solve here, I'll ask the standard
> question in response: how does using SO_REUSEADDR not solve the problem?
> Alternatively, in a pinch, why not reduce tcp_max_tw_buckets sufficiently
> to start forcing TIME_WAIT sockets back into the CLOSED state?
>
> The code looks fine, but the idea really doesn't seem like a good plan
> to me. I'm sure HP-UX/Solaris/AIX/etc. have done this in response to
> customer demand, but that doesn't make it the right solution.

*I think* the customer doesn't want to modify their applications, which is
why they don't use SO_REUSEADDR. I didn't know that tcp_max_tw_buckets
could do the trick, nor did the customer. So this is a side effect of
tcp_max_tw_buckets? Is it documented?

Thanks.

--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
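[Editor's note: for context on the SO_REUSEADDR alternative Neil raises above, here is a minimal sketch of what "modifying the application" would entail. Setting the option before bind() lets a new socket bind to a local address that still has connections lingering in TIME-WAIT. The port number below is arbitrary, chosen only for illustration.]

    /* Minimal sketch: bind to an address that may still have
     * TIME-WAIT sockets, by setting SO_REUSEADDR before bind().
     * Port 8080 is a hypothetical example value. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <netinet/in.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return EXIT_FAILURE;
        }

        int one = 1;
        /* Must be set before bind(); it has no effect afterwards. */
        if (setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one)) < 0) {
            perror("setsockopt(SO_REUSEADDR)");
            close(fd);
            return EXIT_FAILURE;
        }

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(8080);

        if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            perror("bind");
            close(fd);
            return EXIT_FAILURE;
        }

        printf("bound with SO_REUSEADDR set\n");
        close(fd);
        return 0;
    }

[As for the tcp_max_tw_buckets "trick": that sysctl caps the number of TIME-WAIT sockets held system-wide, and once the cap is reached new sockets entering TIME-WAIT are destroyed immediately rather than held, which is what forces them back into CLOSED. It is described in Documentation/networking/ip-sysctl.txt in the kernel tree.]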