Message-ID: <1410872966.7106.187.camel@edumazet-glaptop2.roam.corp.google.com>
Date: Tue, 16 Sep 2014 06:09:26 -0700
From: Eric Dumazet <eric.dumazet@...il.com>
To: Andrey Dmitrov <andrey.dmitrov@...etlabs.ru>
Cc: Hannes Frederic Sowa <hannes@...essinduktion.org>,
netdev@...r.kernel.org,
"Alexandra N. Kossovsky" <Alexandra.Kossovsky@...etlabs.ru>,
Konstantin Ushakov <kostik@...etlabs.ru>
Subject: Re: TCP connection will hang in FIN_WAIT1 after closing if zero
window is advertised
On Tue, 2014-09-16 at 16:47 +0400, Andrey Dmitrov wrote:
> On 16/09/14 03:15, Hannes Frederic Sowa wrote:
> > Also thanks for the report.
> >
> > Do you see any tcp window repair messages in dmesg? Can you send some
> > output of ss -oemit state FIN-WAIT-1 from the target host?
> Hannes,
> no, there aren't any messages in dmesg until net.ipv4.tcp_max_orphans is
> reached.
Andrey, you should take a look at the LaBrea Tarpit,
http://www.sans.org/reading-room/whitepapers/casestudies/smart-ids-hybrid-labrea-tarpit-33254
What happens is the following:
A normal TCP session is established, and traffic is sent from the server
to the client.
The client then advertises a zero window.
1) This can be normal, because the application on the client can no
longer read from its receive queue. (For example, it is an ssh session,
and output to the terminal is blocked by Ctrl-S.) There are valid cases
where the window stays at zero for many hours.
2) This can be faked by a malicious peer that wants to make the server
enter this mode (inability to send more data, data stuck in the output
queue, one probe sent every RTO). This is a very well known way to make
servers consume a lot of kernel memory and eventually OOM.
The server then sends a zero-window probe every RTO, and the client
responds with an ACK with win=0.
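
(For reference, a minimal, untested sketch of how a peer can force this
state: shrink the receive buffer, connect, and never read. The test
server on 127.0.0.1:8080 streaming data to us is hypothetical.)

/* zwin.c: hold a peer in the zero-window probe cycle. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
	struct sockaddr_in addr;
	int rcvbuf = 4096;	/* tiny receive buffer, fills quickly */
	int fd = socket(AF_INET, SOCK_STREAM, 0);

	/* Must be set before connect() to clamp the advertised window. */
	setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &rcvbuf, sizeof(rcvbuf));

	memset(&addr, 0, sizeof(addr));
	addr.sin_family = AF_INET;
	addr.sin_port = htons(8080);
	inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);
	if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
		perror("connect");
		return 1;
	}

	/* Never read: the receive queue fills, the stack advertises
	 * win=0, and the server falls into sending one probe per RTO.
	 * If the server close()s now, it is stuck in FIN_WAIT1. */
	pause();
	return 0;
}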
The TCP specs say this can last forever, even if the socket is
eventually closed by the server (because it gave up) and enters FIN_WAIT1.
Ideally, if a server is about to give up, it could tell the TCP stack:
do not bother sending the remaining bytes in the output queue (I, the
application, have already waited a very reasonable time).
Normally SO_LINGER or TCP_USER_TIMEOUT could be used for this. Either
requires a setsockopt() call before doing the close().
1) TCP_USER_TIMEOUT would be the right fit for this, but its current
implementation does not take the probes into account, even in FIN_WAIT1
state when in this zero-window mode. A patch would be needed.
2) SO_LINGER with timeout=0 might work. Both options are sketched below.
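
Rough, untested sketch of both options on a connected socket fd (the
30 second value is arbitrary; TCP_USER_TIMEOUT needs Linux >= 2.6.37):

#include <netinet/in.h>
#include <netinet/tcp.h>	/* TCP_USER_TIMEOUT */
#include <sys/socket.h>
#include <unistd.h>

/* Option 1: bound the time unacked data may sit in the output queue.
 * As noted above, the current implementation does not apply this to
 * zero-window probes in FIN_WAIT1, so it only helps once patched. */
static void close_with_user_timeout(int fd)
{
	unsigned int ms = 30000;	/* arbitrary 30 s budget */

	setsockopt(fd, IPPROTO_TCP, TCP_USER_TIMEOUT, &ms, sizeof(ms));
	close(fd);
}

/* Option 2: linger timeout 0 makes close() drop the output queue and
 * send a RST, skipping FIN_WAIT1 entirely. */
static void close_with_reset(int fd)
{
	struct linger lg = { .l_onoff = 1, .l_linger = 0 };

	setsockopt(fd, SOL_SOCKET, SO_LINGER, &lg, sizeof(lg));
	close(fd);
}

The RST means the peer loses whatever data it had not consumed, which
is usually acceptable once it has been unresponsive for that long.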