Date:	Mon, 9 Mar 2009 21:05:05 +0100
From:	Marian Ďurkovič <md@....sk>
To:	netdev@...r.kernel.org
Subject: Re: TCP rx window autotuning harmful at LAN context

On Mon, 9 Mar 2009 11:01:52 -0700, John Heffner wrote
> On Mon, Mar 9, 2009 at 4:25 AM, Marian Ďurkovič <md@....sk> wrote:
> > As rx window autotuning is enabled in all recent kernels, and with 1 GB
> > of RAM the maximum tcp_rmem becomes 4 MB, this problem is spreading
> > rapidly and we believe it needs urgent attention. As demonstrated above,
> > such a huge rx window (which is at least 100*BDP in the example above)
> > does not deliver any performance gain; instead it seriously harms other
> > hosts and/or applications. It should also be noted that a host with
> > autotuning enabled steals an unfair share of the total available
> > bandwidth, which might look like a "better" performing TCP stack at
> > first sight - however, such behaviour is not appropriate (RFC 2914,
> > section 3.2).
>
> It's well known that "standard" TCP fills all available drop-tail
> buffers, and that this behavior is not desirable.

Well, in practice that was always limited by the receive window size, which
defaulted to 64 kB on most operating systems. So this undesirable behavior
was confined to hosts where the receive window had been manually increased
to huge values.

Today, the real effect of autotuning is the same as setting the receive
window size to 4 MB on *all* hosts, since there is no mechanism to prevent
it from growing the window to the maximum even on low-RTT paths.
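
To put rough numbers on this (assuming a typical switched gigabit LAN with
a sub-millisecond RTT; these figures are illustrative, not from the report
quoted above):

    bandwidth = 1 Gbit/s = 125 MB/s
    RTT       = 0.3 ms
    BDP       = 125 MB/s * 0.0003 s ~= 37.5 kB

    4 MB window / 37.5 kB BDP ~= 107 * BDP

Anything beyond roughly one BDP of outstanding data cannot be "in flight"
on such a path - it can only sit in switch or driver queues, inflating the
RTT for every other host and application sharing them.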

> The situation you describe is exactly what congestion control (the
> topic of RFC2914) should fix.  It is not the role of receive window
> (flow control).  It is really the sender's job to detect and react to
> this, not the receiver's.  (We have had this discussion before on
> netdev.)

It's not of great importance whose job it is in pure theory. What matters
is that autotuning introduced a serious problem in the LAN context by
removing any possibility of reacting properly to increasing RTT. Again,
it's not important whether this functionality existed by design or by
coincidence - it kept the system well-balanced for many years.

Now that autotuning is enabled by default in stock kernels, this problem is
spreading into LANs without users even knowing what's going on. Therefore
I'd like to suggest looking for a decent fix that could be implemented in a
relatively short time frame. My proposal is this:

- measure the RTT during the initial phase of the TCP connection (first X
  segments)
- compute the maximum receive window size from the measured RTT, using a
  configurable constant representing the bandwidth part of the BDP
- let autotuning do its work up to that limit (see the sketch below).
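
A minimal user-space sketch of the idea (not actual kernel code; the names
rwnd_cap_bw, RWND_SAMPLE_SEGS and struct conn are made up for illustration,
and the sampling constant X is left as a tunable):

    #include <stdint.h>

    #define RWND_SAMPLE_SEGS 10            /* X: segments used for the RTT estimate */

    /* Configurable constant: bandwidth part of the BDP, in bytes/sec
     * (here 1 Gbit/s = 125 MB/s). */
    static uint64_t rwnd_cap_bw = 125000000;

    struct conn {
        uint32_t segs_seen;     /* segments received so far               */
        uint32_t rtt_min_us;    /* lowest RTT sample in the initial phase */
        uint32_t rwin_max;      /* computed cap for autotuning, in bytes  */
    };

    /* Called once per RTT sample during the initial phase of the
     * connection; after RWND_SAMPLE_SEGS segments the cap is frozen. */
    void rwin_update(struct conn *c, uint32_t rtt_us, uint32_t tcp_rmem_max)
    {
        if (c->segs_seen++ >= RWND_SAMPLE_SEGS)
            return;

        if (c->rtt_min_us == 0 || rtt_us < c->rtt_min_us)
            c->rtt_min_us = rtt_us;

        /* rwin_max = bandwidth * RTT, clamped to the existing sysctl
         * maximum (the third value of net.ipv4.tcp_rmem). */
        uint64_t bdp = rwnd_cap_bw * c->rtt_min_us / 1000000;
        c->rwin_max = bdp < tcp_rmem_max ? (uint32_t)bdp : tcp_rmem_max;
    }

    /* Autotuning then grows the advertised window only up to
     * c->rwin_max instead of all the way to tcp_rmem_max. */

With the default constant above and a 0.3 ms LAN RTT this would cap the
window near 37.5 kB instead of 4 MB, while long-RTT paths would still get
the full autotuned window.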

  With kind regards,

        M. 
