Message-ID: <1e41a3230903092055q2317e0cas3721d18fb4cef062@mail.gmail.com>
Date:	Mon, 9 Mar 2009 20:55:15 -0700
From:	John Heffner <johnwheffner@...il.com>
To:	Rick Jones <rick.jones2@...com>
Cc:	David Miller <davem@...emloft.net>, md@....sk,
	netdev@...r.kernel.org
Subject: Re: TCP rx window autotuning harmful at LAN context

On Mon, Mar 9, 2009 at 5:34 PM, Rick Jones <rick.jones2@...com> wrote:
> If I recall correctly, when I asked about this behaviour in the past, I
> was told that the autotuning receiver would always try to offer the sender
> 2X what the receiver thought the sender's cwnd happened to be.  Is my
> recollection incorrect, or is this then:
>
> [root@...855 ~]# netperf -t omni -H sut42 -- -k foo -s 128K
> OMNI TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to sut42.west (10.208.0.45)
> port 0 AF_INET
> THROUGHPUT=941.30
> LSS_SIZE_REQ=131072
> LSS_SIZE=262142
> LSS_SIZE_END=262142
> RSR_SIZE_REQ=-1
> RSR_SIZE=87380
> RSR_SIZE_END=3900000
>
> not intended behaviour?  LSS == Local Socket Send; RSR == Remote Socket
> Receive.  dl5855 is running RHEL 5.2 (2.6.18-92.el5); sut42 is running an
> nf-next-2.6 tree from about two or three weeks ago, with some of the
> 32-core scaling patches applied (2.6.29-rc5-nfnextconntrack).
>
> I'm assuming that setting SO_SNDBUF on the netperf (sending) side to
> 128K/256K limits what it will ever put out onto the connection at one
> time, but by the end of the 10-second test over the local GbE LAN the
> receiver's autotuned SO_RCVBUF has grown to 3900000.
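
A quick way to watch that growth from the receiver side, independent of
netperf's end-of-test report, is to poll SO_RCVBUF while draining the
socket.  Below is a minimal sketch assuming a Linux receiver; the port
number and read size are arbitrary placeholders, and error handling is
omitted for brevity.

/* Drain one TCP connection and print the (autotuned) SO_RCVBUF
 * value roughly once per second. */
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
	int lfd = socket(AF_INET, SOCK_STREAM, 0);
	struct sockaddr_in addr;

	memset(&addr, 0, sizeof(addr));
	addr.sin_family = AF_INET;
	addr.sin_addr.s_addr = htonl(INADDR_ANY);
	addr.sin_port = htons(5001);		/* placeholder port */
	bind(lfd, (struct sockaddr *)&addr, sizeof(addr));
	listen(lfd, 1);

	int fd = accept(lfd, NULL, NULL);
	char buf[65536];
	time_t last = time(NULL);
	ssize_t n;

	while ((n = read(fd, buf, sizeof(buf))) > 0) {
		if (time(NULL) != last) {
			int rcvbuf;
			socklen_t len = sizeof(rcvbuf);

			getsockopt(fd, SOL_SOCKET, SO_RCVBUF,
				   &rcvbuf, &len);
			printf("SO_RCVBUF = %d\n", rcvbuf);
			last = time(NULL);
		}
	}
	return 0;
}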


Hi Rick,

(Pretty sure we went over this already, but once more...)  The receiver
does not size its buffer to twice cwnd.  It sizes it to twice the amount
of data that the application read in one RTT.  In the common case of a
path bottleneck and a receiving application that always keeps up, this
equals 2*cwnd, but the distinction is very important to understanding
its behavior in other cases.
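
In outline, the heuristic looks something like the sketch below.  This
is a paraphrase of the idea, not the kernel's actual
tcp_rcv_space_adjust(), which adds smoothing and clamps the result
against the tcp_rmem sysctl limits.

struct drs_state {
	unsigned long copied;	  /* bytes the app read this RTT */
	unsigned long space;	  /* bytes read in the previous RTT */
	unsigned long rcvbuf;	  /* current receive buffer target */
	unsigned long rcvbuf_max; /* cap, i.e. tcp_rmem[2] */
};

/* Called once per measured round-trip time. */
static void rcv_space_adjust(struct drs_state *s)
{
	/* Size to twice what the application consumed in one RTT,
	 * not to twice the sender's cwnd. */
	unsigned long want = 2 * s->copied;

	if (want > s->space) {
		s->space = want;
		if (want > s->rcvbuf)
			s->rcvbuf = want < s->rcvbuf_max ?
				    want : s->rcvbuf_max;
	}
	s->copied = 0;		/* start the next RTT's sample */
}

With an application that falls behind, the bytes copied per RTT stay
small and the buffer stops growing; that is exactly where sizing to
consumption and sizing to cwnd diverge.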

In your test, where you limit the sndbuf to 256k, you will find that
you did not fill up the bottleneck queues and did not get a
significantly increased RTT; those are the negative effects we want to
avoid.  The large receive window caused no trouble at all.
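
One way to verify the RTT part of that on Linux is to query TCP_INFO
on the sending socket during the test.  TCP_INFO and the tcpi_rtt
field are standard; the helper below is just hypothetical glue around
them.

/* Print the kernel's smoothed RTT estimate (microseconds) for a
 * connected TCP socket. */
#include <stdio.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

void print_rtt(int fd)
{
	struct tcp_info info;
	socklen_t len = sizeof(info);

	if (getsockopt(fd, IPPROTO_TCP, TCP_INFO, &info, &len) == 0)
		printf("srtt = %u us, rttvar = %u us\n",
		       info.tcpi_rtt, info.tcpi_rttvar);
}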

  -John