Date:	Wed, 9 Jul 2008 01:47:58 -0400
From:	Bill Fink <billfink@...dspring.com>
To:	Evgeniy Polyakov <johnpol@....mipt.ru>
Cc:	Stephen Hemminger <stephen.hemminger@...tta.com>,
	Roland Dreier <rdreier@...co.com>,
	David Miller <davem@...emloft.net>, aglo@...i.umich.edu,
	shemminger@...tta.com, netdev@...r.kernel.org, rees@...ch.edu,
	bfields@...ldses.org
Subject: Re: setsockopt()

On Wed, 9 Jul 2008, Evgeniy Polyakov wrote:

> On Tue, Jul 08, 2008 at 06:05:00PM -0400, Bill Fink (billfink@...dspring.com) wrote:
> > BTW I believe there is one other important difference between the way
> > the tcp_rmem/tcp_wmem autotuning parameters are handled versus the way
> > the rmem_max/wmem_max parameters are used when explicitly setting the
> > socket buffer sizes.  I believe the tcp_rmem/tcp_wmem autotuning maximum
> > parameters are hard limits, with the default maximum tcp_rmem setting
> > being ~170 KB and the default maximum tcp_wmem setting being 128 KB.
> 
> Maximum tcp_wmem depends on the amount of available RAM, but is at least 64k.
> Maybe Roland's distro just set a hard limit of 128k...

Are you sure you're not thinking about tcp_mem, which is a function
of available memory, or has this been changed in more recent kernels?
The 2.6.22.9 Documentation/networking/ip-sysctl.txt indicates:

tcp_wmem - vector of 3 INTEGERs: min, default, max
	...
	max: Maximal amount of memory allowed for automatically selected
	send buffers for TCP socket. This value does not override
	net.core.wmem_max, "static" selection via SO_SNDBUF does not use this.
	Default: 128K
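
To make the "static" selection concrete, here is a minimal sketch
(illustrative only, not from the original mail or the nuttcp source) of
explicitly sizing a send buffer with setsockopt().  On Linux the
requested value is capped at net.core.wmem_max and then doubled to
cover kernel bookkeeping overhead, and setting it disables send-buffer
autotuning for that socket:

#include <stdio.h>
#include <sys/socket.h>

int main(void)
{
	int sock = socket(AF_INET, SOCK_STREAM, 0);
	int requested = 1024 * 1024;	/* ask for a 1 MB send buffer */
	int actual;
	socklen_t len = sizeof(actual);

	if (sock < 0) {
		perror("socket");
		return 1;
	}

	/* Set before connect(); this locks the buffer size and turns
	 * off send-buffer autotuning for this socket. */
	if (setsockopt(sock, SOL_SOCKET, SO_SNDBUF,
		       &requested, sizeof(requested)) < 0)
		perror("setsockopt");

	/* Linux reports back the doubled (and possibly clamped) value. */
	if (getsockopt(sock, SOL_SOCKET, SO_SNDBUF, &actual, &len) == 0)
		printf("requested %d, got %d\n", requested, actual);
	return 0;
}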

I also ran a purely local 10-GigE nuttcp TCP test, with and without
autotuning (0.13 ms RTT).
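
(A rough sanity check, not in the original mail: the bandwidth-delay
product here is about 10^10 bits/sec x 0.00013 sec ~= 1.3 Mbits
~= 160 KB, the same order as the default autotuning maxima quoted
above.)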

Autotuning (standard 10-second TCP test):

# nuttcp 192.168.88.13
...
11818.0625 MB /  10.01 sec = 9906.0223 Mbps 100 %TX 72 %RX 0 retrans

Same test but with an explicitly specified 1 MB socket buffer:

# nuttcp -w1m 192.168.88.13
...
11818.0000 MB /  10.01 sec = 9902.0102 Mbps 99 %TX 71 %RX 0 retrans

The TCP autotuning worked great, with both tests achieving essentially
full 10-GigE line rate.  The autotuned test actually did slightly
better than the one with the explicitly specified 1 MB socket buffer,
although the difference may well be within the testing's margin of
error.
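
For anyone who wants to watch the autotuning at work, here is a
hypothetical helper (not part of nuttcp) that polls the effective
send-buffer size on a socket whose buffers were left at their defaults;
with autotuning active, the value reported by getsockopt(SO_SNDBUF)
grows as the transfer ramps up:

#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>

/* Poll the kernel's current idea of the send-buffer size once per
 * second on an already-connected socket (no SO_SNDBUF was set). */
static void watch_sndbuf(int sock, int seconds)
{
	int i;

	for (i = 0; i < seconds; i++) {
		int size;
		socklen_t len = sizeof(size);

		if (getsockopt(sock, SOL_SOCKET, SO_SNDBUF, &size, &len) == 0)
			printf("t=%2ds SO_SNDBUF=%d bytes\n", i, size);
		sleep(1);
	}
}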

						-Bill
