Message-Id: <20080708144817.3c364962.billfink@mindspring.com>
Date:	Tue, 8 Jul 2008 14:48:17 -0400
From:	Bill Fink <billfink@...dspring.com>
To:	Roland Dreier <rdreier@...co.com>
Cc:	Evgeniy Polyakov <johnpol@....mipt.ru>,
	David Miller <davem@...emloft.net>, aglo@...i.umich.edu,
	shemminger@...tta.com, netdev@...r.kernel.org, rees@...ch.edu,
	bfields@...ldses.org
Subject: Re: setsockopt()

Hi Roland,

I think you set a new nuttcp speed record.  :-)
I've merely had 10-GigE networks to play with.

On Mon, 07 Jul 2008, Roland Dreier wrote:

> Interesting... I'd not tried nuttcp before, and on my testbed, which is
> a very high-bandwidth, low-RTT network (IP-over-InfiniBand with DDR IB,
> so the network is capable of 16 Gbps, and the RTT is ~25 microseconds),
> the difference between autotuning and not for nuttcp is huge (testing
> with 2.6.26-rc8 plus some pending 2.6.27 patches that add checksum
> offload, LSO and LRO to the IP-over-IB driver):
> 
> nuttcp -T30 -i1 ends up with:
> 
> 14465.0625 MB /  30.01 sec = 4043.6073 Mbps 82 %TX 2 %RX
> 
> while setting the window even to 128 KB with
> nuttcp -w128k -T30 -i1 ends up with:
> 
> 36416.8125 MB /  30.00 sec = 10182.8137 Mbps 90 %TX 96 %RX
> 
> so it's a factor of 2.5 with nuttcp.  I've never seen other apps behave
> like that -- for example NPtcp (netpipe) only gets slower when
> explicitly setting the window size.
> 
> Strange...

It is strange.  The first case just uses TCP autotuning, since, as
you discovered, nuttcp doesn't make any SO_SNDBUF/SO_RCVBUF
setsockopt() calls unless you explicitly set the "-w" option.
Perhaps the maximum values for tcp_rmem/tcp_wmem are smallish on
your systems (check both the client and the server).

On my system:

# cat /proc/sys/net/ipv4/tcp_rmem
4096    524288  104857600
# cat /proc/sys/net/ipv4/tcp_wmem
4096    524288  104857600
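
For reference, the explicit path that "-w" takes boils down to a pair
of setsockopt() calls made before connect()/listen().  A minimal
sketch, not nuttcp's actual code (the helper name is just for
illustration):

/* Force the socket buffers to a fixed size instead of letting TCP
 * autotuning manage them; call this before connect() or listen().
 * Illustrative only -- not nuttcp's implementation. */
#include <stdio.h>
#include <sys/socket.h>

static int set_fixed_window(int sock, int bytes)
{
	if (setsockopt(sock, SOL_SOCKET, SO_SNDBUF, &bytes, sizeof(bytes)) < 0 ||
	    setsockopt(sock, SOL_SOCKET, SO_RCVBUF, &bytes, sizeof(bytes)) < 0) {
		perror("setsockopt");
		return -1;
	}
	return 0;
}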

IIRC, the limit for an explicit SO_SNDBUF/SO_RCVBUF setting is
instead governed by rmem_max/wmem_max.

# cat /proc/sys/net/core/rmem_max
104857600
# cat /proc/sys/net/core/wmem_max
104857600
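
One quick way to see which limit actually applies is to set a buffer
size and read it back with getsockopt().  On Linux the request is
clamped to rmem_max/wmem_max and then doubled to cover bookkeeping
overhead, so the reported value is normally twice the (possibly
clamped) request.  An illustrative stand-alone check (not from this
thread; the 256 MB request is arbitrary):

/* Request a large SO_RCVBUF and print what the kernel actually
 * granted; the effective value shows whether rmem_max capped it. */
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
	int req = 256 * 1024 * 1024;	/* deliberately above a typical rmem_max */
	int got = 0;
	socklen_t len = sizeof(got);
	int sock = socket(AF_INET, SOCK_STREAM, 0);

	if (sock < 0) {
		perror("socket");
		return 1;
	}

	setsockopt(sock, SOL_SOCKET, SO_RCVBUF, &req, sizeof(req));
	getsockopt(sock, SOL_SOCKET, SO_RCVBUF, &got, &len);
	printf("requested %d, effective %d\n", req, got);

	close(sock);
	return 0;
}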

The other weird thing about your test is the huge difference in
the receiver (and server in this case) CPU utilization between the
autotuning and explicit setting cases (2 %RX versus 96 %RX).

						-Bill
