Message-Id: <20071003031906.5f0d7cfd.billfink@mindspring.com>
Date:	Wed, 3 Oct 2007 03:19:06 -0400
From:	Bill Fink <billfink@...dspring.com>
To:	Rick Jones <rick.jones2@...com>
Cc:	Larry McVoy <lm@...mover.com>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	davem@...emloft.net, wscott@...mover.com, netdev@...r.kernel.org
Subject: Re: tcp bw in 2.6

Tangential aside:

On Tue, 02 Oct 2007, Rick Jones wrote:

> *) depending on the quantity of CPU around, and the type of test one is running, 
> results can be better/worse depending on the CPU to which you bind the 
> application.  Latency tends to be best when running on the same core that takes 
> interrupts from the NIC; bulk transfer can be better when running on a different 
> core, although generally better when that different core is on the same chip.  These
> days the throughput stuff is more easily seen on 10G, but the netperf service 
> demand changes are still visible on 1G.

Interesting.  I was going to say that I've generally had the opposite
experience when it comes to bulk data transfers, which is what I would
expect due to CPU caching effects, but that perhaps it's motherboard/NIC/
driver dependent.  But in testing I just did, I discovered it's even
MTU dependent (most of my testing is with 9000-byte jumbo frames).
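
In case anyone wants to reproduce this kind of pinning: one way to do it
is with taskset plus the /proc/irq interface, on both the transmit and
receive machines.  This is just a sketch: the interface name, IRQ number,
and CPU choices below are placeholders for whatever your own system shows,
and may not be exactly how the runs below were set up:

    # Find the IRQ the 10-GigE NIC is using ("eth2" is a placeholder name):
    grep eth2 /proc/interrupts

    # Steer that IRQ to CPU 0 (hex bitmask, so "1" means CPU 0;
    # "90" is a placeholder IRQ number):
    echo 1 > /proc/irq/90/smp_affinity

    # Pin the nuttcp server (receive side) and client (transmit side)
    # to CPU 1, or to CPU 0 for the same-core case:
    taskset -c 1 nuttcp -S
    taskset -c 1 nuttcp -w10m 192.168.88.16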

With Myricom 10-GigE NICs, NIC interrupts on CPU 0 and nuttcp app
running on CPU 1 (both transmit and receive sides), and using 9000-byte
jumbo frames:

[root@...g2 ~]# nuttcp -w10m 192.168.88.16
10078.5000 MB /  10.02 sec = 8437.5396 Mbps 100 %TX 99 %RX
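
(For anyone not used to nuttcp's output: the fields are MB transferred,
elapsed time, throughput, and transmitter/receiver CPU utilization, and
-w10m asks for a 10 MB TCP window.  Taking 1 MB as 2^20 bytes, the line
above works out to roughly

    10078.5 MB * 8388608 bits/MB / 10.02 sec ~= 8.44e9 bits/sec ~= 8438 Mbps

which matches the reported 8437.5 Mbps once you allow for rounding of
the printed elapsed time.)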

With Myricom 10-GigE NICs, and both NIC interrupts and nuttcp app
on CPU 0 (both transmit and receive sides), again using 9000-byte
jumbo frames:

[root@...g2 ~]# nuttcp -w10m 192.168.88.16
11817.8750 MB /  10.00 sec = 9909.7537 Mbps 100 %TX 74 %RX

Same tests repeated with standard 1500-byte Ethernet MTU:

With Myricom 10-GigE NICs, NIC interrupts on CPU 0 and nuttcp app
running on CPU 1 (both transmit and receive sides), and using
standard 1500-byte Ethernet MTU:

[root@...g2 ~]# nuttcp -M1460 -w10m 192.168.88.16
 5685.9375 MB /  10.00 sec = 4768.0951 Mbps 99 %TX 98 %RX
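
(An aside on the -M1460: that caps the TCP maximum segment size at

    1500-byte MTU - 20-byte IP header - 20-byte TCP header = 1460 bytes

so the data segments on the wire match what a standard Ethernet MTU would
carry, presumably without having to reconfigure the interface MTUs
themselves.)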

With Myricom 10-GigE NICs, and both NIC interrupts and nuttcp app
on CPU 0 (both transmit and receive sides), again using standard
1500-byte Ethernet MTU:

[root@...g2 ~]# nuttcp -M1460 -w10m 192.168.88.16
 4974.0625 MB /  10.03 sec = 4161.6015 Mbps 100 %TX 100 %RX

Now back to your regularly scheduled programming.  :-)

						-Bill
