Message-ID: <4E80BD20.2040301@candelatech.com>
Date:	Mon, 26 Sep 2011 10:57:52 -0700
From:	Ben Greear <greearb@...delatech.com>
To:	Chris Friesen <chris.friesen@...band.com>
CC:	Alexander Duyck <alexander.h.duyck@...el.com>,
	"e1000-devel@...ts.sourceforge.net" 
	<e1000-devel@...ts.sourceforge.net>,
	netdev <netdev@...r.kernel.org>,
	"Brandeburg, Jesse" <jesse.brandeburg@...el.com>,
	"J.Hwan.Kim" <j.hwan.kim99@...il.com>, frog1120@...il.com
Subject: Re: [E1000-devel] intel 82599 multi-port performance

On 09/26/2011 10:46 AM, Chris Friesen wrote:
> On 09/26/2011 11:24 AM, Ben Greear wrote:
>> On 09/26/2011 09:40 AM, Chris Friesen wrote:
>
>>> To any of the Intel guys out there...any ideas? Can an 82599 on an 8x
>>> bus handle max line rate with minimum size packets?
>>
>> Rick Jones sent me an interesting link related to this. Short answer seems
>> to be 'yes', but it seems not for any normal off-the-shelf software stack.
>>
>> > This: http://comments.gmane.org/gmane.linux.network/203602 should
>> > lead you to some slides.
>
> Interesting. I wonder if Intel's DPDK will be the only way to handle those sorts of packet rates.

Pktgen is probably still the fastest general code that I know of,
but we had some interesting results setting TCP_MAXSEG to
88, which creates around 150-byte packets on the wire, and letting
the NIC's segmentation offload chop large TCP writes up into those
small packets.

Using a Core i7 980X CPU and a dual-port 82599, we could send
around 4Mpps and receive around 2Mpps between two machines.
We were using a single port on each NIC/machine for this test.  The connection
was a bit asymmetric; it seems one side would over-power the other, so with
a bit of twiddling we could get around 3Mpps in each direction.

Our user-space app has some overhead as well, but we can send
at least 5Gbps full duplex on two ports using normal-sized frames, so I
think the bottleneck in this case is the TCP offload in the NIC.

Still, pretty impressive for stateful TCP packets per second :)

Top-of-tree netperf just learned to do the TCP_MAXSEG trick as well,
so it might be fun to play with that.  It probably has less overhead
than our tool, so might run even faster.

Thanks,
Ben

-- 
Ben Greear <greearb@...delatech.com>
Candela Technologies Inc  http://www.candelatech.com

