Message-ID: <Pine.LNX.4.64.0905211018480.20475@ask.diku.dk>
Date: Thu, 21 May 2009 10:35:55 +0200 (CEST)
From: Jesper Dangaard Brouer <hawk@...u.dk>
To: Ben Greear <greearb@...delatech.com>
Cc: NetDev <netdev@...r.kernel.org>,
Robert Olsson <Robert.Olsson@...a.slu.se>
Subject: Re: How fast can your 10G go?
On Tue, 19 May 2009, Ben Greear wrote:
> I've been running some tests on a new Nehalem based system
> with a 2 port pci-e x8 10G NIC (ixgbe driver).
>
> When using pktgen, max I can get is about 5.6Gbps tx + rx on both ports.
> This is about 22Gbps across the backplane, so I don't mean to complain :)
>
> However, I'm curious if anyone has gotten any better performance on
> some other system?
Robert Olsson (together with Olof Hagsand and Bengt Gördén) got better
results using that exact hardware, Intel 82598 chips (and Sun Neptune
NICs). They wrote an article about it, "Open-source routing at 10Gb/s":
https://www.iis.se/docs/10G-OS-router_2_.pdf
> In particular, it seems that my system is bound by
> the bus and/or the NIC. Would I need to find something like a x16 slot
> to have a chance at 10Gbps bi-directional on 2 ports?
Are you doing a single flow bandwidth test?
These NICs are designed for multi-flow performance. They have hardware RX
and TX queues to facilitate this, and are often called multiqueue NICs.
This is also what lets us do real parallel processing across CPUs in the
network stack (since DaveM's multiqueue changes, although not all drivers
use this correctly yet...).
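The multiqueue idea can be sketched as follows: the NIC hashes each flow's address/port tuple and uses the hash to pick an RX queue, so different flows spread across CPUs while packets within one flow stay in order on one queue. This is a toy model for illustration only, not the real Toeplitz-based RSS hash the hardware uses:

```python
# Toy model of receive-side scaling (RSS): hash a flow's 4-tuple to
# pick an RX queue. Real NICs use a Toeplitz hash over the header
# fields; this sketch only shows why a single flow cannot spread
# across CPUs, while many flows can.
def pick_rx_queue(src_ip, src_port, dst_ip, dst_port, n_queues):
    flow = (src_ip, src_port, dst_ip, dst_port)
    return hash(flow) % n_queues

# One flow always maps to the same queue (packet order preserved):
q1 = pick_rx_queue("10.0.0.1", 1000, "10.0.0.2", 80, 8)
q2 = pick_rx_queue("10.0.0.1", 1000, "10.0.0.2", 80, 8)
assert q1 == q2

# Many flows (here, varying source port) spread across the queues,
# enabling parallel RX processing on multiple CPUs:
queues = {pick_rx_queue("10.0.0.1", p, "10.0.0.2", 80, 8)
          for p in range(1000, 1100)}
print(len(queues))
```

This is why a single-flow test hits one queue, and therefore one CPU, no matter how many queues the NIC has.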
Try doing a multi-flow network test, I bet you will see better results.
(Normally you also need to adjust smp-affinity, but with large frames my
Core i7 system is fast enough for 9.5 Gbit/s without tuning; small frames
are another case.)
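The smp-affinity tuning mentioned above usually means pinning each per-queue IRQ to its own CPU via /proc/irq/<irq>/smp_affinity. A minimal sketch, assuming the common one-CPU-per-mask convention (the device name and IRQ-discovery loop are assumptions, not from the mail above):

```shell
#!/bin/sh
# Sketch: compute the smp_affinity hex bitmask for a given CPU number.
# CPU n corresponds to bitmask (1 << n), written in hex.
cpu_to_mask() {
    printf '%x\n' $(( 1 << $1 ))
}

cpu_to_mask 0   # -> 1
cpu_to_mask 3   # -> 8

# On real hardware one would loop over the NIC's per-queue IRQs,
# e.g. (hypothetical, for an ixgbe device named eth0):
# cpu=0
# for irq in $(grep 'eth0-' /proc/interrupts | cut -d: -f1); do
#     cpu_to_mask $cpu > /proc/irq/$irq/smp_affinity
#     cpu=$(( cpu + 1 ))
# done
```

With each queue's interrupts landing on a dedicated CPU, the per-flow work stays cache-local instead of bouncing between cores.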
Cheers,
Jesper Brouer
--
-------------------------------------------------------------------
MSc. Master of Computer Science
Dept. of Computer Science, University of Copenhagen
Author of http://www.adsl-optimizer.dk
-------------------------------------------------------------------