Message-ID: <4EF2A47A.3010006@candelatech.com>
Date: Wed, 21 Dec 2011 19:31:06 -0800
From: Ben Greear <greearb@...delatech.com>
To: Stephen Hemminger <shemminger@...tta.com>
CC: Simon Chen <simonchennj@...il.com>,
Ben Hutchings <bhutchings@...arflare.com>,
netdev@...r.kernel.org
Subject: Re: under-performing bonded interfaces
On 12/21/2011 05:36 PM, Stephen Hemminger wrote:
> On Wed, 21 Dec 2011 20:26:04 -0500
> Simon Chen <simonchennj@...il.com> wrote:
>
>> Hi folks,
>>
>> I added an Intel X520 card to both the sender and receiver... Now I
>> have two 10G ports on a PCIe 2.0 x8 slot (5Gx8), so the bandwidth of
>> the PCI bus shouldn't be the bottleneck.
>>
>> Now the throughput test gives me around 16Gbps in aggregate. Any ideas
>> how I can push closer to 20G? I don't quite understand where the
>> bottleneck is now.
>
> In my experience, Intel dual port cards can not run at full speed
> when both ports are in use. You need separate slots to hit full
> line rate.
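The x8 slot itself should not be the limit here.  A rough back-of-the-envelope
check (just a sketch, assuming only 8b/10b coding overhead and ignoring
TLP/DLLP framing):

  # PCIe 2.0 x8: rough usable bandwidth per direction (estimate only).
  lanes    = 8
  gt_lane  = 5.0            # GT/s per lane for PCIe 2.0
  encoding = 8.0 / 10.0     # 8b/10b line coding overhead
  print("usable: %.0f Gbps per direction" % (lanes * gt_lane * encoding))  # ~32
  print("needed: 20 Gbps for 2 x 10GbE")

Real TLP/DLLP overhead trims that 32 Gbps further, but there is still margin
over the 20 Gbps the two ports need in each direction.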
We can run two ports at a full 10 Gbps tx + rx each using a Core i7 980X
processor and a 5 GT/s (PCIe 2.0) bus.  This is using a modified version of
pktgen to generate traffic.  We can only push around 6 Gbps tx + rx
when generating TCP traffic to/from user-space, but our TCP generator
is not as optimized for bulk transfer as it could be.
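
For anyone who wants to try to reproduce the line-rate numbers, stock pktgen
can be driven through /proc/net/pktgen.  This is only a minimal sketch (not
our modified tool); the interface name, destination IP and MAC below are
placeholders, and it needs root plus 'modprobe pktgen' first:

  #!/usr/bin/env python
  # Minimal stock-pktgen sketch: drive one port from one kpktgend thread.

  def pgset(path, cmd):
      # Each pktgen command is a single line written to a /proc file.
      with open(path, "w") as f:
          f.write(cmd + "\n")

  thread = "/proc/net/pktgen/kpktgend_0"   # CPU 0's pktgen thread
  dev    = "eth2"                          # placeholder: one port of the X520

  pgset(thread, "rem_device_all")          # detach anything already bound
  pgset(thread, "add_device " + dev)       # bind the port to this thread

  devfile = "/proc/net/pktgen/" + dev
  pgset(devfile, "count 0")                # 0 = run until stopped
  pgset(devfile, "clone_skb 1000")         # reuse each skb many times
  pgset(devfile, "pkt_size 1500")          # bulk-transfer sized frames
  pgset(devfile, "delay 0")                # no inter-packet gap
  pgset(devfile, "dst 192.168.1.2")        # placeholder destination IP
  pgset(devfile, "dst_mac 00:11:22:33:44:55")  # placeholder peer MAC

  pgset("/proc/net/pktgen/pgctrl", "start")  # blocks while traffic runs

For two ports tx + rx you would typically dedicate one kpktgend thread per
port and keep the NIC interrupts on separate cores; the exact tuning depends
on the system.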
Thanks,
Ben
--
Ben Greear <greearb@...delatech.com>
Candela Technologies Inc http://www.candelatech.com