Message-ID: <CANj2Eben0hrP6KwxyA1WPqiqzm84w=J2_sdtrKtGvxdftuksqg@mail.gmail.com>
Date: Wed, 21 Dec 2011 20:26:04 -0500
From: Simon Chen <simonchennj@...il.com>
To: Ben Hutchings <bhutchings@...arflare.com>
Cc: Ben Greear <greearb@...delatech.com>, netdev@...r.kernel.org
Subject: Re: under-performing bonded interfaces
Hi folks,
I added an Intel X520 card to both the sender and the receiver... Now I
have two 10G ports on a PCIe 2.0 x8 slot (5 GT/s x 8 lanes), so the
bandwidth of the PCIe link shouldn't be the bottleneck.
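(Back of the envelope, assuming 8b/10b line coding and ignoring TLP/DLLP
protocol overhead, the link should be good for roughly 32 Gb/s per
direction:)

  # Sketch: rough usable bandwidth of a PCIe 2.0 x8 link.  Assumes
  # 8b/10b encoding; real TLP/DLLP overhead shaves off a bit more.
  lanes = 8
  rate_gt = 5.0            # GT/s per lane on PCIe 2.0
  encoding = 8.0 / 10.0    # 8 data bits per 10 bits on the wire
  print(lanes * rate_gt * encoding, "Gb/s")  # ~32 Gb/s, > 2 x 10G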
Now the throughput test gives me around 16 Gbps in aggregate. Any ideas
on how I can push closer to 20 Gbps? I don't quite understand where the
bottleneck is now.
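For reference, the kind of multi-stream test I mean (a sketch; 10.0.0.2
is a placeholder receiver address, and iperf is assumed to be listening
on the other end - a single flow would pin everything to one slave):

  #!/usr/bin/env python3
  # Sketch: drive several parallel TCP streams through the bond so a
  # layer3+4 transmit hash can spread flows across both slaves.
  import subprocess

  RECEIVER = "10.0.0.2"  # placeholder; replace with the real sink
  STREAMS = "8"          # several flows, so hashing can use both links

  # "iperf -P N" opens N parallel streams, each from its own source
  # port, so per-flow hashing can balance them across the slaves.
  subprocess.run(["iperf", "-c", RECEIVER, "-P", STREAMS, "-t", "30"],
                 check=True)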
Thanks.
-Simon
On Wed, Nov 16, 2011 at 9:51 PM, Ben Hutchings
<bhutchings@...arflare.com> wrote:
> On Wed, 2011-11-16 at 20:38 -0500, Simon Chen wrote:
>> Thanks, Ben. That's a good discovery...
>>
>> Are you saying that both 10G NICs are on the same PCIe x4 slot, so
>> that they're subject to the 12G throughput bottleneck?
>
> I assumed you were using 2 ports on the same board, i.e. the same slot.
> If you were using 1 port each of 2 boards then I would have expected
> them both to be usable at full speed. So far as I can remember, PCIe
> bridges are usually set up so there isn't contention for bandwidth
> between slots.
>
> Ben.
>
> --
> Ben Hutchings, Staff Engineer, Solarflare
> Not speaking for my employer; that's the marketing department's job.
> They asked us to note that Solarflare product names are trademarked.
>