Date:	Wed, 21 Dec 2011 20:26:04 -0500
From:	Simon Chen <>
To:	Ben Hutchings <>
Cc:	Ben Greear <>,
Subject: Re: under-performing bonded interfaces

Hi folks,

I added an Intel X520 card to both the sender and receiver... Now I
have two 10G ports on a PCIe 2.0 x8 slot (5Gx8), so the bandwidth of
the PCIe bus shouldn't be the bottleneck.
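For what it's worth, a back-of-envelope check of that slot (my numbers, not from the thread: PCIe 2.0 signals at 5 GT/s per lane with 8b/10b encoding, so 80% of raw bits carry data):

```python
# Rough PCIe 2.0 x8 bandwidth estimate, per direction.
GT_PER_LANE = 5.0          # gigatransfers/s per PCIe 2.0 lane
ENCODING_EFFICIENCY = 0.8  # 8b/10b: 8 data bits per 10 line bits
LANES = 8

effective_gbps = GT_PER_LANE * ENCODING_EFFICIENCY * LANES
print(f"PCIe 2.0 x8 effective: {effective_gbps:.0f} Gb/s per direction")
# Comfortably above the 20 Gb/s the two ports need, even before
# TLP/DLLP protocol overhead shaves off a bit more.
```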

Now the throughput test gives me around 16Gbps in aggregate. Any ideas
how I can push closer to 20G? I don't quite understand where the
bottleneck is now.
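One thing worth checking (an assumption on my part, since the thread doesn't say which xmit_hash_policy the bond uses): with bonding, each flow hashes to a single slave, so a single stream can never exceed one 10G port, and an uneven spread of flows leaves capacity idle. A toy illustration (not the kernel's actual hash):

```python
# Simplified stand-in for a layer3+4-style xmit hash: each flow's
# 5-tuple pins it to one slave, so aggregate throughput depends on
# flows distributing evenly across the two 10G ports.
def slave_for_flow(src_port: int, dst_port: int, num_slaves: int = 2) -> int:
    # Hypothetical hash for illustration only.
    return (src_port ^ dst_port) % num_slaves

# Eight test flows from consecutive source ports to one server port.
flows = [(40000 + i, 5001) for i in range(8)]
per_slave = [0, 0]
for sport, dport in flows:
    per_slave[slave_for_flow(sport, dport)] += 1
print(per_slave)  # an even [4, 4] split here; skewed flows would not be
```

If the real traffic is a small number of flows, a skewed split like this could explain landing at ~16G instead of 20G.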


On Wed, Nov 16, 2011 at 9:51 PM, Ben Hutchings
<> wrote:
> On Wed, 2011-11-16 at 20:38 -0500, Simon Chen wrote:
>> Thanks, Ben. That's a good discovery...
>> Are you saying that both 10G NICs are on the same PCIe x4 slot, so
>> that they're subject to the 12G throughput bottleneck?
> I assumed you were using 2 ports on the same board, i.e. the same slot.
> If you were using 1 port each of 2 boards then I would have expected
> them both to be usable at full speed.  So far as I can remember, PCIe
> bridges are usually set up so there isn't contention for bandwidth
> between slots.
> Ben.
> --
> Ben Hutchings, Staff Engineer, Solarflare
> Not speaking for my employer; that's the marketing department's job.
> They asked us to note that Solarflare product names are trademarked.