Message-ID: <CANj2EbcATh4Zmjs2YMypCajjLzb0mF7HHit3KVmbfLX2mav5rA@mail.gmail.com>
Date: Fri, 23 Dec 2011 10:03:31 -0500
From: Simon Chen <simonchennj@...il.com>
To: Eric Dumazet <eric.dumazet@...il.com>
Cc: Ben Hutchings <bhutchings@...arflare.com>,
Ben Greear <greearb@...delatech.com>, netdev@...r.kernel.org
Subject: Re: under-performing bonded interfaces
It's a funny thing again... I left the bandwidth test running
overnight, using 16 simple senders and receivers.
The bandwidth slowly climbed from around 16G to 19G, which is much
better. I suspect two causes: 1) the regular TCP implementation, with
its slow start and aggressive back-off; 2) the user-land application...
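
The senders and receivers are nothing fancy - roughly the shape of the
sketch below (just a sketch to show the structure: the port, block
size, duration and the thread-per-stream layout are placeholders, not
the exact harness I ran):

#!/usr/bin/env python3
# Minimal sketch of a multi-stream TCP throughput test: N parallel
# streams blasting zero-filled buffers at a sink for a fixed time,
# then reporting the aggregate rate.  Port, block size, stream count
# and duration below are placeholders.
import socket
import sys
import threading
import time

PORT     = 5201            # placeholder port
STREAMS  = 16
BLOCK    = b"\0" * 65536   # 64 KiB per send()
DURATION = 30.0            # seconds per run

def receiver():
    # Accept connections and just drain whatever arrives.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("", PORT))
    srv.listen(STREAMS)

    def drain(conn):
        while conn.recv(65536):
            pass

    while True:
        conn, _ = srv.accept()
        threading.Thread(target=drain, args=(conn,), daemon=True).start()

def sender(host, totals, i):
    # One stream: send fixed-size blocks until the deadline, count bytes.
    s = socket.create_connection((host, PORT))
    deadline = time.time() + DURATION
    while time.time() < deadline:
        s.sendall(BLOCK)
        totals[i] += len(BLOCK)
    s.close()

if __name__ == "__main__":
    if len(sys.argv) >= 2 and sys.argv[1] == "recv":
        receiver()
    elif len(sys.argv) >= 3 and sys.argv[1] == "send":
        host = sys.argv[2]
        totals = [0] * STREAMS
        threads = [threading.Thread(target=sender, args=(host, totals, i))
                   for i in range(STREAMS)]
        start = time.time()
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        gbits = sum(totals) * 8 / (time.time() - start) / 1e9
        print("aggregate: %.2f Gbit/s over %d streams" % (gbits, STREAMS))
    else:
        sys.exit("usage: bwtest.py recv | bwtest.py send <receiver-ip>")

Run "bwtest.py recv" on the receiving box and "bwtest.py send <ip>" on
the sending box; longer runs (or more streams) help average out the
slow-start ramp at the beginning of each connection.
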
-Simon
On Thu, Dec 22, 2011 at 12:43 AM, Eric Dumazet <eric.dumazet@...il.com> wrote:
> On Wednesday 21 December 2011 at 20:26 -0500, Simon Chen wrote:
>> Hi folks,
>>
>> I added an Intel X520 card to both the sender and the receiver... Now I
>> have two 10G ports on a PCIe 2.0 x8 slot (5 GT/s x 8 lanes), so the
>> bandwidth of the PCIe bus shouldn't be the bottleneck.
>>
>> Now the throughput test gives me around 16Gbps in aggregate. Any ideas
>> how I can push closer to 20G? I don't quite understand where the
>> bottleneck is now.
>
> Could you post some "perf top" or "perf record / report" numbers?
>
>
>
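
As a back-of-the-envelope check on the PCIe 2.0 x8 figure quoted above
(a rough sketch that only accounts for the 8b/10b line coding, not TLP
or descriptor overhead):

# PCIe 2.0 x8 headroom, counting only the 8b/10b line coding.
lanes         = 8
gt_per_lane   = 5.0                        # PCIe gen2: 5 GT/s per lane
payload_gbit  = gt_per_lane * 8.0 / 10.0   # 8b/10b -> 4 Gbit/s usable per lane
per_direction = payload_gbit * lanes       # Gbit/s each way
print("x8 slot: %.0f Gbit/s (~%.1f GB/s) per direction"
      % (per_direction, per_direction / 8))
# -> 32 Gbit/s (~4.0 GB/s) each way, so two 10G ports (20 Gbit/s) fit
#    with plenty of headroom before protocol overhead.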