Message-ID: <1426900511.25985.38.camel@edumazet-glaptop2.roam.corp.google.com>
Date: Fri, 20 Mar 2015 18:15:11 -0700
From: Eric Dumazet <eric.dumazet@...il.com>
To: Wolfgang Rosner <wrosner@...net.de>
Cc: netdev@...r.kernel.org
Subject: Re: One way TCP bottleneck over 6 x Gbit teql aggregated link
On Sat, 2015-03-21 at 01:10 +0100, Wolfgang Rosner wrote:
> Hello,
>
> I'm trying to configure a Beowulf-style cluster based on venerable HP blade
> server hardware.
> I configured 6 parallel GBit VLANs between 16 blade nodes and a gateway server
> with teql link aggregation.
>
> After lots of tuning, nearly everything runs fine (i.e. > 5.5 GBit/s iperf
> transfer rate, which is 95 % of the theoretical limit), but one bottleneck
> remains:
>
> From the gateway to the blade nodes, I get only half of the full rate if I
> use only a single iperf process / single TCP connection.
> With 2 or more iperf processes in parallel, the transfer rate is OK.
>
> I don't see this bottleneck in the other direction, nor in the links between
> the blade nodes:
> there I always get > 5.5 GBit/s, even for a single process.
>
> Is there just some simple tuning parameter I overlooked, or do I have to dig
> for a deeper cause?
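(For reference, a teql aggregate of the kind described above is normally built
by attaching the teql qdisc to each slave interface and then bringing up the
resulting teql0 device. The sketch below is only an illustration; the slave
interface names and the address are assumptions, not taken from this setup.)

modprobe sch_teql                          # loads the scheduler, creates teql0
for dev in eth1 eth2 eth3 eth4 eth5 eth6   # assumed names of the six GBit slaves
do
        tc qdisc add dev "$dev" root teql0 # enslave each link to the aggregate
done
ip link set dev teql0 up
ip addr add 192.168.130.1/24 dev teql0     # example address on the aggregate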
What Linux version runs on the sender?
Could you send the output of:
nstat >/dev/null
iperf -c 192.168.130.225
nstat
Also, please send ss output while iperf is running, as in:
(please use a recent ss command, found in the iproute2 package:
https://git.kernel.org/cgit/linux/kernel/git/shemminger/iproute2.git/
so that it outputs the reordering level)
iperf -c 192.168.130.225 &
ss -temoi dst 192.168.130.225
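Put together, the requested capture could look like this (the address is the
one from the commands above; the file names and the sleep interval are
arbitrary choices, not part of the original instructions):

nstat >/dev/null                            # reset the SNMP/netstat counters
iperf -c 192.168.130.225 &
sleep 5                                     # let the transfer ramp up
ss -temoi dst 192.168.130.225 > ss.txt      # per-socket state, incl. reordering
wait                                        # let iperf finish
nstat > nstat.txt                           # counters accumulated since the reset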