Message-ID: <5115AC6F.50305@hp.com>
Date: Fri, 08 Feb 2013 17:54:55 -0800
From: Rick Jones <rick.jones2@...com>
To: Eric Dumazet <eric.dumazet@...il.com>
CC: Hannes Frederic Sowa <hannes@...essinduktion.org>,
Emmanuel Jeanvoine <emmanuel.jeanvoine@...ia.fr>,
netdev@...r.kernel.org
Subject: Re: Poor TCP bandwidth between network namespaces
On 02/08/2013 05:33 PM, Eric Dumazet wrote:
> On Mon, 2013-02-04 at 23:52 +0100, Hannes Frederic Sowa wrote:
>> On Mon, Feb 04, 2013 at 03:43:20PM +0100, Emmanuel Jeanvoine wrote:
>>> I'm wondering why the overhead is so high when performing TCP
>>> transfers between two network namespaces. Do you have any idea what
>>> causes this, and possibly how to increase the bandwidth between
>>> network namespaces (without modifying the MTU on the veths)?
>>
>> You could try Eric's patch (already in net-next) and have a look at the rest
>> of the discussion:
>>
>> http://article.gmane.org/gmane.linux.network/253589
>
> Another thing to consider is the default MTU value:
>
> 65536 for lo, and 1500 for veth
>
> That alone easily explains veth getting half the throughput.
>
> Another thing is the tx-nocache-copy setting; this can account for a
> few extra percent.
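
For reference, a sketch of how those two settings might be inspected
and changed; the interface names veth0/veth1 are just placeholders for
whatever the namespaces actually use:

  # Show the current MTU on one end of the veth pair
  ip link show dev veth0

  # Raise the MTU (veth supports large MTUs; set both ends to match)
  ip link set dev veth0 mtu 65536
  ip link set dev veth1 mtu 65536

  # Turn off the tx-nocache-copy feature Eric mentioned
  ethtool -K veth0 tx-nocache-copy off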
Whenever I want to take MTU out of the picture, I go with a test that
never sends anything larger than the smaller of the MTUs involved. One
such example is an (aggregate) netperf TCP_RR test. Path-length
overheads have a much harder time "hiding" from a TCP_RR (or UDP_RR)
test than from a bulk-transfer test.
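
If it helps, a minimal sketch of such a test; the address 10.0.0.2 and
the instance count are placeholders:

  # Single-stream request/response; payloads are small by default
  netperf -H 10.0.0.2 -t TCP_RR

  # A crude aggregate: several concurrent instances for 30 seconds each
  for i in 1 2 3 4; do netperf -H 10.0.0.2 -t TCP_RR -l 30 & done; wait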
happy benchmarking,
rick jones