Message-ID: <54B590FB.5040805@dev.mellanox.co.il>
Date: Tue, 13 Jan 2015 23:41:15 +0200
From: Eyal Perry <eyalpe@....mellanox.co.il>
To: Or Gerlitz <gerlitz.or@...il.com>,
Eric Dumazet <eric.dumazet@...il.com>
CC: Linux Netdev List <netdev@...r.kernel.org>,
Amir Vadai <amirv@...lanox.com>,
Yevgeny Petrilin <yevgenyp@...lanox.com>,
Saeed Mahameed <saeedm@...lanox.com>,
Ido Shamay <idos@...lanox.com>,
Amir Ancel <amira@...lanox.com>,
Eyal Perry <eyalpe@...lanox.com>
Subject: Re: BW regression after "tcp: refine TSO autosizing"
On 1/13/2015 22:21, Or Gerlitz wrote:
> On Tue, Jan 13, 2015 at 8:57 PM, Eric Dumazet <eric.dumazet@...il.com> wrote:
>> On Tue, 2015-01-13 at 18:48 +0200, Eyal Perry wrote:
>>> Hello Eric,
>>> Lately we've observed a performance degradation in BW of about 30-40%
>>> (depending on the setup we use).
>>> I've bisected the issue down to this commit: 605ad7f1 ("tcp: refine TSO
>>> autosizing")
>>>
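>>> For reference, a rough sketch of the bisection workflow; the v3.18
>>> "good" starting point below is an assumption for illustration, not
>>> necessarily the exact baseline we used:
>>>
>>> $ git bisect start
>>> $ git bisect bad HEAD        # net-next tip showing the regression
>>> $ git bisect good v3.18      # assumed last known-good kernel
>>>
>>> then rebuild, reboot, re-run the test, and mark each step good or bad
>>> until git names the first bad commit.
>>>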
>>> For instance, I was running the following test:
>>> 1. Binding the net device's IRQs to core 0 on both the client and the
>>>    server side (see the sketch after the command below)
>>> 2. Running netperf with a 64K message size (using the following command)
>>> $ netperf -H remote -T 1,1 -l 100 -t TCP_STREAM -- -k THROUGHPUT -M 65536 -m 65536
>>>
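>>> One way to do the IRQ binding in step 1, roughly; "eth2" is just an
>>> example interface name here, and irqbalance should be stopped first so
>>> it does not rewrite the affinity masks:
>>>
>>> # as root: pin every IRQ of the device to core 0 (mask 0x1 == CPU0)
>>> for irq in $(awk '/eth2/ {sub(":", "", $1); print $1}' /proc/interrupts); do
>>>     echo 1 > /proc/irq/$irq/smp_affinity
>>> done
>>>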
>>> I ran the test on upstream net-next, first with your patch included and
>>> then with it reverted; reverting it improved the throughput from
>>> 14.6Gbps to 22.1Gbps.
>>>
>>> An additional difference I've noticed when inspecting the ethtool
>>> statistics: the number of xmit_more packets increased from 4 to 160
>>> with the reverted kernel.
>>>
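>>> For reference, the counter was read from the driver's ethtool
>>> statistics, e.g. (with "eth2" again standing in for the actual
>>> interface name):
>>>
>>> $ ethtool -S eth2 | grep xmit_more
>>>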
>>> We are investigating this issue; do you have a hint?
>> Which driver are you using for this test?
> AFAIK, mlx4
Oops, forgot to mention.
mlx4 indeed.