Message-ID: <CAPM=9tzDS6zxDXqBeO+VcMTFm8OEZjysmTJbbEJpxWtodiVjhw@mail.gmail.com>
Date:	Mon, 9 Nov 2015 13:32:56 +1000
From:	Dave Airlie <airlied@...il.com>
To:	David Miller <davem@...emloft.net>
Cc:	decui@...rosoft.com, Eric Dumazet <eric.dumazet@...il.com>,
	dsa@...ulusnetworks.com, sixiao@...rosoft.com,
	Network Development <netdev@...r.kernel.org>,
	haiyangz@...rosoft.com, LKML <linux-kernel@...r.kernel.org>,
	devel@...uxdriverproject.org
Subject: Re: linux-next network throughput performance regression

On 9 November 2015 at 13:23, David Miller <davem@...emloft.net> wrote:
> From: Dexuan Cui <decui@...rosoft.com>
> Date: Mon, 9 Nov 2015 03:11:35 +0000
>
>>> -----Original Message-----
>>> From: David Miller [mailto:davem@...emloft.net]
>>> Sent: Monday, November 9, 2015 10:53
>>> To: Dexuan Cui <decui@...rosoft.com>
>>> Cc: eric.dumazet@...il.com; dsa@...ulusnetworks.com; Simon Xiao
>>> <sixiao@...rosoft.com>; netdev@...r.kernel.org; Haiyang Zhang
>>> <haiyangz@...rosoft.com>; linux-kernel@...r.kernel.org;
>>> devel@...uxdriverproject.org
>>> Subject: Re: linux-next network throughput performance regression
>>>
>>> From: Dexuan Cui <decui@...rosoft.com>
>>> Date: Mon, 9 Nov 2015 02:39:24 +0000
>>>
>>> >> Throughput on a single TCP flow for a 40G NIC can be tricky to tune.
>>> > Why is a single TCP flow trickier than multiple TCP flows?
>>> > IMO it should be easier to analyze the issue with a single TCP flow?
>>>
>>> Because a single TCP flow can only use one of the many TX queues
>>> that such modern NICs have.
>>>
>>> The single TX queue becomes the bottleneck.
>>>
>>> Whereas if you have several TCP flows, all of them can use independent
>>> TX queues on the NIC in parallel to fill the link with traffic.
>>>
>>> That's why.
>>
>> Thanks, David!
>> I understand 1 TX queue is the bottleneck (however in Simon's
>> test, TX=1 => 36.7 Gb/s, TX=8 => 37.7 Gb/s, so it looks like the TX=1
>> bottleneck is not so obvious).
>> I'm just wondering how the bottleneck became much narrower with
>> recent linux-next in Simon's result (36.7 Gb/s vs. 18.2 Gb/s). IMO there
>> must be some latency somewhere.
>
> I think the whole thing here is that you misinterpreted what Eric said.
>
> He is not arguing that some regression did, or did not, happen.
>
> He instead was making the basic statement about the fact that due to
> the lack of parallelism a single-stream TCP case is harder to
> optimize for high speed NICs.
>
> That is all.
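
To illustrate the point, here is a minimal userspace sketch of the
flow-hash based TX queue selection (hypothetical names, not the exact
kernel code; the real path is roughly skb_get_hash() feeding
reciprocal_scale() over the device's real TX queue count). Every
packet of one TCP flow hashes to the same value, so a single flow is
pinned to one of the NIC's TX queues, while distinct flows spread out:

#include <stdint.h>
#include <stdio.h>

/* Toy 4-tuple standing in for the fields the kernel's flow hash covers. */
struct flow {
	uint32_t saddr, daddr;
	uint16_t sport, dport;
};

/* Stand-in for the kernel's flow hash (the real one is Jenkins-based). */
static uint32_t flow_hash(const struct flow *f)
{
	uint32_t h = f->saddr * 2654435761u;

	h ^= f->daddr * 2246822519u;
	h ^= (((uint32_t)f->sport << 16) | f->dport) * 3266489917u;
	return h ^ (h >> 15);
}

/* Like reciprocal_scale(): map a 32-bit hash into [0, num_queues). */
static uint16_t pick_tx_queue(const struct flow *f, uint16_t num_queues)
{
	return (uint16_t)(((uint64_t)flow_hash(f) * num_queues) >> 32);
}

int main(void)
{
	const uint16_t num_queues = 8;
	struct flow single = { 0x0a000001, 0x0a000002, 40000, 5001 };
	uint16_t sport;

	/* One flow: every packet maps to the same queue, which is the cap. */
	printf("single flow -> queue %u (every packet)\n",
	       pick_tx_queue(&single, num_queues));

	/* Eight flows differing only in source port spread across queues. */
	for (sport = 40000; sport < 40008; sport++) {
		struct flow f = { 0x0a000001, 0x0a000002, sport, 5001 };

		printf("flow sport=%u -> queue %u\n", sport,
		       pick_tx_queue(&f, num_queues));
	}
	return 0;
}

With num_queues = 1 everything lands on queue 0 regardless of the
number of flows, which is the TX=1 case in Simon's numbers.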

We recently had a regression in a similar area that was tracked
down to link order.

Dave.
