Message-ID: <6c0fed5fc63a4c3488f5f08409f508ee@HKXPR30MB0039.064d.mgd.msft.net>
Date:	Mon, 9 Nov 2015 02:39:24 +0000
From:	Dexuan Cui <decui@...rosoft.com>
To:	Eric Dumazet <eric.dumazet@...il.com>,
	David Ahern <dsa@...ulusnetworks.com>,
	Simon Xiao <sixiao@...rosoft.com>
CC:	"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
	Haiyang Zhang <haiyangz@...rosoft.com>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"devel@...uxdriverproject.org" <devel@...uxdriverproject.org>,
	David Miller <davem@...emloft.net>
Subject: RE: linux-next network throughput performance regression

> From: devel [mailto:driverdev-devel-bounces@...uxdriverproject.org] On Behalf
> Of Eric Dumazet
> Sent: Sunday, November 8, 2015 3:36
> To: David Ahern <dsa@...ulusnetworks.com>
> Cc: netdev@...r.kernel.org; Haiyang Zhang <haiyangz@...rosoft.com>; linux-
> kernel@...r.kernel.org; devel@...uxdriverproject.org; David Miller
> <davem@...emloft.net>
> Subject: Re: linux-next network throughput performance regression
> 
> On Fri, 2015-11-06 at 14:30 -0700, David Ahern wrote:
> > On 11/6/15 2:18 PM, Simon Xiao wrote:
> > > The .config file used to build linux-next kernel is attached to this mail.
> >
> > Thanks.
> >
> > Failed to notice this on the first response; my brain filled in. Why
> > linux-next tree? Can you try net-next which is more relevant for this
> > mailing list, post the top commit id and config file used?
> 
> Throughput on a single TCP flow for a 40G NIC can be tricky to tune.
Why is a single TCP flow trickier to tune than multiple TCP flows?
IMO it should be easier to analyze an issue with a single TCP flow?

Here the perf drop in Simon's test is very obvious -- 50% -- but it looks
like Eric can't reproduce it, so I suppose some net-related kernel config
options may make the difference?
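
When comparing runs across kernels or configs, machine-readable output makes the comparison less error-prone. A small helper along these lines could be used, assuming iperf3's JSON output (-J); the echoed line below is a synthetic sample, not data from this thread:

```shell
# Hypothetical helper: pull the last bits_per_second figure out of
# iperf3 -J output and print it in Gbit/s. The sample line piped in
# below is synthetic, shaped like iperf3's JSON summary fields.
to_gbps() {
  grep -o '"bits_per_second":[ ]*[0-9.]*' | tail -1 | \
    awk -F: '{ printf "%.1f\n", $2 / 1e9 }'
}

echo '"bits_per_second": 19800000000.0' | to_gbps   # prints 19.8
```

With that, a 50% drop between two kernels becomes a one-line diff of two numbers instead of eyeballing interactive output.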

Maybe Simon can narrow the regression down by bisecting. :-)
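
For reference, the mechanics of that bisection can be sketched with a toy repository -- everything here (the repo, the gbps file, the >= 30 Gbit/s threshold) is hypothetical, just to show the workflow:

```shell
# Self-contained toy demonstration of bisecting a throughput
# regression: "throughput" is just a number committed to a file, and
# the regression drops it below the threshold at the 4th commit.
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q .
git config user.email you@example.com
git config user.name you
for gbps in 40 39 38 19 18; do      # regression lands at the 4th commit
  echo "$gbps" > gbps
  git add gbps
  git commit -qm "throughput $gbps"
done
# HEAD is bad, HEAD~4 (the first commit) is known-good:
git bisect start HEAD HEAD~4 >/dev/null 2>&1
# "good" = throughput still at or above 30 "Gbit/s":
result=$(git bisect run sh -c '[ "$(cat gbps)" -ge 30 ]' 2>/dev/null \
         | grep -c "is the first bad commit")
git bisect reset >/dev/null 2>&1
echo "bisect located the first bad commit: $result match(es)"
```

In the real kernel case each step needs a build and reboot before the throughput measurement, so "git bisect run" is usually replaced by marking "git bisect good"/"git bisect bad" by hand after each boot.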
 
> Make sure IRQ are properly setup/balanced, as I know that IRQ names were
> changed recently and your scripts might have not noticed...
> 
> Also "ethtool -c eth0" might show very different interrupt coalescing
> params ?
> 
> I too have a Mellanox 40Gb in my lab and saw no difference in
> performance with recent kernels.
> 
> Of course, a simple "perf record -a -g sleep 4 ; perf report" might
> point to some obvious issue. Like unexpected segmentation in case of
> forwarding...
> 
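
Eric's checks can be collected in one place. A dry-run sketch follows -- eth0 is an assumed example interface name (as Eric notes, names may have changed recently), and the script only prints the commands, since ethtool and perf need the actual NIC and root:

```shell
# Dry-run of the triage steps quoted above; replace the echo with
# eval (as root) to actually execute them. IFACE=eth0 is an assumed
# example name, not one taken from this thread.
IFACE=${IFACE:-eth0}
for cmd in \
    "ethtool -c $IFACE" \
    "grep $IFACE /proc/interrupts" \
    "perf record -a -g sleep 4" \
    "perf report --stdio"
do
    echo "would run: $cmd"
done
```

The /proc/interrupts check shows whether the NIC's IRQs are spread across CPUs, which covers the setup/balancing point above.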

Thanks,
-- Dexuan
