Message-ID: <20150526175540.GB13376@WorkStation.home>
Date:	Tue, 26 May 2015 13:55:40 -0400
From:	Ido Yariv <ido@...ery.com>
To:	Eric Dumazet <eric.dumazet@...il.com>
Cc:	"David S. Miller" <davem@...emloft.net>,
	Alexey Kuznetsov <kuznet@....inr.ac.ru>,
	James Morris <jmorris@...ei.org>,
	Hideaki YOSHIFUJI <yoshfuji@...ux-ipv6.org>,
	Patrick McHardy <kaber@...sh.net>,
	Nandita Dukkipati <nanditad@...gle.com>,
	netdev@...r.kernel.org, linux-kernel@...r.kernel.org,
	Ido Yariv <idox.yariv@...el.com>
Subject: Re: [PATCH] net: tcp: Fix a PTO timing granularity issue

Hi Eric,

On Tue, May 26, 2015 at 10:13:40AM -0700, Eric Dumazet wrote:
> On Tue, 2015-05-26 at 13:02 -0400, Ido Yariv wrote:
> > Hi Eric,
> > 
> > On Tue, May 26, 2015 at 09:23:55AM -0700, Eric Dumazet wrote:
> > > Have you really hit an issue, or did you send this patch after all these
> > > msecs_to_jiffies() discussions on lkml/netdev?
> > 
> > This actually fixed a specific issue I ran into. This issue caused a
> > degradation in throughput in a benchmark which sent relatively small
> > chunks of data (100KB) in a loop. The impact was quite substantial -
> > with this patch, throughput increased by up to 50% on the platform this
> > was tested on.
> 
> 
> Really? You have more problems if your benchmark relies on TLP.
> 
> Please share your setup, because I suspect you hit other more serious
> bugs.

This was tested on an embedded platform with a wifi module (802.11n,
20MHz channel). The other end was a computer running Windows, and the
benchmarking software was IxChariot.
The whole setup ran in a shielded box with minimal interference.

It appears the throughput was limited by the congestion window. Further
analysis pointed to TLP - its timer was expiring prematurely, which kept
cwnd small and in turn left the wireless driver without enough skbs to
buffer and send.
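
For reference, the granularity effect is easy to reproduce with a small
userspace sketch (illustrative only; the helper below mimics the kernel's
round-up msecs_to_jiffies(), it is not the kernel code). A timer armed for
N jiffies fires on a jiffy boundary, so the actual delay lands anywhere in
((N-1)/HZ, N/HZ]:

	#include <stdio.h>

	/* Round up, as the kernel's generic msecs_to_jiffies() does. */
	static unsigned int ms_to_jiffies(unsigned int ms, unsigned int hz)
	{
		return (ms * hz + 999) / 1000;
	}

	int main(void)
	{
		const unsigned int hz_values[] = { 100, 250, 300, 1000 };
		unsigned int i;

		for (i = 0; i < sizeof(hz_values) / sizeof(hz_values[0]); i++) {
			unsigned int hz = hz_values[i];
			unsigned int n = ms_to_jiffies(10, hz);

			/* The timer fires on a jiffy boundary, so the real
			 * delay is anywhere in ((n-1)/HZ, n/HZ] after arming.
			 */
			printf("HZ=%4u: 10ms -> %u jiffies, fires after %.1f..%.1f ms\n",
			       hz, n, (n - 1) * 1000.0 / hz, n * 1000.0 / hz);
		}
		return 0;
	}

With HZ=100 the 10ms minimum collapses to a single jiffy, so the probe
can fire almost immediately after being armed; that is the case hit here.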

Increasing the size of the chunks being sent improved throughput in much
the same way, presumably because the congestion window then had enough
time to grow.

Switching the congestion control algorithm from cubic/reno to Westwood
also had a similar effect on throughput.

> > This was actually the first incarnation of this patch. However, while
> > the impact of this issue is greatest when HZ=100, it can affect other
> > settings as well. For instance, with HZ=250 the timer could expire after
> > a bit over 8ms instead of 10ms, and after 9ms with HZ=1000.
> > 
> > By increasing the number of jiffies, we ensure that we'll wait at least
> > 10ms but never less than that, so for HZ=1000, it'll be anywhere between
> > 10ms and 11ms instead of 9ms and 10ms.
> 
> Yes, but we do not want to blindly increase this timeout; we tested this
> exact value a few years ago: between 9 and 10 ms, not between 10 and
> 11 ms, which adds 10% to max latencies.

I understand, and I also suspect that having it expire after 9ms will
have very little impact, if any.

Since the issue mainly affects HZ=100 systems, we could simply enforce a
minimum of 2 jiffies on those systems and leave everything else as is.
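
Something along these lines, against the timeout computation in
tcp_schedule_loss_probe() (an untested sketch of the idea, not a final
patch):

	u32 floor = msecs_to_jiffies(10);

	/* On HZ<=100 systems msecs_to_jiffies(10) is a single jiffy, and
	 * a one-jiffy timer can expire almost immediately after it is
	 * armed. Enforce a floor of two jiffies there; other HZ settings
	 * are left untouched. (Untested sketch.)
	 */
	if (floor < 2)
		floor = 2;
	timeout = max_t(u32, timeout, floor);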

However, if the 10ms value has a special meaning (I couldn't find a
rationale for it in the RFC), making sure this timer never expires
prematurely could be beneficial. I'm afraid that variant was not tested
on the setup mentioned above, though.

Thanks,
Ido.
