Message-ID: <alpine.LSU.2.20.1504081844390.6502@nerf40.vanv.qr>
Date:	Wed, 8 Apr 2015 19:09:26 +0200 (CEST)
From:	Jan Engelhardt <jengelh@...i.de>
To:	Eric Dumazet <eric.dumazet@...il.com>
cc:	Linux Networking Developer Mailing List <netdev@...r.kernel.org>
Subject: Re: TSO on veth device slows transmission to a crawl


On Tuesday 2015-04-07 21:49, Eric Dumazet wrote:
>> On Tuesday 2015-04-07 04:48, Eric Dumazet wrote:
>> >On Tue, 2015-04-07 at 00:45 +0200, Jan Engelhardt wrote:
>> >> I have here a Linux 3.19(.0) system where activated TSO on a veth slave 
[and now also 3.19.3]
>> >> device makes IPv4-TCP transfers going into that veth-connected container 
>> >> progress slowly.
>> >
>> >Nothing comes to mind. It would help if you could provide a script to
>> >reproduce the issue.
>> 
>> It seems IPsec is *also* a requirement in the mix.
>> Anyhow, script time!
>
>I tried your scripts, but the sender does not use veth ?

I was finally able to reproduce it reliably, and with just one 
machine/kernel instance (and a number of containers of course).

I have uploaded the script mix to
	http://inai.de/files/tso-1.tar.xz
There is a demo screencast at
	http://inai.de/files/tso-1.mkv

From the script collection, one can run t-all-init to set up the
network, then t-chargen-server (which stays in the foreground), and
then, on another terminal, t-chargen-client. Alternatively, use the
combination t-zero-{server,client}. It appears that it has to be a
simple single-threaded, single-connection, single-everything transfer.

The problem won't manifest with netperf, even when run for an extended
period (60 seconds). netperf is probably too smart, exploiting some
form of parallelism.
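For illustration, a "single-everything" transfer of the kind described
above can be sketched as follows. This is a hypothetical stand-in, not
the actual t-chargen-* scripts from the tarball: one thread streams a
fixed amount of printable chargen-style data over a single TCP
connection on localhost, and the client reads until EOF. All names
(HOST, NBYTES, serve) are illustrative.

```python
import socket
import threading

HOST = "127.0.0.1"
NBYTES = 1 << 20  # 1 MiB over a single TCP connection

def serve(listener):
    # Accept exactly one connection, stream NBYTES of printable
    # chargen-style data over it, then close (EOF ends the transfer).
    conn, _ = listener.accept()
    with conn:
        payload = bytes(range(33, 127)) * 1024
        sent = 0
        while sent < NBYTES:
            sent += conn.send(payload[:NBYTES - sent])

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind((HOST, 0))  # ephemeral port
listener.listen(1)
port = listener.getsockname()[1]

t = threading.Thread(target=serve, args=(listener,))
t.start()

received = 0
with socket.create_connection((HOST, port)) as c:
    while chunk := c.recv(65536):
        received += len(chunk)

t.join()
listener.close()
print(received)  # 1048576
```

Running the real test requires the veth/namespace/IPsec topology from
t-all-init; this sketch only shows the traffic pattern (one thread,
one connection, one bulk stream) that triggers the slowdown.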

Oh, and if the transfer rate is absolutely *zero* (esp. for 
xinetd-chargen), that just means that it attempts DNS name resolution. 
Just wait a few seconds in that case for the transfer to start.
