Date:	Mon, 16 May 2016 07:23:02 -0700
From:	Eric Dumazet <eric.dumazet@...il.com>
To:	Roman Yeryomin <leroi.lists@...il.com>
Cc:	Rajkumar Manoharan <rmanohar@...eaurora.org>,
	Michal Kazior <michal.kazior@...to.com>,
	make-wifi-fast@...ts.bufferbloat.net,
	Rafał Miłecki <zajec5@...il.com>,
	ath10k <ath10k@...ts.infradead.org>,
	"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
	"codel@...ts.bufferbloat.net" <codel@...ts.bufferbloat.net>,
	OpenWrt Development List <openwrt-devel@...ts.openwrt.org>,
	Felix Fietkau <nbd@....name>
Subject: Re: [Make-wifi-fast] OpenWRT wrong adjustment of fq_codel defaults
 (Was: [Codel] fq_codel_drop vs a udp flood)

On Mon, 2016-05-16 at 11:14 +0300, Roman Yeryomin wrote:

> So, very close to "as before": 900Mbps UDP, 750 TCP.
> But still, I was expecting performance improvements from latest ath10k
> code, not regressions.
> I know that hw is capable of 800Mbps TCP, which I'm targeting.

One flow can reach 800Mbps.

To get this, a simple pfifo is enough.

But _if_ you also want to get decent results with hundreds of flows
under stress, you need something else, and I do not see how that
'something else' would come for free.

You will see some 'regressions' because of the additional cpu cost,
unless you have spare cpu cycles and memory to burn.
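
To illustrate where those extra cycles go, here is a userspace toy
(nothing to do with the real qdisc code): a plain fifo enqueue is a
store and a tail bump, while a fair-queueing scheme has to hash the
5-tuple and touch per-flow state for every packet:

/* Toy userspace sketch, not kernel code: roughly where the extra
 * cycles go.  A pfifo enqueue is basically "store pointer, bump tail".
 * A fair-queueing scheme must hash the 5-tuple, pick a per-flow bucket
 * and update per-flow state for every packet on top of that.
 */
#include <stdint.h>
#include <stdio.h>

#define FIFO_LEN  16384
#define NFLOWS    1024
#define FLOW_LEN  256

struct toy_pkt  { uint32_t saddr, daddr; uint16_t sport, dport; };
struct toy_flow { struct toy_pkt *q[FLOW_LEN]; unsigned tail, backlog; };

struct toy_pkt *fifo_q[FIFO_LEN];
unsigned fifo_tail;
struct toy_flow flows[NFLOWS];

void fifo_enqueue(struct toy_pkt *p)	/* pfifo-like: ~2 memory ops */
{
	fifo_q[fifo_tail++ % FIFO_LEN] = p;
}

void fq_enqueue(struct toy_pkt *p)	/* fq-like: hash + flow state */
{
	uint32_t h = (p->saddr ^ p->daddr ^
		      ((uint32_t)p->sport << 16 | p->dport)) * 0x9e3779b1u;
	struct toy_flow *f = &flows[h % NFLOWS];

	f->q[f->tail++ % FLOW_LEN] = p;
	f->backlog++;	/* the real fq_codel also stamps enqueue time, etc. */
}

int main(void)
{
	struct toy_pkt p = { 0x0a000001, 0x0a000002, 1234, 5001 };

	fifo_enqueue(&p);
	fq_enqueue(&p);
	printf("enqueued one packet via both paths\n");
	return 0;
}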

If your goal is to get max throughput on a single TCP flow, in a clean
env and on cheap hardware, you absolutely should stick to pfifo. Nothing
could beat pfifo (well, pfifo could be improved with a lockless
implementation, but that would only matter if you have different cpus
queueing and dequeueing packets).
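
For illustration, the 'lockless pfifo' idea would look roughly like a
single-producer/single-consumer ring (untested sketch, names made up,
not the kernel implementation):

/* Untested sketch of the "lockless pfifo" idea: a single-producer /
 * single-consumer ring where the enqueue cpu only writes 'tail' and
 * the dequeue cpu only writes 'head', so they never contend on a lock.
 * Only interesting when enqueue and dequeue run on different cpus.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

#define RING_SIZE 1024			/* must be a power of two */

struct spsc_ring {
	_Atomic size_t head;		/* written only by the consumer */
	_Atomic size_t tail;		/* written only by the producer */
	void *slot[RING_SIZE];
};

bool spsc_enqueue(struct spsc_ring *r, void *pkt)
{
	size_t t = atomic_load_explicit(&r->tail, memory_order_relaxed);
	size_t h = atomic_load_explicit(&r->head, memory_order_acquire);

	if (t - h == RING_SIZE)
		return false;		/* full: tail-drop, like pfifo */
	r->slot[t & (RING_SIZE - 1)] = pkt;
	atomic_store_explicit(&r->tail, t + 1, memory_order_release);
	return true;
}

void *spsc_dequeue(struct spsc_ring *r)
{
	size_t h = atomic_load_explicit(&r->head, memory_order_relaxed);
	size_t t = atomic_load_explicit(&r->tail, memory_order_acquire);

	if (h == t)
		return NULL;		/* empty */
	void *pkt = r->slot[h & (RING_SIZE - 1)];
	atomic_store_explicit(&r->head, h + 1, memory_order_release);
	return pkt;
}

int main(void)
{
	static struct spsc_ring ring;
	int dummy = 42;

	spsc_enqueue(&ring, &dummy);
	printf("dequeued %d\n", *(int *)spsc_dequeue(&ring));
	return 0;
}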

But I guess your issues mostly come from too small a packet limit, or
too big TCP windows.
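
You can check the first with 'tc -s qdisc show dev <dev>' (look at the
drop counters), and the second by capping the sender's socket buffers
in your test tool (iperf has -w for this). A rough C equivalent of that
cap, just as an example:

/* Quick way to check the "TCP window too big for the queue" theory:
 * cap the sender's socket buffer so the amount of data in flight
 * cannot exceed it (iperf -w / netperf expose the same knob).
 */
#include <sys/socket.h>
#include <netinet/in.h>
#include <stdio.h>

int main(void)
{
	int fd = socket(AF_INET, SOCK_STREAM, 0);
	int wanted = 64 * 1024;		/* example cap: 64 KB */
	int effective = 0;
	socklen_t len = sizeof(effective);

	if (fd < 0) {
		perror("socket");
		return 1;
	}
	if (setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &wanted, sizeof(wanted)) < 0)
		perror("setsockopt(SO_SNDBUF)");

	/* Linux doubles the requested value to account for bookkeeping. */
	getsockopt(fd, SOL_SOCKET, SO_SNDBUF, &effective, &len);
	printf("SO_SNDBUF: asked %d, got %d\n", wanted, effective);
	return 0;
}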

Basically, if you test a single TCP flow, fq_codel should behave like a
pfifo, unless maybe your kernel has a very slow ktime_get_ns()
implementation [1]

If you set a limit of 1024 packets on pfifo, you'll have the same
number of drops and lower TCP throughput.

[1] We should probably have a self-test to estimate the cost of
ktime_get_ns().
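
Lacking that, a crude userspace proxy is to time
clock_gettime(CLOCK_MONOTONIC) in a tight loop: on most configurations
it reads the same clocksource, so a per-call cost far above a few tens
of ns suggests a slow clocksource (e.g. an MMIO timer). Something like:

/* Rough userspace proxy for the cost of ktime_get_ns(): time a tight
 * loop of clock_gettime(CLOCK_MONOTONIC), which on most setups reads
 * the same clocksource.
 */
#include <stdio.h>
#include <time.h>

#define ITERS 1000000

int main(void)
{
	struct timespec start, end, tmp;
	long long ns;

	clock_gettime(CLOCK_MONOTONIC, &start);
	for (long i = 0; i < ITERS; i++)
		clock_gettime(CLOCK_MONOTONIC, &tmp);
	clock_gettime(CLOCK_MONOTONIC, &end);

	ns = (end.tv_sec - start.tv_sec) * 1000000000LL
	     + (end.tv_nsec - start.tv_nsec);
	printf("~%lld ns per clock_gettime() call\n", ns / ITERS);
	return 0;
}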


