Message-ID: <alpine.DEB.2.02.1605160127430.1574@nftneq.ynat.uz>
Date:	Mon, 16 May 2016 01:46:25 -0700 (PDT)
From:	David Lang <david@...g.hm>
To:	Roman Yeryomin <leroi.lists@...il.com>
cc:	Dave Taht <dave.taht@...il.com>,
	make-wifi-fast@...ts.bufferbloat.net,
	Rafał Miłecki <zajec5@...il.com>,
	ath10k <ath10k@...ts.infradead.org>,
	"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
	"codel@...ts.bufferbloat.net" <codel@...ts.bufferbloat.net>,
	OpenWrt Development List <openwrt-devel@...ts.openwrt.org>,
	Felix Fietkau <nbd@....name>
Subject: Re: [Make-wifi-fast] OpenWRT wrong adjustment of fq_codel defaults
 (Was: [Codel] fq_codel_drop vs a udp flood)

On Mon, 16 May 2016, Roman Yeryomin wrote:

> On 16 May 2016 at 11:12, David Lang <david@...g.hm> wrote:
>> On Mon, 16 May 2016, Roman Yeryomin wrote:
>>
>>> On 6 May 2016 at 22:43, Dave Taht <dave.taht@...il.com> wrote:
>>>>
>>>> On Fri, May 6, 2016 at 11:56 AM, Roman Yeryomin <leroi.lists@...il.com>
>>>> wrote:
>>>>>
>>>>> On 6 May 2016 at 21:43, Roman Yeryomin <leroi.lists@...il.com> wrote:
>>>>>>
>>>>>> On 6 May 2016 at 15:47, Jesper Dangaard Brouer <brouer@...hat.com>
>>>>>> wrote:
>>>>>>>
>>>>>>>
>>>
>>>> That is too low a limit for normal use, too. And for the purpose of
>>>> this particular UDP test, flows=16 is ok, but not ideal.
>>>
>>>
>>> I played with different combinations; it doesn't make any
>>> (significant) difference: 20-30 Mbps, not more.
>>> What numbers would you propose?
>>
>>
>> How many different flows did you have going at once? I believe the
>> reason for higher numbers isn't throughput, but allowing more flows to
>> be isolated from each other. If you have too few buckets, different
>> flows will end up being combined into one bucket, so one will affect
>> the other more.
>
> I'm testing with one flow; I never saw better performance with more
> flows (e.g. -P8 with iperf3).

The issue isn't performance; it's isolating a DNS request from a VoIP flow
from a streaming video flow from a DVD image download.

The question is how many buckets you need to isolate these in practice;
that depends on how many flows you have. The default was 1024 buckets, but
it was changed to 128 for low-memory devices, and that lower value was then
made the default, even for devices with plenty of memory.
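
For illustration (my arithmetic, not numbers from the thread): if the
qdisc hashes flows uniformly into b buckets, the chance that a given flow
shares a bucket with at least one of n-1 others is 1 - (1 - 1/b)^(n-1).
A quick C sketch of what that means for 128 vs 1024 buckets:

/* Rough estimate of flow isolation vs. bucket count, assuming a
 * uniform hash.  Build with: cc -o buckets buckets.c -lm */
#include <math.h>
#include <stdio.h>

static double collision_prob(int flows, int buckets)
{
	/* chance a given flow shares its bucket with another flow */
	return 1.0 - pow(1.0 - 1.0 / buckets, flows - 1);
}

int main(void)
{
	const int flows[] = { 10, 50, 100, 200 };
	const int sizes[] = { 128, 1024 };

	for (unsigned i = 0; i < sizeof(flows) / sizeof(flows[0]); i++)
		for (unsigned j = 0; j < sizeof(sizes) / sizeof(sizes[0]); j++)
			printf("%3d flows / %4d buckets: %4.1f%% chance of sharing\n",
			       flows[i], sizes[j],
			       100.0 * collision_prob(flows[i], sizes[j]));
	return 0;
}

At 100 flows, 128 buckets gives a given flow roughly a 54% chance of
sharing its bucket with another flow; 1024 buckets drops that to about 9%.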

I'm wondering: instead of trying to size this based on device memory, could
it be resized on the fly, growing when too many flows/collisions are
detected?
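
Something along these lines, purely as a sketch (none of this is real
fq_codel code; the struct, the 10% threshold, and the 4096 cap are all
made up):

#include <stdio.h>

/* Hypothetical: count how often a packet's flow hashes into a bucket
 * already occupied by a different flow, and double the table when
 * that rate stays high. */
struct fq_sketch {
	unsigned int buckets;		/* current flow-table size */
	unsigned int enqueues;		/* packets since the last check */
	unsigned int collisions;	/* enqueues that hit a busy bucket */
};

static void fq_maybe_grow(struct fq_sketch *fq)
{
	/* made-up policy: grow when >10% of recent enqueues collided,
	 * capped at 4096 buckets */
	if (fq->enqueues >= 1024 &&
	    fq->collisions * 10 > fq->enqueues &&
	    fq->buckets < 4096) {
		fq->buckets *= 2;
		/* ...reallocate the table and rehash live flows here... */
	}
	fq->enqueues = 0;
	fq->collisions = 0;
}

int main(void)
{
	struct fq_sketch fq = { .buckets = 128, .enqueues = 2000, .collisions = 400 };

	fq_maybe_grow(&fq);			/* 20% collision rate */
	printf("buckets now: %u\n", fq.buckets);	/* prints 256 */
	return 0;
}

The rehash cost under load is the obvious catch; doing it incrementally, or
only between bursts, might keep it cheap enough.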

David Lang
