Message-ID: <alpine.DEB.1.10.1103010137400.7942@uplift.swm.pp.se>
Date:	Tue, 1 Mar 2011 01:46:51 +0100 (CET)
From:	Mikael Abrahamsson <swmike@....pp.se>
To:	John Heffner <johnwheffner@...il.com>
cc:	Bill Sommerfeld <wsommerfeld@...gle.com>,
	Hagen Paul Pfeifer <hagen@...u.net>,
	Albert Cahalan <acahalan@...il.com>,
	Jussi Kivilinna <jussi.kivilinna@...et.fi>,
	Eric Dumazet <eric.dumazet@...il.com>,
	linux-kernel <linux-kernel@...r.kernel.org>,
	netdev@...r.kernel.org
Subject: Re: txqueuelen has wrong units; should be time

On Mon, 28 Feb 2011, John Heffner wrote:

> Right... while I generally agree that a fixed-length drop-tail queue 
> isn't optimal, isn't this problem what the various AQM schemes try to 
> solve?

I am not an expert on exactly how Linux does this, but on Cisco gear, for 
instance on ATM interfaces, there are two stages of queueing. One is the 
"hardware queue", a FIFO going into the ATM framer. If one wants low CPU 
usage, this queue needs to be deep so that multiple packets can be put 
there per interrupt. Since AQM works before this stage, it also means the 
low-latency queue sees higher latency, because its packets end up behind 
larger packets in the hw queue.
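
To put a rough number on that effect, here is a back-of-the-envelope 
sketch in Python (the link rate and queue depth are made-up example 
values, not anything measured):

def hw_queue_delay_ms(queue_pkts, pkt_bytes, link_bps):
    # Worst case: a newly enqueued packet waits behind a full hw FIFO
    # of large packets before it even reaches the wire.
    bits_ahead = queue_pkts * pkt_bytes * 8
    return bits_ahead / link_bps * 1000.0

# 128 MTU-sized packets already sitting in the hw FIFO of a 10 Mbit/s link:
print(hw_queue_delay_ms(128, 1500, 10e6))   # ~153 ms that AQM cannot touch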

So at what level does the AQM work in Linux? Does it work similarly, in 
that txqueuelen is a FIFO queue towards the hardware that AQM feeds 
packets into?

Also, when one uses WRED the thinking is generally to keep the average 
queue length down while still allowing for bursts, by dynamically changing 
the drop probability and where it is applied. When there is no standing 
queue, allow a big queue (so it can fill up if needed), but if the queue 
stays large for several seconds, start applying WRED to bring it down.
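
For illustration, a generic RED-style sketch in Python of that idea: drop 
probability driven by a moving average of the queue length rather than the 
instantaneous one. This is a simplified stand-in, not Cisco's actual WRED 
implementation, and the thresholds are arbitrary example values.

import random

class RedQueueSketch:
    def __init__(self, min_th, max_th, max_p=0.1, weight=0.002):
        self.min_th, self.max_th = min_th, max_th   # avg-queue thresholds (pkts)
        self.max_p = max_p                          # drop prob at max_th
        self.weight = weight                        # EWMA weight
        self.avg = 0.0

    def should_drop(self, cur_qlen):
        # Track a moving average of the queue length, so short bursts
        # pass through but a queue that stays large starts taking drops.
        self.avg += self.weight * (cur_qlen - self.avg)
        if self.avg < self.min_th:
            return False                 # small average queue: never drop
        if self.avg >= self.max_th:
            return True                  # persistently large: drop
        p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
        return random.random() < p       # in between: probabilistic drop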

There is generally no need at all to constantly buffer more than 50 ms of 
data; at that point it's better to start selectively dropping it. In times 
of burstiness (perhaps when re-routing traffic) there is a need to buffer 
200-500 ms of data for perhaps 1-2 seconds before things stabilize.
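
As a rough illustration of why a fixed packet count cannot express both of 
those targets (the link rates below are examples only):

def pkts_for_delay(delay_ms, link_bps, pkt_bytes=1500):
    # How many full-size packets fit in delay_ms worth of transmission time.
    return int(delay_ms / 1000.0 * link_bps / (pkt_bytes * 8))

for rate in (10e6, 100e6, 1e9):
    print("%5.0f Mbit/s: 50 ms = %6d pkts, 500 ms = %6d pkts"
          % (rate / 1e6, pkts_for_delay(50, rate), pkts_for_delay(500, rate)))
# The same packet-count limit means 50 ms on one link and several
# seconds on another, which is the core of the "wrong units" problem.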

So one queueing scheme and one queue limit isn't going to solve this; 
there needs to be some dynamic behaviour built into the system for it to 
work well.

AQM needs to feed into a relatively short hw queue, and AQM needs to 
happen on output also when the traffic is sourced from the box itself, not 
only when it is routed. It would also help if the default were to reserve, 
let's say, 25% of the bandwidth for smaller packets (< 200 bytes or so), 
which generally are interactive traffic or ACKs.
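
A hedged sketch of that last idea: two classes that each get a byte budget 
per scheduling round, 25%/75%. The 200-byte threshold comes from above; 
the round budget and the scheduler itself are illustrative and not an 
existing Linux qdisc.

from collections import deque

SMALL_PKT = 200          # bytes; "interactive or ACK" threshold
ROUND_BYTES = 12000      # byte budget handed out per scheduling round

class SmallPacketShareSketch:
    def __init__(self, small_share=0.25):
        self.queues = {"small": deque(), "bulk": deque()}
        self.credit = {"small": 0.0, "bulk": 0.0}
        self.share = {"small": small_share, "bulk": 1.0 - small_share}

    def enqueue(self, pkt_len):
        cls = "small" if pkt_len < SMALL_PKT else "bulk"
        self.queues[cls].append(pkt_len)

    def dequeue_round(self):
        # Hand out this round's byte credit and drain what fits,
        # small packets first.
        sent = []
        for cls in ("small", "bulk"):
            self.credit[cls] += ROUND_BYTES * self.share[cls]
            q = self.queues[cls]
            while q and q[0] <= self.credit[cls]:
                pkt = q.popleft()
                self.credit[cls] -= pkt
                sent.append((cls, pkt))
            if not q:
                self.credit[cls] = 0.0   # don't bank credit while idle
        return sent

# Example: ACK-sized packets queued behind bulk data still go out first.
sched = SmallPacketShareSketch()
for length in (1500, 1500, 64, 1500, 64, 1500):
    sched.enqueue(length)
print(sched.dequeue_round())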

-- 
Mikael Abrahamsson    email: swmike@....pp.se
