Date:	Wed, 31 Aug 2011 09:28:14 -0400
From:	Jim Gettys <jg@...edesktop.org>
To:	Andrew McGregor <andrewmcgr@...il.com>
CC:	Adrian Chadd <adrian@...ebsd.org>,
	Tom Herbert <therbert@...gle.com>,
	"Luis R. Rodriguez" <mcgrof@...il.com>,
	Dave Taht <dave.taht@...il.com>,
	linux-wireless <linux-wireless@...r.kernel.org>,
	Matt Smith <smithm@....qualcomm.com>,
	Kevin Hayes <hayes@....qualcomm.com>,
	Derek Smithies <derek@...ranet.co.nz>, netdev@...r.kernel.org
Subject: Re: BQL crap and wireless

On 08/30/2011 05:47 PM, Andrew McGregor wrote:
> On 31/08/2011, at 1:58 AM, Jim Gettys wrote:
>
>> On 08/29/2011 11:42 PM, Adrian Chadd wrote:
>>> On 30 August 2011 11:34, Tom Herbert <therbert@...gle.com> wrote:
>>>
>>> C(P) is going to be quite variable - a full frame retransmit of a 4ms
>>> long aggregate frame is SUM(exponential backoff, grab the air,
>>> preamble, header, 4ms, etc. for each pass.)
>>>
>> It's not clear to me that doing heroic measures to compute the cost is
>> going to be worthwhile due to the rate at which the costs can change on
>> wireless; just getting into the rough ballpark may be enough. But
>> buffering algorithms and AQM algorithms are going to need an estimate of
>> the *time* it will take to transmit data, more than # of bytes or packets.
> That's not heroic measures; mac80211 needs all the code to calculate these times anyway, it's just a matter of collecting together some things we already know and calling the right function.

Fine; if it's easy, accurate is better (presuming the costs get
recalculated when circumstances change). We'll also need the amount of
data being transmitted; more precisely, it is the rate of transmission
(the rate at which the buffers are draining) that we'll likely need.
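
To make that concrete, here's a strawman of a per-attempt airtime
estimate along the lines Adrian sketched above. All names and overhead
constants here are mine, not mac80211's; a real implementation would
pull the rate and the per-PHY overheads from rate control and PHY
state:

/* Hypothetical sketch: airtime to send 'len' bytes at 'rate_kbps',
 * paying the contention + preamble + payload cost once per pass,
 * with 'retries' full retransmit passes on top of the first try. */
static unsigned int est_tx_airtime_us(unsigned int len,
                                      unsigned int rate_kbps,
                                      unsigned int retries)
{
        const unsigned int preamble_us = 20;     /* placeholder preamble */
        const unsigned int ifs_backoff_us = 100; /* placeholder IFS + mean backoff */
        unsigned int payload_us = (len * 8 * 1000) / rate_kbps;

        return (retries + 1) * (ifs_backoff_us + preamble_us + payload_us);
}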

Here's what I've gleaned from reading "RED in a Different Light", Van
Jacobson's MITRE talk, and several conversations with Kathleen Nichols
and Van: AQM algorithms that can handle variable-bandwidth environments
will need to know the rate at which buffers empty. That's the direction
they are going with their experiments on a "RED light" algorithm.
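
A minimal strawman of what "knowing the rate at which buffers empty"
could look like (my names, not anything from their experiments): sample
bytes dequeued against elapsed time and keep a smoothed estimate:

struct drain_est {
        unsigned long long last_us;  /* time of previous dequeue */
        unsigned long rate_bps;      /* smoothed drain rate, bytes/sec */
};

static void drain_est_dequeue(struct drain_est *de, unsigned int bytes,
                              unsigned long long now_us)
{
        unsigned long long dt_us = now_us - de->last_us;
        unsigned long sample;

        if (dt_us == 0)
                return;
        sample = (unsigned long)((bytes * 1000000ULL) / dt_us);
        /* EWMA with gain 1/8, in the style of TCP's srtt smoothing */
        de->rate_bps = de->rate_bps - (de->rate_bps >> 3) + (sample >> 3);
        de->last_us = now_us;
}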

The fundamental insight as to why classic RED can't work in the
wireless environment is that the instantaneous queue length carries
little actual information; classic RED is tuned using the queue length
as its basic parameter. Their belief is that workable algorithms need
to track the running *minimum* of the queue length over time: you want
to keep the buffers small on a longer-term basis, so that they can both
absorb transients (which is their reason for existence) and keep the
latency typically low. The additional major challenge we face that core
routers do not is the highly variable traffic of mixed mice and
elephants. Only time will tell what actually works.
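
As a strawman for the minimum-over-time idea (again, my names, not
theirs): track the smallest queue occupancy seen in each fixed window,
and publish it as the window rolls over:

#define QMIN_WINDOW_US 100000ULL  /* placeholder: 100 ms window */

struct qmin_track {
        unsigned long long window_start_us;
        unsigned int win_min;   /* minimum seen in the current window */
        unsigned int last_min;  /* minimum from the previous window */
};

static void qmin_sample(struct qmin_track *qt, unsigned int qlen,
                        unsigned long long now_us)
{
        if (now_us - qt->window_start_us > QMIN_WINDOW_US) {
                qt->last_min = qt->win_min;  /* publish last window's min */
                qt->win_min = qlen;
                qt->window_start_us = now_us;
        } else if (qlen < qt->win_min) {
                qt->win_min = qlen;
        }
}

An AQM built on something like this would react to last_min staying
above a small threshold, rather than to the instantaneous spikes a
transient burst produces.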

So in an environment in which the rate of transmission is highly
variable, such as wireless, or even possibly modern broadband with
PowerBoost, classic RED or similar algorithms that do not take the
buffer drain rate into account cannot possibly hack it properly.
                        - Jim

