Date:	Tue, 30 Aug 2011 09:44:56 +0800
From:	Adrian Chadd <adrian@...ebsd.org>
To:	Jim Gettys <jg@...edesktop.org>
Cc:	"Luis R. Rodriguez" <mcgrof@...il.com>,
	Dave Taht <dave.taht@...il.com>,
	Tom Herbert <therbert@...gle.com>,
	linux-wireless <linux-wireless@...r.kernel.org>,
	Andrew McGregor <andrewmcgr@...il.com>,
	Matt Smith <smithm@....qualcomm.com>,
	Kevin Hayes <hayes@....qualcomm.com>,
	Derek Smithies <derek@...ranet.co.nz>, netdev@...r.kernel.org
Subject: Re: BQL crap and wireless

On 30 August 2011 09:22, Jim Gettys <jg@...edesktop.org> wrote:

Note: I'm knee-deep in the aggregation TX/RX path at the present time
- I'm porting the Atheros 802.11n TX aggregation code to FreeBSD.

> Computing the buffering in bytes is better than in packets; but since on
> wireless multicast/broadcast is transmitted at a radically different
> rate than other packets, I expect something based on time is really the
> long-term solution; and only the driver has any idea how long a packet
> of a given flavour will likely take to transmit.

And the driver (hopefully!) can find out how long the packet -did-
take to transmit.
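
(For illustration, here's roughly what a time-based limit looks like:
estimate airtime from the frame length and the PHY rate that rate
control selected. This is just a sketch - the helper name and the
kbit/s convention are mine, not mac80211 or ath(4) API - and a real
driver would also have to account for preamble, protection exchanges
and A-MPDU framing overhead:

    #include <stdint.h>

    /*
     * Estimate a frame's airtime in microseconds from its length and
     * the PHY rate in kbit/s (e.g. 6500 for MCS0, 65000 for MCS7).
     * bits * 1000 / kbps == microseconds.
     */
    static uint32_t est_airtime_usec(uint32_t len_bytes, uint32_t rate_kbps)
    {
            return (uint32_t)(((uint64_t)len_bytes * 8 * 1000) / rate_kbps);
    }

A queue limit expressed in these units - "don't queue more than N
usec of estimated airtime" - then behaves the same for a multicast
frame at 1Mbit as for a unicast frame at MCS15.)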

There are a bunch of different reasons why a packet may not get
through, or why it can take so long. If (say) an aggregate allows 10
hardware (long, short) retries at a high MCS rate, and then 10
software retries, that's up to 100 attempts at transmitting the
sub-frames in some form. It may also involve 10 attempts at an RTS
exchange. But it may also be 10 attempts at transmitting the -whole-
frame. In the case of a long aggregate (say the upper bound of 4ms,
easily reached when lower MCS rates are selected), this can take a
long time.
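
(To put numbers on that worst case - a tiny standalone calculation,
nothing driver-specific:

    #include <stdio.h>

    int main(void)
    {
            int hw_retries = 10;    /* hardware retries per attempt */
            int sw_retries = 10;    /* software retries on top      */
            int agg_usec   = 4000;  /* 4 ms aggregate               */

            int attempts = hw_retries * sw_retries;   /* up to 100  */

            /* worst case: the full aggregate occupies the air each time */
            printf("up to %d attempts, ~%d ms of airtime\n",
                   attempts, attempts * agg_usec / 1000);
            return 0;
    }

i.e. a single unlucky 4ms aggregate can, in the worst case, tie up
the air for several hundred milliseconds before the driver gives up.)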

I'm occasionally seeing this in my testing, where the block-ack isn't
seen by the sender and the whole aggregate is thus retransmitted in
its entirety. This causes occasional bumps in the measured latency.
The obvious solution is to not form such large aggregates at lower MCS
rates, but even single events have an impact on latency.
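
(One way of expressing "don't form such large aggregates at lower MCS
rates" is to derive the byte cap from a duration budget; the helper
below is hypothetical, not existing code:

    #include <stdint.h>

    /*
     * How many bytes fit in max_agg_usec of airtime at rate_kbps?
     * e.g. at MCS0 (6500 kbit/s) a 4000us budget allows ~3250 bytes,
     * so the retransmit penalty for a lost block-ack stays bounded.
     */
    static uint32_t max_agg_bytes(uint32_t rate_kbps, uint32_t max_agg_usec)
    {
            return (uint32_t)(((uint64_t)rate_kbps * max_agg_usec) / 8000);
    }
)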

I'm not at the point yet where I can start tinkering with rate
control and queue management in this way, but the idea of asking the
rate control code to manage per-node and overall airtime has crossed
my mind. That is, the rate control code can see how successful
transmissions to a given node are (at given streams, rates, antenna
configurations, etc.) and then enforce aggregate size and
retransmission limits there. Since a decision for any given node will
affect the latency of all subsequent nodes, it makes sense for the
rate control code to keep a global view of the airtime involved as
well as the (current) per-node logic.
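
(A sketch of what that accounting could look like - all the names and
the "shorten the cap for airtime hogs" policy are made up to show the
shape, not a proposal for the actual interface:

    #include <stdint.h>

    struct rc_global {
            uint64_t airtime_usec;  /* recent airtime across all nodes */
    };

    struct rc_node {
            struct rc_global *glob;
            uint64_t airtime_usec;  /* recent airtime for this node    */
            uint32_t max_agg_usec;  /* aggregate duration cap          */
    };

    /* Called from the TX completion path with the measured airtime. */
    static void rc_tx_complete(struct rc_node *n, uint32_t airtime_usec)
    {
            n->airtime_usec       += airtime_usec;
            n->glob->airtime_usec += airtime_usec;

            /* a node eating over half the recent airtime gets a
             * shorter aggregate cap, so it hurts the others less */
            if (n->airtime_usec * 2 > n->glob->airtime_usec)
                    n->max_agg_usec = 2000;
            else
                    n->max_agg_usec = 4000;
    }
)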

My 2c, which'll be worth more once I've worked the 11n TX A-MPDU
kinks out of the FreeBSD driver,


Adrian
