Message-ID: <CAB=NE6XrkdiZcGEDGuYe=SwLBhTm=Mt4NaPzjV9j_W-8sVosOA@mail.gmail.com>
Date:	Mon, 29 Aug 2011 14:10:45 -0700
From:	"Luis R. Rodriguez" <mcgrof@...il.com>
To:	Tom Herbert <therbert@...gle.com>
Cc:	linux-wireless <linux-wireless@...r.kernel.org>,
	Andrew McGregor <andrewmcgr@...il.com>,
	Matt Smith <smithm@....qualcomm.com>,
	Kevin Hayes <hayes@....qualcomm.com>,
	Dave Taht <dave.taht@...il.com>,
	Derek Smithies <derek@...ranet.co.nz>, netdev@...r.kernel.org
Subject: Re: BQL crap and wireless

On Mon, Aug 29, 2011 at 2:02 PM, Luis R. Rodriguez <mcgrof@...il.com> wrote:
> Hope this helps sum up the issue for 802.11 and what we are faced with.

I should elaborate a bit more here to make sure people understand that
the "bufferbloat" discussion assumes simply not retrying frames is a
good thing. This is incorrect. TCP's congestion control algorithm is
designed to deal with network conditions, not with the dynamic PHY
conditions. The dynamic PHY conditions are handled through a slew of
different means:

  * Rate control
  * Adaptive Noise Immunity (ANI)

Rate control is handled either in firmware or by the driver.
Typically rate control algorithms use some sort of metrics to make a
best guess at what rate a frame should be transmitted at. Minstrel
was the first to say -- ahh, the hell with it, I give up on modeling
-- and simply do trial and error: keep using the most reliable rate,
but keep testing different rates as you go. You fixate on the best
one by using an EWMA (exponentially weighted moving average).
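To make the idea concrete, here is a minimal sketch of the Minstrel-style approach described above: track per-rate success with an EWMA and pick the rate with the best expected goodput. All names and the EWMA weight are illustrative assumptions, not the mac80211 implementation.

```python
# Hypothetical sketch of Minstrel-style rate selection (not the real
# mac80211 code): per-rate success probability is smoothed with an
# EWMA, and the rate with the best expected goodput wins.

EWMA_WEIGHT = 0.75  # assumed weight given to the old average

def ewma(old, new, weight=EWMA_WEIGHT):
    """Exponentially weighted moving average."""
    return old * weight + new * (1.0 - weight)

class RateStats:
    def __init__(self, rate_mbps):
        self.rate_mbps = rate_mbps
        self.success_prob = 0.0  # EWMA of per-attempt success

    def update(self, attempts, successes):
        """Fold one sampling interval's tx results into the EWMA."""
        if attempts:
            self.success_prob = ewma(self.success_prob,
                                     successes / attempts)

    @property
    def throughput(self):
        # Expected goodput: raw rate scaled by success probability.
        return self.rate_mbps * self.success_prob

def best_rate(stats):
    """Fixate on the rate with the highest expected goodput."""
    return max(stats, key=lambda s: s.throughput)
```

In a real implementation you would also keep probing non-best rates a small fraction of the time, which is exactly the trial-and-error part.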

What I was arguing earlier was that perhaps the same approach can be
taken for the latency issues, under the assumption that the knobs are
queue size and software retries. In fact, the same principle might
also apply to the aggregation segment size.
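As a sketch of that suggestion: probe a few candidate retry limits (or queue sizes) by trial and error, keep an EWMA of the latency observed at each, and converge on the best one. Candidate values, names, and the EWMA weight are all assumptions for illustration.

```python
# Hypothetical sketch: adapt a software retry limit (or queue length)
# the Minstrel way, with an EWMA of observed latency per candidate
# setting. Purely illustrative, not an existing kernel API.

EWMA_WEIGHT = 0.9  # assumed smoothing weight

def ewma(old, new, weight=EWMA_WEIGHT):
    return old * weight + new * (1.0 - weight)

class RetryLimitProbe:
    def __init__(self, candidates=(2, 4, 8, 16)):
        # EWMA of latency (ms) observed at each candidate retry limit;
        # None means the setting has not been tried yet.
        self.latency = {c: None for c in candidates}

    def record(self, limit, latency_ms):
        old = self.latency[limit]
        self.latency[limit] = (latency_ms if old is None
                               else ewma(old, latency_ms))

    def best(self):
        # Trial and error: try every setting once, then fixate on the
        # one with the lowest smoothed latency.
        untried = [c for c, l in self.latency.items() if l is None]
        if untried:
            return untried[0]
        return min(self.latency, key=self.latency.get)
```

A real version would weigh latency against delivery rate rather than latency alone, but the fixation mechanism is the same.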

Now, ANI is hardware specific and adjusts hardware parameters based
on some known metrics. Today we use fixed thresholds for these, but I
wouldn't be surprised if taking minstrel-like guesses -- trial and
error with EWMA-based fixation -- would help here as well.
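That suggestion could look something like the following: keep an EWMA of the PHY error rate seen at each noise-immunity level, mostly run at the level with the lowest smoothed error rate, and occasionally probe other levels. The level count, probe probability, and all names here are hypothetical; real ANI lives in the driver/firmware.

```python
# Hypothetical sketch of minstrel-like ANI tuning: replace fixed
# thresholds with per-level EWMAs of the PHY error rate, exploiting
# the best level while occasionally probing others at random.

import random

EWMA_WEIGHT = 0.8  # assumed smoothing weight

def ewma(old, new, weight=EWMA_WEIGHT):
    return old * weight + new * (1.0 - weight)

class AniTuner:
    def __init__(self, levels=range(5), probe_prob=0.1):
        self.err = {lvl: 0.0 for lvl in levels}
        self.probe_prob = probe_prob

    def record(self, level, phy_errors_per_sec):
        """Fold an error-rate sample for one ANI level into its EWMA."""
        self.err[level] = ewma(self.err[level], phy_errors_per_sec)

    def pick(self, rng=random.random):
        if rng() < self.probe_prob:
            return random.choice(list(self.err))   # trial and error
        return min(self.err, key=self.err.get)     # exploit best level
```

Untried levels start at an error EWMA of zero, which naturally biases the tuner toward exploring them first.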

  Luis