Date:	Wed, 23 Apr 2008 10:41:45 -0700
From:	"John Heffner" <johnwheffner@...il.com>
To:	"Rick Jones" <rick.jones2@...com>
Cc:	"David Miller" <davem@...emloft.net>, netdev@...r.kernel.org
Subject: Re: Socket buffer sizes with autotuning

On Wed, Apr 23, 2008 at 10:24 AM, Rick Jones <rick.jones2@...com> wrote:
> John Heffner wrote:
> > Receive-side autotuning by design will attempt to grow the rcvbuf
> > (adjusting for overhead) to twice the observed cwnd[1].  When the
> > sender keeps growing its window to fill up your interface queue, the
> > receiver will continue to grow its window to let the sender do what it
> > wants.  It's not the receiver's job to do congestion control.
> >
>
>  Then why is it starting small and growing the advertised window?

Because the receiver tracks the sender's behavior.  (Also because of
the algorithm for figuring out the buffer overhead ratio, but that's a
different story.)
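A rough sketch of that receive-side idea (this is an illustration of the behavior described above, not the kernel's actual implementation): the receiver measures how much data arrived in the last RTT and grows the rcvbuf toward twice that amount, scaled by an estimated memory-overhead ratio.

```python
# Illustrative sketch of receive-side autotuning's core loop, NOT the
# Linux kernel code. The kernel tracks bytes copied to the application
# per RTT and sizes the buffer at roughly twice that, adjusted for the
# ratio of skb memory allocated to payload actually delivered.

def autotune_rcvbuf(bytes_in_last_rtt, overhead_ratio, rcvbuf, rcvbuf_max):
    """Return a new rcvbuf size: grow toward 2x the observed per-RTT data.

    overhead_ratio approximates allocated-memory / payload (the kernel
    estimates this dynamically); rcvbuf never shrinks here, only grows
    up to rcvbuf_max, mirroring the "track the sender" behavior.
    """
    target = int(2 * bytes_in_last_rtt * overhead_ratio)
    if target > rcvbuf:
        rcvbuf = min(target, rcvbuf_max)
    return rcvbuf
```

Because the target is derived from whatever the sender actually delivered, a sender that keeps inflating its window (e.g. into a bottleneck queue) drags the receive buffer up with it, which is exactly the effect Rick is observing.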


>  What concerns me is that we tell folks to rely on autotuning because it
> will set their socket buffers to the size they need, but it seems to be
> setting their socket buffers to very much more than they need.  Given that
> netperf couldn't have been receiving any more data in an RTT if it was
> always going at ~940 Mbit/s I'm still perplexed unless there is a positive
> feedback here - allow a greater window, this allows more data to be queued,
> which increases the RTT, which allows more data to be received in an "RTT"
> which allows a greater window...

It's true that careful hand-tuning can do better than autotuning in
some cases, due to the sub-optimality of congestion control.  This is
one such case, where you have an overbuffered bottleneck.  Another
case is where you have an underbuffered bottleneck, especially with a
large BDP.  You can calculate exactly the window size you need to fill
the bottleneck and set your send or receive buffer to throttle you to
exactly that window, so that you will get 100% utilization.  If you
let autotuning work, it will keep increasing your buffer as congestion
control asks for more until you overflow the buffer and take a loss.
Then, congestion control will have to back off, likely resulting in
under-utilization.
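Concretely, the hand-tuning calculation looks like this (link speed and RTT here are hypothetical values for illustration; on Linux, note that the kernel doubles the value passed to SO_RCVBUF and also reserves part of the buffer for overhead, so real tuning needs some slack):

```python
# Pin the receive buffer to the path's bandwidth-delay product so the
# advertised window can never exceed what the bottleneck can carry.
# LINK_BPS and RTT_S are assumed out-of-band knowledge of the path.
import socket

LINK_BPS = 1_000_000_000   # 1 Gbit/s bottleneck (assumed)
RTT_S = 0.05               # 50 ms round-trip time (assumed)

bdp_bytes = int(LINK_BPS / 8 * RTT_S)   # ~6.25 MB fills the pipe exactly

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Setting SO_RCVBUF explicitly (before connect) disables receive-side
# autotuning for this socket, capping the window near bdp_bytes.
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, bdp_bytes)
```

With the buffer clamped at the BDP, the sender is throttled by the window before it can overflow an underbuffered bottleneck, avoiding the loss-and-backoff cycle that autotuning plus AIMD would otherwise hit.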

In these cases, hand-tuning can do better because you have out-of-band
knowledge of the path.  Also, smarter congestion control algorithms
can do better than traditional AIMD, but it's a very hard problem to
do this well in general.  Autotuning's job is mainly just to grow the
buffer sizes to as much as congestion control needs to do its thing.

  -John
