Date:	Thu, 24 Apr 2008 15:39:09 -0700
From:	"John Heffner" <johnwheffner@...il.com>
To:	"Andi Kleen" <andi@...stfloor.org>
Cc:	"David Miller" <davem@...emloft.net>, rick.jones2@...com,
	netdev@...r.kernel.org
Subject: Re: Socket buffer sizes with autotuning

On Thu, Apr 24, 2008 at 3:21 PM, Andi Kleen <andi@...stfloor.org> wrote:
> David Miller <davem@...emloft.net> writes:
>
>  >> What is your interface txqueuelen and mtu?  If you have a very large
>  >> interface queue, TCP will happily fill it up unless you are using a
>  >> delay-based congestion controller.
>  >
>  > Yes, that's the fundamental problem with loss based congestion
>  > control.  If there are any queues in the path, TCP will fill them up.
>
>  That just means Linux does too much queueing by default.  Perhaps that
>  should be fixed. On Ethernet hardware the NIC TX queue should be
>  usually sufficient anyways I would guess. Do we really need the long
>  qdisc queue too?
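
For reference, the txqueuelen being asked about can be read from userspace with the
SIOCGIFTXQLEN ioctl (the same value ifconfig reports as txqueuelen).  This is just a
minimal sketch, and "eth0" is an assumed device name:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>

int main(void)
{
	struct ifreq ifr;
	int fd = socket(AF_INET, SOCK_DGRAM, 0);

	if (fd < 0) {
		perror("socket");
		return 1;
	}
	memset(&ifr, 0, sizeof(ifr));
	strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);	/* assumed device name */
	if (ioctl(fd, SIOCGIFTXQLEN, &ifr) < 0) {
		perror("SIOCGIFTXQLEN");
		close(fd);
		return 1;
	}
	printf("%s txqueuelen = %d packets\n", ifr.ifr_name, ifr.ifr_qlen);
	close(fd);
	return 0;
}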

The default on most ethernet devices used to be 100 packets.  This is
pretty short when your flow's window size is thousands of packets.
It's especially a killer for large BDP flows because it causes
slow-start to cut out early and you have to ramp up slowly with
congestion avoidance.  (This effect is mitigated somewhat by cubic,
etc., or even better by limited slow-start, but it's still very
significant.)
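
To put rough numbers on it (the link rate and RTT below are assumptions for
illustration, not measurements from this thread): a 1 Gbit/s path with 100 ms
RTT and 1500-byte packets has a bandwidth-delay product of roughly 8300
packets, so a 100-packet queue is only about 1% of the window such a flow
needs.

#include <stdio.h>

int main(void)
{
	/* Assumed example path, not figures from this thread. */
	double link_bps = 1e9;		/* 1 Gbit/s link */
	double rtt_sec = 0.100;		/* 100 ms round trip */
	double pkt_bytes = 1500.0;	/* full-size Ethernet frame */

	double bdp_pkts = link_bps * rtt_sec / (8.0 * pkt_bytes);
	printf("BDP ~= %.0f packets; a 100-packet qdisc is ~%.1f%% of it\n",
	       bdp_pkts, 100.0 * 100.0 / bdp_pkts);
	return 0;
}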

I have in the past used a hack that suppressed congestion notification
when overflowing the local interface queue.  This is a very simple
approach, and works fairly well, but doesn't have the property of
converging toward fairness if multiple flows are competing for
bandwidth at the local interface.  It might be possible to cook
something up that's smarter.
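
Roughly, the idea is something like the following.  This is only a hedged
sketch of the approach described above, not the actual hack; the struct and
function names are made up for illustration.  On a loss event, a drop known to
have happened at the local interface queue is retransmitted without the usual
multiplicative decrease:

#include <stdbool.h>
#include <stdio.h>

struct flow_state {
	unsigned int cwnd;	/* congestion window, in packets */
	unsigned int ssthresh;	/* slow-start threshold, in packets */
};

/* Invoked when a segment is declared lost.  dropped_locally is assumed to be
 * set when the drop happened in our own interface/qdisc queue. */
static void handle_loss(struct flow_state *f, bool dropped_locally)
{
	if (dropped_locally) {
		/* Suppress the congestion response; just let the segment be
		 * retransmitted.  As noted above, this does not converge to
		 * fairness when several local flows share the interface. */
		return;
	}
	/* Conventional loss-based response: halve the window. */
	f->ssthresh = f->cwnd > 1 ? f->cwnd / 2 : 1;
	f->cwnd = f->ssthresh;
}

int main(void)
{
	struct flow_state f = { .cwnd = 1000, .ssthresh = 64 };

	handle_loss(&f, true);	/* local qdisc drop: cwnd stays at 1000 */
	handle_loss(&f, false);	/* network loss: cwnd halves to 500 */
	printf("cwnd = %u, ssthresh = %u\n", f.cwnd, f.ssthresh);
	return 0;
}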

  -John
