Message-ID: <4FFF015A.5010809@freedesktop.org>
Date:	Thu, 12 Jul 2012 12:54:50 -0400
From:	Jim Gettys <jg@...edesktop.org>
To:	John Heffner <johnwheffner@...il.com>
CC:	Eric Dumazet <eric.dumazet@...il.com>, nanditad@...gle.com,
	netdev@...r.kernel.org, mattmathis@...gle.com,
	codel@...ts.bufferbloat.net, ncardwell@...gle.com,
	David Miller <davem@...emloft.net>
Subject: Re: [Codel] [RFC PATCH v2] tcp: TCP Small Queues

On 07/12/2012 12:44 PM, John Heffner wrote:
> On Thu, Jul 12, 2012 at 9:46 AM, Eric Dumazet <eric.dumazet@...il.com> wrote:
>> On Thu, 2012-07-12 at 09:33 -0400, John Heffner wrote:
>>> One general question: why a per-connection limit?  I haven't been
>>> following the bufferbloat conversation closely, so I may have missed
>>> some of the discussion.  But it seems that multiple connections will
>>> still cause longer queue times.
>> We already have a per-device limit, in qdisc.
>>
>> If you want to monitor several tcp sessions, I urge you to use a
>> controller for that, like codel or fq_codel.
>>
>> Experiments show that limiting to two TSO packets in qdisc per tcp flow
>> is enough to stop insane qdisc queueing, without impact on throughput
>> for people wanting fast tcp sessions.
>>
>> That's not solving the more general problem of having 1000 competing
>> flows.
> Right, AQM (and probably some modifications to the congestion control)
> is the more general solution.
>
> I guess I'm just trying to justify in my mind that the case of a small
> number of local connections is worth handling in this special way.  It
> seems like a generally reasonable thing, but it's definitely not a
> general solution to minimizing latency.  One thing worth noting: on a
> system routing traffic, local connections may be at a disadvantage
> relative to connections being forwarded, sharing the same interface
> queue, if that queue is the bottleneck.
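
For concreteness, here is a minimal sketch of the per-flow cap Eric
describes above: a sender stops handing data to the qdisc once it already
has roughly two TSO packets' worth of bytes queued below the TCP layer.
The names (flow_can_xmit, bytes_in_qdisc, PER_FLOW_LIMIT) and the 64KB
worst-case TSO size are illustrative assumptions, not code or constants
from the patch.

/*
 * Illustrative sketch only; hypothetical names, not the actual TSQ code.
 * Idea: cap each flow's bytes sitting in qdisc/driver queues at ~2 TSO packets.
 */
#include <stdbool.h>
#include <stdio.h>

#define TSO_PACKET_BYTES (64u * 1024u)            /* assumed worst-case TSO burst */
#define PER_FLOW_LIMIT   (2u * TSO_PACKET_BYTES)  /* "two TSO packets in qdisc" */

struct flow {
	unsigned int bytes_in_qdisc;  /* bytes this flow currently holds below TCP */
};

/* May this flow push another 'len'-byte packet into the qdisc right now? */
static bool flow_can_xmit(const struct flow *f, unsigned int len)
{
	return f->bytes_in_qdisc + len <= PER_FLOW_LIMIT;
}

int main(void)
{
	struct flow f = { .bytes_in_qdisc = 0 };
	unsigned int pkt = TSO_PACKET_BYTES;

	while (flow_can_xmit(&f, pkt)) {
		f.bytes_in_qdisc += pkt;  /* "send": packet now sits in the qdisc */
		printf("queued %u bytes, %u in flight below TCP\n", pkt, f.bytes_in_qdisc);
	}
	printf("throttled at %u bytes until tx completions free space\n", f.bytes_in_qdisc);
	return 0;
}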
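To put rough numbers on why a two-TSO-packet cap bounds a local flow's
queueing delay while a deep FIFO shared with forwarded traffic does not,
the figures below (1 Gbit/s link, 64KB TSO packets, a 1000-packet FIFO of
full-size Ethernet frames) are my own assumptions, not values from the
thread.

/* Back-of-the-envelope drain times; all figures are illustrative assumptions. */
#include <stdio.h>

int main(void)
{
	const double link_bps     = 1e9;             /* assumed 1 Gbit/s interface */
	const double tso_bytes    = 64.0 * 1024;     /* one worst-case TSO packet */
	const double tsq_bytes    = 2 * tso_bytes;   /* ~2 TSO packets per flow in qdisc */
	const double deep_q_bytes = 1000 * 1514.0;   /* 1000-packet FIFO of full-size frames */

	printf("TSQ-capped flow:   %.2f ms of queue\n", tsq_bytes    * 8 / link_bps * 1e3);
	printf("full 1000p FIFO:   %.2f ms of queue\n", deep_q_bytes * 8 / link_bps * 1e3);
	return 0;
}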

Kathy simulated CoDel across a pretty wide range of RTTs seen at the
edge of the network, and things behaved pretty well.  She did say she
needed to think more and simulate the data-center cases; I haven't had a
chance to chat with her about that yet.  Of course, you can do some
experiments pretty easily yourself, and we'd love to see whatever
results you get.
                    - Jim


>
> Architecturally, the inconsistency between a local queue and a queue
> one hop away bothers me a bit, but it's something I can learn to live
> with if it really does improve a common case significantly. ;-)
>
> Thanks,
>   -John


