Message-ID: <CA+mtBx8tHJ1QkJWMSUVfFp_a4ymjsf7fA=wL+VQTJMKXmj0uuQ@mail.gmail.com>
Date:	Thu, 12 Jul 2012 07:55:33 -0700
From:	Tom Herbert <therbert@...gle.com>
To:	Eric Dumazet <eric.dumazet@...il.com>
Cc:	David Miller <davem@...emloft.net>, rick.jones2@...com,
	ycheng@...gle.com, dave.taht@...il.com, netdev@...r.kernel.org,
	codel@...ts.bufferbloat.net, mattmathis@...gle.com,
	nanditad@...gle.com, ncardwell@...gle.com, andrewmcgr@...il.com
Subject: Re: [RFC PATCH v2] tcp: TCP Small Queues

On Thu, Jul 12, 2012 at 12:51 AM, Eric Dumazet <eric.dumazet@...il.com> wrote:
> On Thu, 2012-07-12 at 00:37 -0700, David Miller wrote:
>> From: Eric Dumazet <eric.dumazet@...il.com>
>> Date: Thu, 12 Jul 2012 09:34:19 +0200
>>
>> > On Thu, 2012-07-12 at 01:49 +0200, Eric Dumazet wrote:
>> >
>> >> The 10Gb receiver is a net-next kernel, but the 1Gb receiver is a 2.6.38
>> >> ubuntu kernel. They probably have very different TCP behavior.
>> >
>> >
>> > I tested TSQ on bnx2x and 10Gb links.
>> >
>> > I get full rate even using 65536 bytes for
>> > the /proc/sys/net/ipv4/tcp_limit_output_bytes tunable
>>
>> Great work Eric.
>
> Thanks !
>
This is indeed great work!  A couple of comments...

Do you know if there are any qdiscs that function less efficiently
when we are restricting the number of packets?  For instance, will HTB
work as expected in various configurations?

One extension to this work would be to make the limit dynamic and
mostly eliminate the tunable.  I'm thinking we might be able to tie
the limit to the BQL limit of the egress queue for the flow, if there
is one.

Assuming all qdiscs are work conserving, the minimal amount of
outstanding host data for a queue could be tied to the BQL limit of
the egress NIC queue.  We want to minimize the outstanding data while
still satisfying:

sum(data_of_tcp_flows_sharing_same_queue) > bql_limit_for_queue

So this could imply a per flow limit of:

tcp_limit = max(bql_limit - bql_inflight, one_packet)

For a single active connection on a queue, the tcp_limit is equal to
the BQL limit.  Once the BQL limit is hit in the NIC, we only need one
packet outstanding per flow to maintain flow control.  For fairness,
we might need "one_packet" to actually be max GSO data.  Also, this
disregards the latency of scheduling and running the tasklet, which
might need to be taken into account as well.

Tom
--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
