Message-ID: <4FFDA3C0.9040909@candelatech.com>
Date:	Wed, 11 Jul 2012 09:03:12 -0700
From:	Ben Greear <greearb@...delatech.com>
To:	Eric Dumazet <eric.dumazet@...il.com>
CC:	David Miller <davem@...emloft.net>, ycheng@...gle.com,
	dave.taht@...il.com, netdev@...r.kernel.org,
	codel@...ts.bufferbloat.net, therbert@...gle.com,
	mattmathis@...gle.com, nanditad@...gle.com, ncardwell@...gle.com,
	andrewmcgr@...il.com, Rick Jones <rick.jones2@...com>
Subject: Re: [RFC PATCH v2] tcp: TCP Small Queues

On 07/11/2012 08:54 AM, Eric Dumazet wrote:
> On Wed, 2012-07-11 at 08:43 -0700, Ben Greear wrote:
>> On 07/11/2012 08:25 AM, Eric Dumazet wrote:
>>> On Wed, 2012-07-11 at 08:16 -0700, Ben Greear wrote:
>>>
>>>> I haven't read your patch in detail, but I was wondering if this feature
>>>> would cause trouble for applications that are servicing many sockets at once
>>>> and so might take several ms between handling each individual socket.
>>>>
>>>
>>> Well, this patch has no impact for such applications. In fact their
>>> send()/write() will return to userland faster than before (for very
>>> large send())
>>
>> Maybe I'm just confused.  Is your patch just mucking with
>> the queues below the tcp xmit queues?  From the patch description
>> I was thinking you were somehow directly limiting the TCP xmit
>> queues...
>>
>
> I don't limit tcp xmit queues. I might avoid excessive autotuning.
>
>> If you are just draining the tcp xmit queues on a new/faster
>> trigger, then I see no problem with that, and no need for
>> a per-socket control.
>
> That's the plan: limiting the number of bytes in the Qdisc, not the
> number of bytes in the socket write queue.

Thanks for the explanation.
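If that limit ends up user-tunable, I'd expect to poke at it roughly like
this (a sketch only; I'm guessing the sysctl name from the patch, and it
may well change before merging):

```shell
# Assumed knob name from the TSQ patch; verify against the merged version.
# Caps the bytes a TCP socket may have queued below it (Qdisc/device queues).
sysctl net.ipv4.tcp_limit_output_bytes            # read the current limit
sysctl -w net.ipv4.tcp_limit_output_bytes=131072  # e.g. cap at 128 KB
```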

Out of curiosity, have you tried running multiple TCP streams
with different processes driving each stream, where each is trying
to drive, say, 700Mbps bi-directional traffic over a 1Gbps link?

Perhaps with 50ms of latency generated by a network emulator.
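Something along these lines, I mean (illustrative only: device name, peer
address, stream count, and durations are made up; netem supplies the delay,
netperf drives the streams):

```shell
# On the emulator box: add 50 ms of one-way delay (device name assumed).
tc qdisc add dev eth0 root netem delay 50ms

# On the sender: several bidirectional TCP streams, one process per stream.
for i in $(seq 1 4); do
    netperf -H 192.168.1.2 -t TCP_STREAM -l 60 &   # forward direction
    netperf -H 192.168.1.2 -t TCP_MAERTS -l 60 &   # reverse direction
done
wait
```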

This used to cause some extremely high latency
due to excessive TCP xmit queues (from what I could tell),
but maybe this new patch will cure that.

I'll re-run my tests with your patch eventually, but I'm too bogged
down to do so soon.

Thanks,
Ben

-- 
Ben Greear <greearb@...delatech.com>
Candela Technologies Inc  http://www.candelatech.com


