Message-ID: <4F95D020.9080500@hp.com>
Date:	Mon, 23 Apr 2012 14:56:48 -0700
From:	Rick Jones <rick.jones2@...com>
To:	Eric Dumazet <eric.dumazet@...il.com>
CC:	David Miller <davem@...emloft.net>, netdev@...r.kernel.org,
	therbert@...gle.com, ncardwell@...gle.com, maze@...gle.com,
	ycheng@...gle.com, ilpo.jarvinen@...sinki.fi
Subject: Re: [PATCH 2/2 net-next] tcp: sk_add_backlog() is too agressive for
 TCP

On 04/23/2012 02:51 PM, Rick Jones wrote:
> On 04/23/2012 02:30 PM, Eric Dumazet wrote:
>> On Mon, 2012-04-23 at 13:57 -0700, Rick Jones wrote:
>>> Probably better to call that something other than 16K buffers - the send
>>> size was probably 16K, which reflected SO_SNDBUF at the time the data
>>> socket was created, but clearly SO_SNDBUF grew in that timeframe.
>>>
>>
>>
>> Maybe I was not clear: the application does sendmsg() of 16KB buffers.
>
> I'd probably call that a 16K send test. The root of the issue is that
> there are "send buffers" and "send socket buffers" (and their receive
> versions).
>
> My "canonical" test - at least the one that appears in most of my
> contemporary scripts - uses a 64K send size for the bulk transfer
> tests. I switch back-and-forth between tests which allow the socket
> buffer size to be determined automagically, and those where I set both
> sides' socket buffers to 1M via the test-specific -s and -S options.
> In "netperf speak" those would probably be "x64K" and "1Mx64K"
> respectively. More generally "<socket buffer size>x<send size>" (I
> rarely set/specify the receive size in those tests, leaving it at
> whatever SO_RCVBUF is at the start).
>
>> Yet, in the small time it takes to perform this operation, softirq can
>> queue up to 300 packets coming from the other side.
>
> There is more to it than just queuing up 16 KB, right?
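
To make the "send size" vs. "send socket buffer" distinction above
concrete, here is a rough sockets-API sketch - the fd and the 16K size
are purely illustrative, not taken from Eric's test:

#include <stdio.h>
#include <string.h>
#include <sys/socket.h>

/* Expects an already-connected TCP socket.  The 16KB array is the
 * application's "send buffer"; SO_SNDBUF is the "send socket buffer". */
int show_send_vs_sndbuf(int fd)
{
	char msg[16 * 1024];
	int sndbuf = 0;
	socklen_t optlen = sizeof(sndbuf);

	memset(msg, 'x', sizeof(msg));

	/* The send size is simply the length passed to each send() call. */
	if (send(fd, msg, sizeof(msg), 0) < 0)
		return -1;

	/* SO_SNDBUF is a separate, per-socket limit which Linux autotunes,
	 * so it can be much larger than it was when the socket was made. */
	if (getsockopt(fd, SOL_SOCKET, SO_SNDBUF, &sndbuf, &optlen) < 0)
		return -1;

	printf("send size: %zu bytes, SO_SNDBUF now: %d bytes\n",
	       sizeof(msg), sndbuf);
	return 0;
}

Explicitly setting SO_SNDBUF/SO_RCVBUF with setsockopt() - which is
roughly what the test-specific -s and -S options do - pins the sizes
and turns off the autotuning; leaving them alone gets the "automagic"
sizing.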

I should have added that 300 ACKs seems huge as a backlog.  At
ack-every-other, that is 300 * 1448 * 2, or 868800 bytes' worth of
ACKed data.  That sounds like a great deal more than just one 16KB
send's worth of being held off.  At 10GbE speeds (using your 54 usec
for 64KB), that represents data which took something like three
quarters of a millisecond to transmit on the wire.
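
For completeness, the back-of-the-envelope arithmetic as a throwaway C
snippet (the 54 usec per 64KB figure is just the one quoted above,
nothing new measured here):

#include <stdio.h>

int main(void)
{
	const double acks = 300;	/* backlogged ACKs              */
	const double mss = 1448;	/* bytes per segment            */
	const double segs_per_ack = 2;	/* ack-every-other              */
	const double usec_per_64k = 54;	/* from earlier in the thread   */

	double bytes = acks * segs_per_ack * mss;
	double usec = bytes / (64 * 1024) * usec_per_64k;

	printf("%.0f bytes acked, ~%.0f usec (~%.2f ms) on the wire\n",
	       bytes, usec, usec / 1000);	/* 868800, ~716, ~0.72 */
	return 0;
}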

rick
