Date:	Mon, 23 Apr 2012 13:57:17 -0700
From:	Rick Jones <rick.jones2@...com>
To:	Eric Dumazet <eric.dumazet@...il.com>
CC:	David Miller <davem@...emloft.net>, netdev@...r.kernel.org,
	therbert@...gle.com, ncardwell@...gle.com, maze@...gle.com,
	ycheng@...gle.com, ilpo.jarvinen@...sinki.fi
Subject: Re: [PATCH 2/2 net-next] tcp: sk_add_backlog() is too agressive for
 TCP

On 04/23/2012 01:37 PM, Eric Dumazet wrote:
> In my 10Gbit tests (standard netperf using 16K buffers), I've seen
> backlogs of 300 ACK packets...

Probably better to call that something other than 16K buffers - the send 
size was probably 16K, which reflected SO_SNDBUF at the time the data 
socket was created, but clearly SO_SNDBUF grew in that timeframe.
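For what it's worth, the growth is easy to see outside of netperf by 
asking for SO_SNDBUF before and after pushing data - a rough, untested 
sketch (the port and iteration count are just placeholders):

/* Print SO_SNDBUF right after socket creation and again after a pile
 * of 16K sends, to watch autotuning grow the send buffer. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

static void show_sndbuf(int fd, const char *when)
{
        int val = 0;
        socklen_t len = sizeof(val);

        if (getsockopt(fd, SOL_SOCKET, SO_SNDBUF, &val, &len) == 0)
                printf("SO_SNDBUF %s: %d\n", when, val);
}

int main(void)
{
        struct sockaddr_in sin;
        static char buf[16384];         /* the "16K" send size */
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        int i;

        if (fd < 0)
                return 1;

        memset(&sin, 0, sizeof(sin));
        sin.sin_family = AF_INET;
        sin.sin_port = htons(5001);     /* placeholder port */
        inet_pton(AF_INET, "192.168.1.3", &sin.sin_addr);

        show_sndbuf(fd, "at creation");

        if (connect(fd, (struct sockaddr *)&sin, sizeof(sin)) < 0)
                return 1;

        for (i = 0; i < 100000; i++)    /* placeholder amount of data */
                if (send(fd, buf, sizeof(buf), 0) < 0)
                        break;

        show_sndbuf(fd, "after sending");
        close(fd);
        return 0;
}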

And those values are "standard" for netperf only in the context of 
(default) Linux - on other platforms the defaults in the stack, and 
hence in netperf, are probably different.

The classic/migrated classic tests report only the initial socket buffer 
sizes, not what they become by the end of the test:

raj@...dy:~/netperf2_trunk/src$ ./netperf -H 192.168.1.3
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 
192.168.1.3 () port 0 AF_INET
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

  87380  16384  16384    10.00     941.06

To see what they are at the end of the test requires more direct use of 
the omni path, either by way of the test type:

raj@...dy:~/netperf2_trunk/src$ ./netperf -H 192.168.1.3 -t omni
OMNI Send TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.1.3 () 
port 0 AF_INET
Local       Remote      Local  Elapsed Throughput Throughput
Send Socket Recv Socket Send   Time               Units
Size        Size        Size   (sec)
Final       Final
266640      87380       16384  10.00   940.92     10^6bits/s

or omni output selection:

raj@...dy:~/netperf2_trunk/src$ ./netperf -H 192.168.1.3 -- -k 
lss_size_req,lss_size,lss_size_end,rsr_size_req,rsr_size,rsr_size_end
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 
192.168.1.3 () port 0 AF_INET
LSS_SIZE_REQ=-1
LSS_SIZE=16384
LSS_SIZE_END=266640
RSR_SIZE_REQ=-1
RSR_SIZE=87380
RSR_SIZE_END=87380

BTW, does it make sense that the SO_SNDBUF size on the netperf side 
(lss_size_end, 2.6.38-14-generic kernel) grew larger than the SO_RCVBUF 
on the netserver side (3.2.0-rc4+)?
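If it helps answer that, the autotuning ceilings on the two kernels 
could simply differ - something along these lines (untested sketch, 
just reading the usual sysctls) run on each end would show the limits 
the buffers were allowed to grow to:

/* Dump the TCP autotuning limits (min/default/max) and the core
 * socket buffer caps on this host. */
#include <stdio.h>

int main(void)
{
        static const char *files[] = {
                "/proc/sys/net/ipv4/tcp_wmem",
                "/proc/sys/net/ipv4/tcp_rmem",
                "/proc/sys/net/core/wmem_max",
                "/proc/sys/net/core/rmem_max",
        };
        char line[256];
        unsigned int i;

        for (i = 0; i < sizeof(files) / sizeof(files[0]); i++) {
                FILE *f = fopen(files[i], "r");

                if (!f)
                        continue;
                if (fgets(line, sizeof(line), f))
                        printf("%s: %s", files[i], line);
                fclose(f);
        }
        return 0;
}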

rick jones

PS - here is data flowing the other way:
raj@...dy:~/netperf2_trunk/src$ ./netperf -H 192.168.1.3 -t TCP_MAERTS 
-- -k lsr_size_req,lsr_size,lsr_size_end,rss_size_req,rss_size,rss_size_end
MIGRATED TCP MAERTS TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 
192.168.1.3 () port 0 AF_INET
LSR_SIZE_REQ=-1
LSR_SIZE=87380
LSR_SIZE_END=4194304
RSS_SIZE_REQ=-1
RSS_SIZE=16384
RSS_SIZE_END=65536
