Date:	Fri, 11 Jul 2008 11:02:08 -0400
From:	Jim Rees <>
Subject: Autotuning and send buffer size

Bill Fink and others have mentioned that TCP buffer size autotuning can
cause a 5% or so performance penalty.  I looked into this a bit, and it
appears that setting the sender's socket buffer too big hurts performance.

Consider this, on a 1Gbps link with ~0.1 msec delay (a ~12KB
bandwidth-delay product):
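For reference, the quoted bandwidth-delay product works out like this (a quick back-of-the-envelope sketch using only the link numbers quoted above, not any new measurement):

```python
# Bandwidth-delay product for the link described above:
# 1 Gbps link, ~0.1 ms delay.
link_bps = 1e9        # link rate in bits per second
rtt_s = 0.1e-3        # ~0.1 msec delay
bdp_bytes = link_bps * rtt_s / 8   # bits -> bytes
print(f"BDP ~= {bdp_bytes / 1024:.1f} KB")  # ~12.2 KB, i.e. the ~12KB quoted
```

So a 128KB fixed buffer is already an order of magnitude above the BDP, and 8MB is far beyond it.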

Fixed 128KB sender socket buffer:
nuttcp -i1 -w128k pdsi5
 1115.4375 MB /  10.00 sec =  935.2707 Mbps 4 %TX 11 %RX

Fixed 8MB sender socket buffer:
nuttcp -i1 -w8m pdsi5
 1063.0625 MB /  10.10 sec =  882.7833 Mbps 4 %TX 15 %RX

Autotuned sender socket buffer:
nuttcp -i1 pdsi5
 1056.9375 MB /  10.04 sec =  883.1083 Mbps 4 %TX 15 %RX

I don't understand how a "too big" sender buffer can hurt performance.  I
have not measured what size the sender's buffer is in the autotuning case.
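One way to measure what the autotuned case actually ends up with would be to read SO_SNDBUF on the socket (a sketch, not something measured in this message; a freshly created socket reports the initial tcp_wmem default, and the value would grow during a real transfer):

```python
import socket

# Sketch: ask the kernel for a TCP socket's current send-buffer size.
# On a fresh socket this is the tcp_wmem default; during an active
# transfer with autotuning it would reflect the grown buffer.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sndbuf = s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)
print(f"SO_SNDBUF = {sndbuf} bytes")
s.close()
```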

Yes, I know "nuttcp -w" also sets the receiver's socket buffer size.  I
tried various upper limits on the receiver's buffer size via
net.ipv4.tcp_rmem, but that doesn't seem to matter as long as it's big
enough:

nuttcp -i1 pdsi5
sender wmem_max=131071, receiver rmem_max=15728640
 1116.9375 MB /  10.01 sec =  936.4816 Mbps 3 %TX 16 %RX
sender wmem_max=15728640, receiver rmem_max=15728640
 1062.8750 MB /  10.10 sec =  882.6013 Mbps 4 %TX 15 %RX
sender wmem_max=15728640, receiver rmem_max=131071
 1060.2500 MB /  10.07 sec =  883.2847 Mbps 4 %TX 15 %RX
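For anyone reproducing this, the wmem_max/rmem_max limits above correspond to sysctls along these lines (a sketch using the values quoted in this message, not the exact commands run here):

```shell
# net.core.wmem_max / rmem_max cap what applications may request
# via setsockopt(SO_SNDBUF / SO_RCVBUF), e.g. nuttcp -w.
sysctl -w net.core.wmem_max=15728640
sysctl -w net.core.rmem_max=15728640
# Autotuning's ceiling is the third field of tcp_wmem / tcp_rmem,
# independent of the net.core limits above.
sysctl -w net.ipv4.tcp_rmem="4096 87380 15728640"
```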