Date:	Thu, 12 Jul 2012 01:38:35 +0200
From:	Eric Dumazet <eric.dumazet@...il.com>
To:	Rick Jones <rick.jones2@...com>
Cc:	David Miller <davem@...emloft.net>, ycheng@...gle.com,
	dave.taht@...il.com, netdev@...r.kernel.org,
	codel@...ts.bufferbloat.net, therbert@...gle.com,
	mattmathis@...gle.com, nanditad@...gle.com, ncardwell@...gle.com,
	andrewmcgr@...il.com
Subject: Re: [RFC PATCH v2] tcp: TCP Small Queues

On Wed, 2012-07-11 at 11:23 -0700, Rick Jones wrote:
> On 07/11/2012 08:11 AM, Eric Dumazet wrote:
> >
> >
> > Tests using a single TCP flow.
> >
> > Tests on 10Gbit links :
> >
> >
> > echo 16384 >/proc/sys/net/ipv4/tcp_limit_output_bytes
> > OMNI Send TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.99.2 (192.168.99.2) port 0 AF_INET
> > tcpi_rto 201000 tcpi_ato 0 tcpi_pmtu 1500 tcpi_rcv_ssthresh 14600
> > tcpi_rtt 1875 tcpi_rttvar 750 tcpi_snd_ssthresh 16 tcpi_snd_cwnd 79
> > tcpi_reordering 53 tcpi_total_retrans 0
> 
> I take it you hacked your local copy of netperf to emit those?  Or did I 
> leave some cruft behind in something I committed to the repository?
> 
Yep, it's netperf-2.5.0 with a one-line change to output these TCP_INFO
bits.
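
If you want to reproduce it, something along these lines dumps the same
fields for a connected TCP socket (a rough sketch only, the helper name
is made up here; the actual change is a single printf in netperf's
reporting path):

#include <stdio.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Sketch (not the actual netperf diff): print a few struct tcp_info
 * fields for a connected TCP socket fd.
 */
static void dump_tcp_info(int fd)
{
	struct tcp_info ti;
	socklen_t len = sizeof(ti);

	if (getsockopt(fd, IPPROTO_TCP, TCP_INFO, &ti, &len) < 0)
		return;

	printf("tcpi_rto %u tcpi_ato %u tcpi_pmtu %u tcpi_rcv_ssthresh %u\n",
	       ti.tcpi_rto, ti.tcpi_ato, ti.tcpi_pmtu, ti.tcpi_rcv_ssthresh);
	printf("tcpi_rtt %u tcpi_rttvar %u tcpi_snd_ssthresh %u tcpi_snd_cwnd %u\n",
	       ti.tcpi_rtt, ti.tcpi_rttvar, ti.tcpi_snd_ssthresh,
	       ti.tcpi_snd_cwnd);
	printf("tcpi_reordering %u tcpi_total_retrans %u\n",
	       ti.tcpi_reordering, ti.tcpi_total_retrans);
}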

> What was the ultimate limiter on throughput?  I notice it didn't achieve 
> link-rate on either 10 GbE or 1 GbE.
> 

My lab has one fast machine (source in this 10Gb test), and one slow
machine (Intel Q6600 quad core), both with ixgbe cards.

In the Gigabit test, the receiver is a laptop.


>  > That's the plan: limiting the number of bytes in the Qdisc, not the
>  > number of bytes in the socket write queue.
> 
> So the SO_SNDBUF can still grow rather larger than necessary?  It is 
> just that TCP will be nice to the other flows by not dumping all of it 
> into the qdisc at once.  Latency seen by the application itself is then 
> unchanged since there will still be (potentially) as much stuff queued 
> in the SO_SNDBUF as before, right?

Of course SO_SNDBUF can grow if autotuning is enabled.

I think there is a bit of misunderstanding about this patch and what it
does.

It only limits how many bytes from the socket write queue are cloned
into the qdisc/device queue at any moment, instead of pushing "as much
as allowed".
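
In other words, the change boils down to an early exit in
tcp_write_xmit(), roughly like this (simplified sketch, not the exact
hunk):

	/* Simplified: stop feeding the qdisc once the bytes already
	 * cloned out of the write queue reach the limit.  The skb
	 * destructor (tcp_wfree) restarts transmission once the lower
	 * layers have freed enough of them.
	 */
	if (atomic_read(&sk->sk_wmem_alloc) >= sysctl_tcp_limit_output_bytes) {
		set_bit(TSQ_THROTTLED, &tp->tsq_flags);
		break;
	}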



