Message-ID: <4FFDC48D.2030606@hp.com>
Date: Wed, 11 Jul 2012 11:23:09 -0700
From: Rick Jones <rick.jones2@...com>
To: Eric Dumazet <eric.dumazet@...il.com>
CC: David Miller <davem@...emloft.net>, ycheng@...gle.com,
dave.taht@...il.com, netdev@...r.kernel.org,
codel@...ts.bufferbloat.net, therbert@...gle.com,
mattmathis@...gle.com, nanditad@...gle.com, ncardwell@...gle.com,
andrewmcgr@...il.com
Subject: Re: [RFC PATCH v2] tcp: TCP Small Queues
On 07/11/2012 08:11 AM, Eric Dumazet wrote:
>
>
> Tests using a single TCP flow.
>
> Tests on 10Gbit links :
>
>
> echo 16384 >/proc/sys/net/ipv4/tcp_limit_output_bytes
> OMNI Send TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.99.2 (192.168.99.2) port 0 AF_INET
> tcpi_rto 201000 tcpi_ato 0 tcpi_pmtu 1500 tcpi_rcv_ssthresh 14600
> tcpi_rtt 1875 tcpi_rttvar 750 tcpi_snd_ssthresh 16 tpci_snd_cwnd 79
> tcpi_reordering 53 tcpi_total_retrans 0
I take it you hacked your local copy of netperf to emit those? Or did I
leave some cruft behind in something I committed to the repository?
What was the ultimate limiter on throughput? I notice it didn't achieve
link-rate on either 10 GbE or 1 GbE.
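
Those tcpi_* fields map one-for-one onto struct tcp_info, so whatever emitted
them presumably just calls getsockopt(TCP_INFO) on the data socket. A minimal
sketch of that, assuming Linux and an already-connected socket fd, and not
necessarily what the netperf binary above is actually doing:

/* Sketch: print a few struct tcp_info fields for a connected TCP socket fd. */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>	/* struct tcp_info, TCP_INFO */

static void dump_tcp_info(int fd)
{
	struct tcp_info ti;
	socklen_t len = sizeof(ti);

	memset(&ti, 0, sizeof(ti));
	if (getsockopt(fd, IPPROTO_TCP, TCP_INFO, &ti, &len) < 0) {
		perror("getsockopt(TCP_INFO)");
		return;
	}

	printf("tcpi_rto %u tcpi_ato %u tcpi_pmtu %u tcpi_rcv_ssthresh %u\n",
	       ti.tcpi_rto, ti.tcpi_ato, ti.tcpi_pmtu, ti.tcpi_rcv_ssthresh);
	printf("tcpi_rtt %u tcpi_rttvar %u tcpi_snd_ssthresh %u tcpi_snd_cwnd %u\n",
	       ti.tcpi_rtt, ti.tcpi_rttvar, ti.tcpi_snd_ssthresh, ti.tcpi_snd_cwnd);
	printf("tcpi_reordering %u tcpi_total_retrans %u\n",
	       ti.tcpi_reordering, ti.tcpi_total_retrans);
}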
> That's the plan: limiting the number of bytes in the Qdisc, not the number
> of bytes in the socket write queue.
So the SO_SNDBUF can still grow rather larger than necessary? It is
just that TCP will be nice to the other flows by not dumping all of it
into the qdisc at once. Latency seen by the application itself is then
unchanged, since there will still be (potentially) as much data queued
in the SO_SNDBUF as before, right?
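
A quick way to eyeball that from user space would be to watch the socket's
own send queue while the transfer runs, e.g. with SO_SNDBUF plus Linux's
SIOCOUTQ ioctl (which reports the bytes still held in the send queue,
sent-but-unacked as well as unsent). A rough sketch, helper name just for
illustration:

/* Sketch: show how much data the socket itself is still holding; TSQ
 * caps what is handed to the qdisc, not what accumulates here. */
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <linux/sockios.h>	/* SIOCOUTQ */

static void dump_send_queue(int fd)
{
	int sndbuf = 0, outq = 0;
	socklen_t len = sizeof(sndbuf);

	if (getsockopt(fd, SOL_SOCKET, SO_SNDBUF, &sndbuf, &len) < 0)
		perror("getsockopt(SO_SNDBUF)");
	if (ioctl(fd, SIOCOUTQ, &outq) < 0)
		perror("ioctl(SIOCOUTQ)");

	/* sndbuf is the kernel's current send buffer limit (auto-tuned
	 * unless the application set SO_SNDBUF itself). */
	printf("SO_SNDBUF %d, %d bytes still queued in the socket\n",
	       sndbuf, outq);
}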
rick