Message-ID: <552F9F60.7090406@eu.citrix.com>
Date: Thu, 16 Apr 2015 12:39:12 +0100
From: George Dunlap <george.dunlap@...citrix.com>
To: Eric Dumazet <eric.dumazet@...il.com>,
Stefano Stabellini <stefano.stabellini@...citrix.com>
CC: Jonathan Davies <Jonathan.Davies@...rix.com>,
"xen-devel@...ts.xensource.com" <xen-devel@...ts.xensource.com>,
Wei Liu <wei.liu2@...rix.com>,
Ian Campbell <Ian.Campbell@...rix.com>,
netdev <netdev@...r.kernel.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Eric Dumazet <edumazet@...gle.com>,
"Paul Durrant" <paul.durrant@...rix.com>,
Christoffer Dall <christoffer.dall@...aro.org>,
Felipe Franciosi <felipe.franciosi@...rix.com>,
<linux-arm-kernel@...ts.infradead.org>,
"David Vrabel" <david.vrabel@...rix.com>
Subject: Re: [Xen-devel] "tcp: refine TSO autosizing" causes performance regression on Xen

On 04/15/2015 07:17 PM, Eric Dumazet wrote:
> Do not expect me to fight bufferbloat alone. Be part of the challenge,
> instead of trying to get back to proven bad solutions.

I tried that.  I wrote a description of what I thought the situation
was, so that you could correct me if my understanding was wrong, and
then what I thought we could do about it. You apparently didn't even
read it, but just pointed me to a single cryptic comment that doesn't
give me enough information to actually figure out what the situation is.

We all agree that bufferbloat is a problem for everybody, and I can
definitely understand the desire to actually make the situation better
rather than dying the death of a thousand exceptions.

If you want help fighting bufferbloat, you have to educate people to
help you; or alternatively, if you don't want to bother educating people,
you have to fight it alone -- or lose the battle due to having a
thousand exceptions.

So, back to TSQ limits.  What's so magical about 2 packets being *in the
device itself*?  And what do 1ms, or 2*64k bytes (the default for
tcp_limit_output_bytes), have to do with it?
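
For concreteness, here is the limit computation as I read it from the
"tcp: refine TSO autosizing" commit -- my own standalone paraphrase in
plain C, not the kernel source; the function name tsq_limit() and the
131072-byte default are my reading of the tree, so correct me if I have
it wrong:

    #include <stdint.h>
    #include <stdio.h>

    #define SYSCTL_TCP_LIMIT_OUTPUT_BYTES 131072u   /* default: 2*64k */

    /* tsq_limit() is my name, for illustration only. */
    static uint32_t tsq_limit(uint32_t skb_truesize, uint32_t pacing_rate)
    {
        uint32_t limit = 2 * skb_truesize;      /* two packets' worth... */
        uint32_t one_ms = pacing_rate >> 10;    /* ...or ~1ms at the pacing
                                                   rate (>> 10 ~= / 1000) */
        if (one_ms > limit)
            limit = one_ms;
        if (limit > SYSCTL_TCP_LIMIT_OUTPUT_BYTES) /* sysctl cap */
            limit = SYSCTL_TCP_LIMIT_OUTPUT_BYTES;
        return limit;
    }

    int main(void)
    {
        /* e.g. 64KB TSO frames paced at 1 gigabyte/s: the ~1ms term
         * (~1MB) would win, so the sysctl cap is what actually bites. */
        printf("limit = %u bytes\n", tsq_limit(65536, 1000000000));
        return 0;
    }

So the shape of the limit is "2 packets or ~1ms at the pacing rate,
whichever is larger, but never more than tcp_limit_output_bytes" --
and my question is where those particular constants come from.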

Your comment (quoted in full below) lists three benefits:
1. better RTT estimation
2. faster recovery
3. high rates
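
For reference, the comment itself, as I remember it from
tcp_write_xmit() in net/ipv4/tcp_output.c (quoted from memory, so the
exact wording may differ):

    /* TCP Small Queues :
     * Control number of packets in qdisc/devices to two packets / or ~1 ms.
     * This allows for :
     *  - better RTT estimation and ACK scheduling
     *  - faster recovery
     *  - high rates
     * Alas, some drivers / subsystems require a fair amount
     * of queued bytes to ensure line rate.
     * One example is wifi aggregation (802.11 AMPDU)
     */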

#3 is just marketing fluff; it's also contradicted by the statement that
immediately follows it -- i.e., there are drivers for which the
limitation does *not* give high rates.

#1, as far as I can tell, has to do with measuring the *actual* minimal
round trip time of an empty pipe, rather than the round trip time you
get when there's 512MB of packets in the device buffer. If a device has
a large internal buffer, then having a large number of packets
outstanding means that the measured RTT is skewed.

The goal here, I take it, is to have this "pipe" *exactly* full; having
it significantly more than "full" is what leads to bufferbloat.
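
A back-of-envelope illustration of the difference (all numbers here are
my own illustrative assumptions, not anyone's measurements):

    #include <stdio.h>

    int main(void)
    {
        double rate     = 10e9 / 8;             /* 10Gb/s link, bytes/s */
        double buffered = 512.0 * 1024 * 1024;  /* 512MB in the device */
        double base_rtt = 100e-6;               /* 100us real path RTT */

        /* The queueing delay swamps the real RTT by more than three
         * orders of magnitude... */
        printf("measured RTT ~= %.0f ms\n",
               base_rtt * 1e3 + buffered / rate * 1e3);

        /* ...while "exactly full" needs only bandwidth * base RTT. */
        printf("BDP ~= %.0f KB\n", rate * base_rtt / 1024);
        return 0;
    }

That prints a measured RTT of roughly 430ms against a bandwidth-delay
product of only ~122KB, which is presumably why RTT samples taken
behind a full device buffer are useless for estimating the empty pipe.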

#2 sounds like you're saying that if there are too many packets
outstanding when you discover that you need to adjust things, it takes
a long time for your changes to have an effect; i.e., if you have 5ms
of data in the pipe, it will take at least 5ms for your reduced
transmission rate to actually have an effect.

Is that accurate, or have I misunderstood something?

 -George