Message-id: <2860327B-6E7F-435F-AB4A-36E66A49F7DE@nuim.ie>
Date:	Fri, 28 Nov 2008 11:09:54 +0000
From:	Douglas Leith <Doug.Leith@...m.ie>
To:	Netdev <netdev@...r.kernel.org>
Cc:	David Miller <davem@...emloft.net>,
	Stephen Hemminger <shemminger@...tta.com>
Subject: Re: [RFC] tcp: make H-TCP the default congestion control
A bit of delayed input to this thread on netdev ...
> I'm not so sure about this logic, regardless of the algorithms
> involved.
>
> H-TCP was never the default in any distribution or release that
> I know of.  So it's real world exposure is effectively zero,
> which is the same as the new CUBIC stuff.
> They are effectively, therefore, equivalent choices.
Not really.  At this stage H-TCP has undergone quite extensive  
independent testing by a good few groups (Caltech, Swinburne, North  
Carolina, etc.).  It's also been independently implemented in FreeBSD  
by the Swinburne folks.  It's true it hasn't been the default in  
Linux, but H-TCP has been subject to *far* more testing than the new  
CUBIC algorithm, which has had no independent testing at all to my  
knowledge.
I'd also like to add some new input to the discussion on the choice  
of congestion control algorithm in Linux - and why it might be useful  
to evaluate alternatives like H-TCP.  Almost all of the proposals for  
changes to TCP (including CUBIC) converge really slowly to fairness  
when new flows start up.  The question is then whether this matters,  
e.g. whether it negatively impacts users.
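For readers who haven't looked at the algorithm, a minimal sketch of  
why H-TCP converges quickly: its per-RTT increase alpha grows with the  
time elapsed since the last congestion event and resets to 1 after  
each backoff, so a newly started flow and an incumbent flow soon probe  
at comparable rates.  The floating-point sketch below follows the  
constants in the H-TCP paper; the in-kernel net/ipv4/tcp_htcp.c uses  
fixed-point arithmetic, so this is illustrative only:

#include <stdio.h>

#define DELTA_L 1.0  /* seconds; below this H-TCP behaves like Reno */

/* alpha(delta): per-RTT cwnd increase as a function of the time
 * delta (seconds) elapsed since the last congestion event. */
static double htcp_alpha(double delta)
{
	if (delta <= DELTA_L)
		return 1.0;                    /* standard AIMD region */
	double d = delta - DELTA_L;
	return 1.0 + 10.0 * d + (d / 2.0) * (d / 2.0);
}

int main(void)
{
	/* alpha resets to 1 on each backoff, which is what pulls a
	 * new flow and an incumbent flow towards fairness quickly. */
	for (double t = 0.0; t <= 5.0; t += 1.0)
		printf("delta=%.0fs  alpha=%.2f\n", t, htcp_alpha(t));
	return 0;
}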
To try to get a handle on this, we took one set of measurements from  
a home DSL line over a live link (so hopefully representative of  
common user experience), and the other from the production link out  
of the Hamilton Institute (so maybe more like the experience of  
enterprise users).  Plots of our measurements are at
http://www.hamilton.ie/doug/tina2.eps  (DSL link)
http://www.hamilton.ie/doug/caltech.eps  (Hamilton link)
and also attached.
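(For anyone wanting to repeat this sort of test: we ran each flow  
with the algorithm under test selected explicitly.  One way to do  
that, assuming a 2.6.13+ kernel, is the per-socket TCP_CONGESTION  
option sketched below; net.ipv4.tcp_congestion_control sets the  
system-wide default instead.  Error handling is trimmed.)

#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>   /* TCP_CONGESTION; linux/tcp.h on older libcs */

int main(void)
{
	int fd = socket(AF_INET, SOCK_STREAM, 0);
	const char *cc = "htcp";   /* or "cubic" for the comparison runs */

	if (setsockopt(fd, IPPROTO_TCP, TCP_CONGESTION, cc, strlen(cc)) < 0)
		perror("setsockopt TCP_CONGESTION");

	/* ... connect() and transfer data as usual ... */
	return 0;
}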
We started one long-ish flow (mimicking incumbent flows) and then  
started a second, shorter flow.  The plots show the completion time  
of the second flow vs. its connection size.  If the incumbent flow is  
slow to release bandwidth (as we expect with CUBIC), we expect the  
completion time of the second flow to increase, and indeed this is  
what we see.
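For concreteness, here's a rough sketch of how the second flow's  
completion time can be measured - connection setup elided, and this  
is illustrative rather than our actual harness.  Note that write()  
returns once data is buffered, so the sketch signals end-of-data and  
waits for the peer to drain and close before stopping the clock:

#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/time.h>

/* Send `size` bytes over an already-connected TCP socket and return
 * the wall-clock completion time in seconds (or -1 on error). */
static double time_transfer(int fd, size_t size)
{
	char buf[8192];
	struct timeval t0, t1;
	size_t sent = 0;

	memset(buf, 'x', sizeof(buf));
	gettimeofday(&t0, NULL);

	while (sent < size) {
		size_t chunk = size - sent < sizeof(buf) ? size - sent
							 : sizeof(buf);
		ssize_t n = write(fd, buf, chunk);
		if (n <= 0)
			return -1.0;         /* error handling elided */
		sent += (size_t)n;
	}

	shutdown(fd, SHUT_WR);               /* signal end-of-data */
	while (read(fd, buf, sizeof(buf)) > 0)
		;                            /* wait for peer to close */

	gettimeofday(&t1, NULL);
	return (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
}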
What's particularly interesting is (i) the magnitude of the  
difference - completion times are consistently 2x with CUBIC vs  
H-TCP over many tests - and (ii) that this effect is apparent not  
only on higher speed links (caltech.eps) but also on regular DSL  
links (tina2.eps - we took measurements from a home DSL line, so  
it's not a sanitised lab setup or anything like that).
As might be expected, the difference in completion times eventually  
washes out for long transfers, e.g. for the DSL link the most  
pronounced difference is for 1MB connections (where there is about a  
2x difference in times between CUBIC and H-TCP) but it becomes  
smaller for longer flows.  The point, however, is that most real  
flows are short, so performance with a 1MB flow seems like it should  
matter more than performance at 10MB.  For me the DSL result is the  
more important one here since it affects so many people, and it was  
quite surprising, although I can also reproduce similar results on  
our testbed so it's not a weird corner case or anything like that.
Wouldn't it be interesting to give H-TCP a go in Linux to get wider  
feedback?
Doug