Message-ID: <20081204185022.6108d19e@gmail.com>
Date: Thu, 4 Dec 2008 18:50:22 +0100
From: Luca De Cicco <ldecicco@...il.com>
To: "Ilpo Järvinen" <ilpo.jarvinen@...sinki.fi>
Cc: Saverio Mascolo <saverio.mascolo@...il.com>,
Netdev <netdev@...r.kernel.org>
Subject: Re: TCP default congestion control in linux should be newreno
Dear Ilpo,
please find my replies inline.
On Thu, 4 Dec 2008 14:41:05 +0200 (EET)
"Ilpo Järvinen" <ilpo.jarvinen@...sinki.fi> wrote:
> On Wed, 3 Dec 2008, Saverio Mascolo wrote:
>
> > we have added plots of cwnd at
> >
> > http://c3lab.poliba.it/index.php/TCP_over_Hsdpa
> >
> > in the case of newreno, westwood+, bic/cubic.
>
> You lack the most important detail, i.e., the used kernel versions!
> And also information if some sysctls were tuned or not. This is
> especially important since you seem to claim that bic is the default
> which it hasn't been for years?!
>
Thank you for pointing this out: we used kernel 2.6.24 with the
Web100 patch in order to log the internal variables. You are right,
cubic is the default; that was simply a cut-and-paste error.
As for the sysctls, they were all left at their default values, with
the only exception of tcp_no_metrics_save, which was turned on so
that metrics (such as ssthresh) are not cached across connections, as
specified in [1]. Leaving the other sysctls at their defaults lets us
assess the performance of the algorithms as a normal user would
experience it.
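
For completeness, here is a minimal sketch of how one can check the
system default and override the congestion control on a single socket
through the TCP_CONGESTION socket option (available since 2.6.13, so
also on the 2.6.24 kernel we used; picking "westwood" is just an
example and requires the tcp_westwood module to be loaded):

#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>

int main(void)
{
	char algo[16] = "";
	socklen_t len = sizeof(algo);
	int fd = socket(AF_INET, SOCK_STREAM, 0);

	if (fd < 0) {
		perror("socket");
		return 1;
	}

	/* Report the system-wide default inherited by a new socket. */
	if (getsockopt(fd, IPPROTO_TCP, TCP_CONGESTION, algo, &len) == 0)
		printf("default congestion control: %s\n", algo);

	/* Switch only this socket to Westwood+; this fails with
	 * ENOENT if the tcp_westwood module is not available. */
	if (setsockopt(fd, IPPROTO_TCP, TCP_CONGESTION,
		       "westwood", strlen("westwood")) < 0)
		perror("setsockopt(TCP_CONGESTION)");

	return 0;
}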
> > basically the goodput is similar with all variants but with
> > significantly larger packet losses and timeouts with bic/cubic.
>
> I've never understood what exactly is wrong with the larger amount of
> packet losses if they happen before (or at the bottleneck), here
> they're just a consequence of having the larger window.
Saverio already replied to this objection. I would like to add a
further consideration: the aggressive probing phase also has the
negative side effect of inflating the RTT through excessive queuing
(see the RTT time evolution in the Cwnd/RTT figures).
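
As a back-of-envelope sketch of that effect (the numbers below are
purely illustrative, not taken from our traces): every packet in
flight beyond the bandwidth-delay product sits in the bottleneck
queue, so the RTT grows linearly with the excess window:

#include <stdio.h>

int main(void)
{
	double rate_pps = 500.0;  /* bottleneck rate [pkts/s], assumed */
	double base_rtt = 0.100;  /* propagation RTT [s], assumed      */
	double cwnd     = 150.0;  /* aggressively probed window [pkts] */

	double bdp    = rate_pps * base_rtt;  /* = 50 pkts             */
	double queued = cwnd - bdp;           /* standing queue        */
	double rtt    = base_rtt + queued / rate_pps;

	printf("queued = %.0f pkts, RTT from %.0f ms to %.0f ms\n",
	       queued, base_rtt * 1e3, rtt * 1e3);
	return 0;
}

With these (assumed) figures a window three times the BDP already
triples the measured RTT, which is the qualitative behaviour visible
in the Cwnd/RTT figures.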
>
> > i am pretty sure that this would happen with any algos -including
> > h-tcp- that makes the probing more aggressive leaving the van
> > jacobson linear phase.
>
> Probably, but you seem to completely lack the analysis to find out
> why the rtos did actually happen, whether it was due to most of the
> window lost or perhaps spurious rtos?
Why are you suggesting spurious RTOs? To my understanding, spurious
RTOs should mostly be due to link-layer retransmissions, which are
orthogonal to the congestion control algorithm employed.
Say the average number of spurious timeouts is X, independent of the
algorithm; the remaining timeouts should then be caused by congestion,
which IMHO is what differentiates the NewReno/Westwood+ pair from the
Bic/Cubic one.
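In symbols: if T(algo) is the total number of timeouts measured with
a given algorithm, the reasoning above reads T(algo) = X + C(algo),
with X the algorithm-independent spurious timeouts and C(algo) the
congestion-induced ones, so any difference between the variants should
be attributable to C(algo).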
Moreover, the high number of timeouts caused by Bic (and other TCP
variants) has already been observed in [2] in a different scenario.
Best regards,
Luca
--
Refs.
[1] http://www.linuxfoundation.org/en/Net:TCP_testing
[2] S. Mascolo and F. Vacirca, "The Effect of Reverse Traffic on TCP
Congestion Control Algorithms", Proc. Protocols for Fast Long-Distance
Networks (PFLDnet), Nara, Japan, Feb. 2006.
(Available at http://c3lab.poliba.it/images/2/27/Pfldnet06.pdf)