Message-ID: <BANLkTint2-Fg35T9SqWPm3nOaoc1d=ZEnQ@mail.gmail.com>
Date: Tue, 26 Apr 2011 23:04:06 +0200
From: Dominik Kaspar <dokaspar.ietf@...il.com>
To: Eric Dumazet <eric.dumazet@...il.com>
Cc: Carsten Wolff <carsten@...ffcarsten.de>, netdev@...r.kernel.org
Subject: Re: Linux TCP's Robustness to Multipath Packet Reordering

On Tue, Apr 26, 2011 at 10:43 PM, Eric Dumazet <eric.dumazet@...il.com> wrote:
> On Monday, 25 April 2011 at 16:35 +0200, Dominik Kaspar wrote:
>
>> For the experiments, all default TCP options were used, meaning that
>> SACK, DSACK, and Timestamps were all enabled. Not sure how to turn on/off
>> TSO... so that is probably enabled, too. Path emulation is done with
>> tc/netem at the receiver interfaces (eth1, eth2) with this script:
>>
>> http://home.simula.no/~kaspar/static/netem.sh
>>
>
> What are the exact parameters? (queue size, for instance)
>
> It would be nice if you could give detailed stats after one run, on the
> receiver (since you have netem on the ingress side):
>
> tc -s -d qdisc

In these experiments, a queue size of 1000 packets was specified. I am
aware that this is typically referred to as "buffer bloat" and that it
causes the RTT and the cwnd to grow excessively. The smaller I
configure the queues, the longer TCP takes to "level up" to the
aggregate throughput. By keeping the queues this large, I hope to
identify more quickly why TCP is actually able to adjust to the
immense multipath reordering. Which parameters, other than the queue
size, could be highly relevant?
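
For reference, a netem setup with an explicit 1000-packet queue has
roughly this shape (an illustrative sketch only; the delay values are
placeholders, and the actual settings, including how the qdiscs are
attached on the ingress side, are in the netem.sh script linked above):

  # emulate two paths with different delays; "limit 1000" is the
  # per-qdisc queue size in packets (netem's default)
  tc qdisc add dev eth1 root netem delay 50ms limit 1000
  tc qdisc add dev eth2 root netem delay 100ms limit 1000
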
Thanks for the tip about printing tc/netem statistics after each run;
I will use "tc -s -d qdisc" next time.
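
Regarding the TSO question in the quoted mail: segmentation offload can
be inspected and toggled per interface with ethtool, and the relevant
TCP sysctls can be checked directly (the interface name below is just a
placeholder for the sender's NIC):

  ethtool -k eth0           # show offload settings, including TSO
  ethtool -K eth0 tso off   # turn TSO off on that interface
  sysctl net.ipv4.tcp_sack net.ipv4.tcp_dsack net.ipv4.tcp_timestamps
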
Greetings,
Dominik