Message-ID: <CAF1ivSYweQgxbCd_ejHmDi5w7puRkE7MbpV_hczhXXDca5DJ7A@mail.gmail.com>
Date: Tue, 18 Sep 2012 17:56:57 +0800
From: Lin Ming <mlin@...pku.edu.cn>
To: Eric Dumazet <eric.dumazet@...il.com>
Cc: networking <netdev@...r.kernel.org>
Subject: Re: HTB vs CoDel performance
On Tue, Sep 18, 2012 at 5:45 PM, Eric Dumazet <eric.dumazet@...il.com> wrote:
> On Tue, 2012-09-18 at 17:28 +0800, Lin Ming wrote:
>> Hi,
>>
>> I'm testing htb performance on a gigabit router running 2.6.32 kernel.
>> Packet path: PC1 ---> Router LAN port ---> Router WAN port ---> PC2
>>
>> pfifo_fast: 920Mbps
>> htb: 750Mbps, ~20% drops compared to pfifo_fast
>>
>> htb tc commands as below,
>> # tc qdisc add dev eth10 root handle 20: htb default 1
>> # tc class add dev eth10 parent 20: classid 20:1 htb prio 2 \
>>     rate 1024Mbit ceil 1024Mbit burst 1281408b cburst 1281408b
>>
>> The performance drop seems caused by the complex htb enqueue/dequeue algorithm.
>>
>> I had a quick look at CoDel code, seems it does not have so complex
>> data structure as HTB.
>> I'm going to backport CoDel. Is this a good choice?
>> Can I gain similar performance as pfifo_fast?
>
> codel is quite different from HTB: it has no rate control, so it's very
> fast. (But it has no prio differentiation like pfifo_fast with its 3
> bands.)
>
> So what are your exact needs ?
I need traffic priority/traffic shaping/rate control ... actually all
the QoS features on the router.
And even if I just set the rate to gigabit (no other settings), for example,
# tc qdisc add dev eth10 root handle 20: htb default 1
# tc class add dev eth10 parent 20: classid 20:1 htb prio 2 \
    rate 1024Mbit ceil 1024Mbit burst 1281408b cburst 1281408b
it should achieve performance similar to pfifo_fast.
Since codel has no rate control, it seems I have to find a way to optimize htb?
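
One common way to get both (a sketch only, assuming eth10 from the thread and a
kernel where the codel qdisc is available; the rate and burst values are the
illustrative ones from above, not tuned recommendations) is to keep HTB for
shaping and attach codel as the leaf qdisc under the HTB class:

```shell
# HTB root with a single default class for rate control
tc qdisc add dev eth10 root handle 20: htb default 1
tc class add dev eth10 parent 20: classid 20:1 htb prio 2 \
    rate 1024Mbit ceil 1024Mbit burst 1281408b cburst 1281408b
# Attach codel beneath the HTB leaf class to manage queueing delay;
# HTB decides when packets may leave, codel decides which to drop
tc qdisc add dev eth10 parent 20:1 handle 30: codel
```

This way HTB still pays its enqueue/dequeue cost, so it does not remove the
throughput gap by itself, but it shows the two qdiscs are complementary rather
than alternatives.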
--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html