Message-ID: <298f5c050905190731xf7e5f10v940ea731ef472443@mail.gmail.com>
Date: Tue, 19 May 2009 15:31:30 +0100
From: Antonio Almeida <vexwek@...il.com>
To: Jarek Poplawski <jarkao2@...il.com>,
Denys Fedoryschenko <denys@...p.net.lb>
Cc: Stephen Hemminger <shemminger@...tta.com>, netdev@...r.kernel.org,
kaber@...sh.net, davem@...emloft.net, devik@....cz,
Eric Dumazet <dada1@...mosbay.com>
Subject: Re: [PATCH iproute2] Re: HTB accuracy for high speed
I tested it with BFIFO using limit 6875000. (The analyser keeps sending
950Mbit/s of 800-byte TCP packets, so there are certainly lots of drops.)
The backlog is now huge, but the throughput stays much higher than the
configured ceil.
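For reference, the setup corresponds roughly to the following commands
(reconstructed from the stats below - the exact original invocation may
differ):

# tc qdisc add dev eth1 root handle 1: htb r2q 10 default 0
# tc class add dev eth1 parent 1: classid 1:1 htb rate 900000kbit ceil 900000kbit
# tc class add dev eth1 parent 1:1 classid 1:2 htb rate 900000kbit ceil 900000kbit
# tc class add dev eth1 parent 1:2 classid 1:10 htb rate 900000kbit ceil 900000kbit
# tc class add dev eth1 parent 1:10 classid 1:108 htb rate 555000kbit ceil 555000kbit prio 7 quantum 1514
# tc qdisc add dev eth1 parent 1:108 handle 108: bfifo limit 6875000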
# tc -s -d class ls dev eth1
class htb 1:10 parent 1:2 rate 900000Kbit ceil 900000Kbit burst 113962b/8 mpu 0b overhead 0b cburst 113962b/8 mpu 0b overhead 0b level 5
Sent 9542831672 bytes 11988482 pkt (dropped 0, overlimits 0 requeues 0)
rate 621765Kbit 97639pps backlog 0b 0p requeues 0
lended: 0 borrowed: 0 giants: 0
tokens: -186 ctokens: -186
class htb 1:1 root rate 900000Kbit ceil 900000Kbit burst 113962b/8 mpu 0b overhead 0b cburst 113962b/8 mpu 0b overhead 0b level 7
Sent 9542831672 bytes 11988482 pkt (dropped 0, overlimits 0 requeues 0)
rate 621765Kbit 97639pps backlog 0b 0p requeues 0
lended: 0 borrowed: 0 giants: 0
tokens: -186 ctokens: -186
class htb 1:2 parent 1:1 rate 900000Kbit ceil 900000Kbit burst 113962b/8 mpu 0b overhead 0b cburst 113962b/8 mpu 0b overhead 0b level 6
Sent 9542831672 bytes 11988482 pkt (dropped 0, overlimits 0 requeues 0)
rate 621765Kbit 97639pps backlog 0b 0p requeues 0
lended: 0 borrowed: 0 giants: 0
tokens: -186 ctokens: -186
class htb 1:108 parent 1:10 leaf 108: prio 7 quantum 1514 rate 555000Kbit ceil 555000Kbit burst 70901b/8 mpu 0b overhead 0b cburst 70901b/8 mpu 0b overhead 0b level 0
Sent 9549705928 bytes 11997118 pkt (dropped 6092846, overlimits 0 requeues 0)
rate 621764Kbit 97639pps backlog 0b 8636p requeues 0
lended: 11988482 borrowed: 0 giants: 0
tokens: -1008 ctokens: -1008
# tc -s -d qdisc ls dev eth1
qdisc htb 1: root r2q 10 default 0 direct_packets_stat 11955 ver 3.17
Sent 9608660872 bytes 12071182 pkt (dropped 6124502, overlimits 18190041 requeues 0)
rate 0bit 0pps backlog 0b 8636p requeues 0
qdisc bfifo 108: parent 1:108 limit 6875000b
Sent 9599144692 bytes 12059227 pkt (dropped 6124502, overlimits 0 requeues 0)
rate 0bit 0pps backlog 6874256b 8636p requeues 0
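A quick consistency check on these numbers: the leaf's measured rate of
621765Kbit is about 12% above its 555000Kbit ceil, and the bfifo is
completely full - 6874256b / 8636p = 796 bytes per queued packet (close
to the 800-byte test packets), right at the 6875000b limit.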
Antonio Almeida
On Tue, May 19, 2009 at 12:28 PM, Jarek Poplawski <jarkao2@...il.com> wrote:
> On Tue, May 19, 2009 at 02:21:28PM +0300, Denys Fedoryschenko wrote:
>> On Tuesday 19 May 2009 14:18:57 Jarek Poplawski wrote:
>> >
>> > Sure, if the queue is too short we could have a problem with reaching
>> > the expected rate; but here it's all backwards - it could actually
>> > "help" with the stats. ;-)
>> >
>> > Jarek P.
>> Well, I've had real experience with HTB: when I set buffers that were
>> too short on my QoS qdiscs, the incoming rate jumped well above the
>> configured total. When I set larger buffers (and, by the way, dropped
>> sfq and used bfifo), the rate came back down. No idea why - a bug, or
>> something specific to the protocols' congestion control. Maybe worth
>> trying...
>>
>
> Very strange. Anyway, "overlimits 0" suggests HTB always got packets
> when it needed...
>
> Jarek P.
>