Message-ID: <20090517201528.GA8552@ami.dom.local>
Date:	Sun, 17 May 2009 22:15:28 +0200
From:	Jarek Poplawski <jarkao2@...il.com>
To:	Antonio Almeida <vexwek@...il.com>
Cc:	netdev@...r.kernel.org, kaber@...sh.net, davem@...emloft.net,
	devik@....cz
Subject: Re: HTB accuracy for high speed

On Fri, May 15, 2009 at 03:49:31PM +0100, Antonio Almeida wrote:
> Hi!
> I've been using HTB in a Linux bridge and recently I noticed that,
> for high speeds, the configured rate/ceil is not respected as well
> as it is for lower speeds.
> I'm using a packet generator/analyser to inject over 950Mbps and see
> what comes back to it on the other side of my bridge. Generated
> packets are 800 bytes long. I noticed that, for several tc HTB
> rate/ceil configurations, the amount of traffic received by the
> analyser stays the same. See these values:
> 
> HTB conf      Analyser reception (bit/s)
> 476000Kbit    544.260.329
> 500000Kbit    545.880.017
> 510000Kbit    544.489.469
> 512000Kbit    546.890.972
> -------------------------
> 513000Kbit    596.061.383
> 520000Kbit    596.791.866
> 550000Kbit    596.543.271
> 554000Kbit    596.193.545
> -------------------------
> 555000Kbit    654.773.221
> 570000Kbit    654.996.381
> 590000Kbit    655.363.253
> 605000Kbit    654.112.017
> -------------------------
> 606000Kbit    728.262.237
> 665000Kbit    727.014.365
> -------------------------
> 
> There are these steps, and it looks like it doesn't matter whether I
> configure HTB to 555Mbit or to 605Mbit - the result is the same:
> 654Mbit. This is 18% more traffic than the configured value. I also
> realise that for smaller packets it gets worse, reaching 30% more
> traffic than what I configured. For packets of 1514 bytes the
> accuracy is quite good.
> I'm using kernel 2.6.25.
> 
> My 'tc -s -d class ls dev eth1' output:
> 
> class htb 1:10 parent 1:2 rate 1000Mbit ceil 1000Mbit burst 126375b/8
> mpu 0b overhead 0b cburst 126375b/8 mpu 0b overhead 0b level 5
>  Sent 51888579644 bytes 62067679 pkt (dropped 0, overlimits 0 requeues 0)
>  rate 653124Kbit 97656pps backlog 0b 0p requeues 0
>  lended: 0 borrowed: 0 giants: 0
>  tokens: 113 ctokens: 113
> 
> class htb 1:1 root rate 1000Mbit ceil 1000Mbit burst 126375b/8 mpu 0b
> overhead 0b cburst 126375b/8 mpu 0b overhead 0b level 7
>  Sent 51888579644 bytes 62067679 pkt (dropped 0, overlimits 0 requeues 0)
>  rate 653123Kbit 97656pps backlog 0b 0p requeues 0
>  lended: 0 borrowed: 0 giants: 0
>  tokens: 113 ctokens: 113
> 
> class htb 1:2 parent 1:1 rate 1000Mbit ceil 1000Mbit burst 126375b/8
> mpu 0b overhead 0b cburst 126375b/8 mpu 0b overhead 0b level 6
>  Sent 51888579644 bytes 62067679 pkt (dropped 0, overlimits 0 requeues 0)
>  rate 653124Kbit 97656pps backlog 0b 0p requeues 0
>  lended: 0 borrowed: 0 giants: 0
>  tokens: 113 ctokens: 113
> 
> class htb 1:108 parent 1:10 leaf 108: prio 7 quantum 1514 rate
> 555000Kbit ceil 555000Kbit burst 70901b/8 mpu 0b overhead 0b cburst
> 70901b/8 mpu 0b overhead 0b level 0
>  Sent 51888579644 bytes 62067679 pkt (dropped 27801917, overlimits 0 requeues 0)
>  rate 653124Kbit 97656pps backlog 0b 0p requeues 0
>  lended: 62067679 borrowed: 0 giants: 0
>  tokens: -798 ctokens: -798
> 
> As you can see, class htb 1:108's rate is 653124Kbit! Much bigger
> than its ceil.
> 

Here is some additional explanation. It looks like these rates above
500Mbit hit the design limits of packet scheduling. The currently used
internal resolution, PSCHED_TICKS_PER_SEC, is 1,000,000. A 550Mbit
rate with 800-byte packets means 550M/8/800 = 85938 packets/s, so on
average 1000000/85938 = 11.6 ticks per packet. Accounting only 11
ticks per packet means we leave 0.6*85938 = 51563 ticks per second
unused, allowing additional sending of 51563/11 = 4687 packets/s, or
4687*800*8 = 30Mbit. Of course it could be worse (up to 0.9 tick lost
per packet) depending on packet sizes vs. rates, and the effect grows
at higher rates.
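
For illustration, here is a minimal user-space sketch (plain C, not
kernel code) of the arithmetic above. It assumes each packet is
charged a whole number of PSCHED ticks and ignores the additional
quantization done by the rate table, so it only approximates what the
shaper really achieves:

/*
 * Sketch of the tick-truncation effect: the fractional part of the
 * per-packet transmit time is lost, so the achievable rate ends up
 * above the configured one, and more so for smaller packets.
 */
#include <stdio.h>

#define PSCHED_TICKS_PER_SEC 1000000.0

static void show(double rate_bps, int pkt_bytes)
{
	double pps     = rate_bps / 8.0 / pkt_bytes;     /* configured packets/s */
	double ticks   = PSCHED_TICKS_PER_SEC / pps;     /* exact ticks/packet   */
	long   charged = (long)ticks;                    /* truncated charge     */
	double max_pps = PSCHED_TICKS_PER_SEC / charged; /* achievable packets/s */
	double eff     = max_pps * pkt_bytes * 8.0;      /* achievable bit/s     */

	printf("%4.0f Mbit, %4d byte packets: %.1f -> %ld ticks/pkt, "
	       "achievable ~%.0f Mbit (+%.1f%%)\n",
	       rate_bps / 1e6, pkt_bytes, ticks, charged,
	       eff / 1e6, (eff - rate_bps) / rate_bps * 100.0);
}

int main(void)
{
	show(550e6, 800);	/* ~0.6 tick lost per packet                  */
	show(550e6, 1514);	/* larger packets: much smaller relative loss */
	return 0;
}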

Jarek P.
