Message-ID: <4A2620FD.8030708@trash.net>
Date:	Wed, 03 Jun 2009 09:06:37 +0200
From:	Patrick McHardy <kaber@...sh.net>
To:	Jarek Poplawski <jarkao2@...il.com>
CC:	Antonio Almeida <vexwek@...il.com>,
	Stephen Hemminger <shemminger@...tta.com>,
	netdev@...r.kernel.org, davem@...emloft.net, devik@....cz,
	Eric Dumazet <dada1@...mosbay.com>,
	Vladimir Ivashchenko <hazard@...ncoudi.com>
Subject: Re: [PATCH iproute2] Re: HTB accuracy for high speed

Jarek Poplawski wrote:
> Jarek Poplawski wrote, On 06/02/2009 11:37 PM:
> ...
> 
>> I described the reasoning here:
>> http://permalink.gmane.org/gmane.linux.network/128189
> 
> The link is dead now, so here is a quote:

Thanks.

> Jarek Poplawski wrote, On 05/17/2009 10:15 PM:
> 
>> Here is some additional explanation. It looks like rates above
>> 500Mbit hit the design limits of packet scheduling. The internal
>> resolution currently used, PSCHED_TICKS_PER_SEC, is 1,000,000. A
>> 550Mbit rate with 800-byte packets means 550M/8/800 = 85938 packets/s,
>> so on average 1000000/85938 = 11.6 ticks per packet. Accounting for
>> only 11 ticks means we leave 0.6*85938 = 51563 ticks per second
>> unused, allowing the additional sending of 51563/11 = 4687 packets/s,
>> or 4687*800*8 = 30Mbit. Of course it could be worse (up to 0.9
>> tick/packet lost) depending on packet sizes vs. rates, and the
>> effect grows at higher rates.
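Just to sanity-check those figures, a quick standalone back-of-the-envelope
program (plain userspace C, not the scheduler code; the constants are the
ones from the quote above -- note the quote rounds the 0.64 tick fraction
down to 0.6, so the exact leak comes out slightly above 30Mbit):

#include <stdio.h>

int main(void)
{
        const double ticks_per_sec = 1000000.0; /* PSCHED_TICKS_PER_SEC */
        const double rate = 550e6;              /* 550Mbit/s */
        const double plen = 800.0;              /* packet size in bytes */

        double pps = rate / 8.0 / plen;         /* ~85938 packets/s */
        double tpp = ticks_per_sec / pps;       /* ~11.64 ticks/packet */
        double whole = (double)(long)tpp;       /* ticks actually accounted: 11 */
        double spare = (tpp - whole) * pps;     /* truncated ticks per second */
        double extra = spare / whole;           /* extra packets/s squeezed in */

        printf("%.0f pkt/s, %.2f ticks/pkt\n", pps, tpp);
        printf("leak: %.0f pkt/s = %.1f Mbit/s\n",
               extra, extra * plen * 8.0 / 1e6);
        return 0;
}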

I see. Unfortunately, changing the scaling factors pushes the lower
end towards overflow. For example, when I changed the iproute-internal
factors a few years ago, Denys Fedoryshchenko reported breakage
triggered by this command:

.. tbf buffer 1024kb latency 500ms rate 128kbit peakrate 256kbit minburst 16384

The burst size calculated by TBF with the current parameters is
64000000. Increasing it by a factor of 16, as in your patch, results
in 1024000000, which means we're getting dangerously close to
overflowing: a buffer size increase or a rate decrease of slightly
more than a factor of 4 will already overflow.
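As a rough sketch of where those numbers come from (the burst value is
the buffer transmission time at the configured rate, expressed in 1us
ticks; this assumes tc's historical k == 1024 convention and is not the
actual iproute2 code):

#include <stdio.h>

int main(void)
{
        unsigned long long ticks_per_sec = 1000000ULL;  /* 1us ticks */
        unsigned long long buffer = 1024ULL * 1024;     /* 1024kb in bytes */
        unsigned long long rate = 128ULL * 1024 / 8;    /* 128kbit in bytes/s */

        unsigned long long burst = buffer * ticks_per_sec / rate;
        printf("burst:    %llu ticks\n", burst);        /* 64000000 */
        printf("burst*16: %llu ticks\n", burst * 16);   /* 1024000000 */
        printf("headroom: %.2fx until 2^32\n",
               (double)0xffffffffULL / (double)(burst * 16));
        return 0;
}

The headroom printed is about 4.19x, which is where the "slightly more
than a factor of 4" comes from.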

Mid-term we really need to move to 64 bit values and ns resolution,
otherwise this problem will just reappear as soon as someone tries
10gbit. I'm not sure what the best short-term fix is; I feel a bit
uneasy about changing the current factors given how close this brings
us to overflowing.
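To put numbers on the 10gbit point (same back-of-the-envelope style as
above, nothing here is real scheduler code):

#include <stdio.h>

int main(void)
{
        const double ticks_per_sec = 1e6;       /* current 1us resolution */
        const double rate = 10e9;               /* 10Gbit/s */
        const double sizes[] = { 1500.0, 800.0, 64.0 };

        for (int i = 0; i < 3; i++) {
                double pps = rate / 8.0 / sizes[i];
                printf("%4.0f byte pkts: %8.0f pkt/s, %.2f ticks/pkt\n",
                       sizes[i], pps, ticks_per_sec / pps);
        }
        return 0;
}

Everything below full-size frames is under one tick per packet, so the
current resolution can't represent such rates at all; at ns resolution
even a 64 byte packet still takes ~51 ticks, which is why 64 bit plus
ns looks like the right target.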