Message-Id: <20070322132021.M7074@visp.net.lb>
Date: Thu, 22 Mar 2007 15:26:12 +0200
From: "Denys" <denys@...p.net.lb>
To: Patrick McHardy <kaber@...sh.net>
Cc: netdev@...r.kernel.org,
Stephen Hemminger <shemminger@...ux-foundation.org>
Subject: Re: iproute2-2.6.20-070313 bug ?
Dear sir,
Sorry, I forgot to CC the other members of the discussion.
Is 1024kb (1 Mbyte, if I am not wrong) really huge?
For me it is fine, as long as I have the RAM. Another thing: it works well
with the old tc. If I have plenty of RAM and want a 32-second buffer, why
can I not have it, when I can see it really was possible before?
Possibly I am misunderstanding something...
In the real world it seems reasonable to me to have much bigger buffers,
especially when there is no resource problem (RAM, timer resolution, CPU). For
example, as I remember, we had a failure on one of our STM-1 links, and the
Ciscos at Teleglobe buffered about 20-30 seconds of data without major packet
loss.
Another thing: here is why I was using the buffer (and possibly I am using it
wrong). For example, a customer has a 128Kbit/s account, and I want to give
him a burst (256Kbit/s) so web pages open fast; but if he uses the bandwidth
non-stop, he will exhaust this buffer and be throttled back to 128Kbit/s. It
seems I can no longer provide such functionality.
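For example, something like this is what I had in mind (a sketch only; the
same setup as the script quoted below, but with the buffer reduced to 64kb,
my own number, which at 128kbit/s is about a 4-second burst and so should
still fit in the new limit):

/sbin/tc2 qdisc del dev ppp0 root
/sbin/tc2 qdisc add dev ppp0 root handle 1: prio
/sbin/tc2 qdisc add dev ppp0 parent 1:1 handle 2: tbf buffer 64kb latency 500ms rate 128kbit peakrate 256kbit minburst 16384
/sbin/tc2 filter add dev ppp0 parent 1:0 protocol ip prio 10 u32 match ip dst 0.0.0.0/0 flowid 2:1

With this, the customer can burst to 256Kbit/s only for a few seconds before
falling back to 128Kbit/s.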
On Thu, 22 Mar 2007 14:09:06 +0100, Patrick McHardy wrote
> Denys wrote:
> > /sbin/tc2 qdisc del dev ppp0 root
> > /sbin/tc2 qdisc add dev ppp0 root handle 1: prio
> > /sbin/tc2 qdisc add dev ppp0 parent 1:1 handle 2: tbf buffer 1024kb latency 500ms rate 128kbit peakrate 256kbit minburst 16384
> > /sbin/tc2 filter add dev ppp0 parent 1:0 protocol ip prio 10 u32 match ip dst 0.0.0.0/0 flowid 2:1
>
> That is an incredibly huge buffer value.
>
> > qdisc tbf 2: parent 1:1 rate 128000bit burst 4294932937b peakrate 256000bit minburst 16Kb lat 4.2s
>
> And it causes an overflow.
>
> The limit for the TBF burst value with nanosecond resolution is
> ~ 4 * rate (10^9 * burst / rate < 2^32 needs to hold), resulting
> in a worst-case latency of 4 seconds. I think this limit is in a
> reasonable range. Your configuration results in a worst-case
> queuing delay of 64s, and I doubt that you really want that.
>
> Obviously it's not good to break existing configurations, but I
> would argue that this configuration is broken.
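P.S. For my own understanding, the arithmetic behind the overflow (a rough
check, assuming 1024kb = 1048576 bytes and 128kbit/s = 16000 bytes/s):

# time to send the whole 1Mbyte burst at 128kbit/s, in nanoseconds
echo $(( 1048576 * 1000000000 / 16000 ))   # 65536000000, about 65.5 seconds
# largest value that fits in an unsigned 32-bit field
echo $(( 2**32 ))                          # 4294967296, about 4.29 seconds

So the 1Mbyte burst needs about 15 times more than a u32 nanosecond field can
hold, which is presumably why tc shows the wrapped-around burst and the 4.2s
latency in the output above.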
--
Virtual ISP S.A.L.