Message-ID: <05be01c8b71b$cbb0c9e0$f903a33a@SABINE>
Date: Fri, 16 May 2008 15:42:18 +0930
From: "Kingsley Foreman" <kingsley@...ernode.com.au>
To: "Jarek Poplawski" <jarkao2@...il.com>
Cc: "Patrick McHardy" <kaber@...sh.net>,
"Eric Dumazet" <dada1@...mosbay.com>,
"Andrew Morton" <akpm@...ux-foundation.org>,
<linux-kernel@...r.kernel.org>, <netdev@...r.kernel.org>
Subject: Re: NET_SCHED cbq dropping too many packets on a bonding interface
OK, after playing around a bit: if I use

tc qdisc change dev bond0 parent 1: pfifo limit 30

the dropped packets go away. I'm not sure whether that is considered normal
or not, but any limit under 30 gives me issues.
Kingsley
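[For anyone reproducing this, the threshold above could be found mechanically
with a bisection over the pfifo limit. A minimal sketch follows; drops_at()
is a hypothetical stub standing in for a real traffic test, which on an
actual host would set the limit with `tc qdisc change`, generate load, and
read the "dropped" counter from `tc -s qdisc show dev bond0`:]

```shell
#!/bin/sh
# Sketch: bisect the smallest pfifo limit that avoids drops.
# drops_at() is a stub matching the behaviour reported in this thread
# (limits below 30 drop packets, 30 and above do not); replace it with
# a real set-limit / run-traffic / read-counter cycle on a live system.
drops_at() {
    [ "$1" -lt 30 ] && echo 1 || echo 0
}

lo=1 hi=1000
while [ "$lo" -lt "$hi" ]; do
    mid=$(( (lo + hi) / 2 ))
    if [ "$(drops_at "$mid")" -eq 0 ]; then
        hi=$mid            # no drops at this limit: try smaller
    else
        lo=$(( mid + 1 ))  # drops seen: need a bigger queue
    fi
done
echo "smallest non-dropping limit: $lo"
```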
----- Original Message -----
From: "Jarek Poplawski" <jarkao2@...il.com>
To: "Kingsley Foreman" <kingsley@...ernode.com.au>
Cc: "Patrick McHardy" <kaber@...sh.net>; "Eric Dumazet"
<dada1@...mosbay.com>; "Andrew Morton" <akpm@...ux-foundation.org>;
<linux-kernel@...r.kernel.org>; <netdev@...r.kernel.org>
Sent: Friday, May 16, 2008 3:19 PM
Subject: Re: NET_SCHED cbq dropping too many packets on a bonding interface
> On Fri, May 16, 2008 at 06:57:23AM +0930, Kingsley Foreman wrote:
> ...
>> running
>>
>> tc qdisc add dev bond0 root pfifo limit 1000
>>
>> or
>>
>> tc qdisc add dev bond0 root handle 1: cbq bandwidth 2000Mbit avpkt 1000
>> cell 0
>> tc qdisc add dev bond0 parent 1: pfifo limit 1000
>>
>>
>> doesn't appear to be dropping packets.
>>
>
> Great! So it looks like there is no error here, unless significantly
> bigger queues are needed to stop this dropping compared to 2.6.22.
> You could try lowering this limit now to something like 10 to find
> where the drops start to appear. Why 2.6.22 doesn't need this at all
> is a mystery anyway (old scheduler?), and it would really take some
> work (like a git bisection) to find the reason for a difference of
> more than 5 or 10 packets.
>
> Thanks,
> Jarek P.
>
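[While narrowing the limit down as suggested above, the per-qdisc drop
counters can be read at each step; this is a read-only command and assumes
only that the bond0 interface from this thread exists:]

```shell
# Show per-qdisc statistics for the bonding interface, including the
# "dropped" counter that indicates the pfifo limit is too small.
tc -s qdisc show dev bond0
```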
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/