Message-Id: <1189000698.28083.22.camel@localhost.localdomain>
Date: Wed, 05 Sep 2007 15:58:18 +0200
From: Jesper Dangaard Brouer <jdb@...x.dk>
To: Patrick McHardy <kaber@...sh.net>
Cc: Jesper Dangaard Brouer <hawk@...u.dk>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
Stephen Hemminger <shemminger@...ux-foundation.org>
Subject: Re: [PATCH 2/2]: [NET_SCHED]: Making rate table lookups more flexible.

On Tue, 2007-09-04 at 18:25 +0200, Patrick McHardy wrote:
> Jesper Dangaard Brouer wrote:
> > On Sun, 2007-09-02 at 23:16 +0200, Patrick McHardy wrote:
> >
> >>Jesper Dangaard Brouer wrote:
> >>
> >>>On Sun, 2 Sep 2007, Patrick McHardy wrote:
> >>>
> >>>Let's focus on the general case, where the functionality actually is
> >>>needed right away.
> >>>
> >>>In the general case:
> >>>
> >>>- The rate table needs to be aligned (cell_align=-1).
> >>> (currently, we miscalculate up to 7 bytes on every lookup)
> >>
> >>We will always do that, that's a consequence of storing the
> >>transmission times for multiples of 8 bytes.
> >
> >
> > The issue is that we use the lower boundary for calculating the transmit
> > cost. Thus, a 15-byte packet only has a transmit cost of 8 bytes.
>
> I believe this is something that should be fixed anyway;
> it's better to overestimate than underestimate to stay
> in control of the queue.
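To recap what currently happens, here is a simplified sketch of the lookup
(illustrative only, not the exact kernel code; the function name is made up):

/* Simplified sketch: the slot index is the packet length right-shifted
 * by cell_log, which truncates.  With cell_log = 3, a 15 byte packet
 * indexes slot 1, the slot filled with the transmit time of an 8 byte
 * packet, which underestimates the cost. */
static unsigned int l2t_current(unsigned int *rtab, unsigned int pktlen,
                                int cell_log)
{
        return rtab[pktlen >> cell_log];        /* 15 >> 3 == 1 */
}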
Well, I have attached a patch that uses the upper boundary instead.
The patch uses the cell_align feature.
The patch itself is very simple, but figuring out what happens to the
rtab array requires a little illustration.

Illustration of the rate table array:
Legend:
rtab[x]   : Slot x of the rate table
xmit_sz   : Packet size whose transmit time is stored in rtab[x]
            (the table normally holds transmit times)
maps[a-b] : Packet sizes from a to b map into rtab[x]

Current/old rate table mapping (cell_log:3):
rtab[0]:=xmit_sz:0 maps[0-7]
rtab[1]:=xmit_sz:8 maps[8-15]
rtab[2]:=xmit_sz:16 maps[16-23]
rtab[3]:=xmit_sz:24 maps[24-31]
rtab[4]:=xmit_sz:32 maps[32-39]
rtab[5]:=xmit_sz:40 maps[40-47]
rtab[6]:=xmit_sz:48 maps[48-55]

New rate table mapping, with kernel cell_align support:
rtab[0]:=xmit_sz:8 maps[0-8]
rtab[1]:=xmit_sz:16 maps[9-16]
rtab[2]:=xmit_sz:24 maps[17-24]
rtab[3]:=xmit_sz:32 maps[25-32]
rtab[4]:=xmit_sz:40 maps[33-40]
rtab[5]:=xmit_sz:48 maps[41-48]
rtab[6]:=xmit_sz:56 maps[49-56]

New TC util on a kernel WITHOUT support for cell_align:
rtab[0]:=xmit_sz:8 maps[0-7]
rtab[1]:=xmit_sz:16 maps[8-15]
rtab[2]:=xmit_sz:24 maps[16-23]
rtab[3]:=xmit_sz:32 maps[24-31]
rtab[4]:=xmit_sz:40 maps[32-39]
rtab[5]:=xmit_sz:48 maps[40-47]
rtab[6]:=xmit_sz:56 maps[48-55]
Notice that without the kernel cell_align feature, we are only off by
one byte. That should be acceptable when somebody uses a new TC util
on an old kernel.
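
The idea behind the new mapping, roughly (a sketch, not the attached
patches themselves; tc_calc_xmittime() is tc's existing transmit-time
helper, the other names are only illustrative):

unsigned tc_calc_xmittime(unsigned rate, unsigned size);  /* from tc_core */

/* Userspace (tc) sketch: fill each slot with the transmit time of the
 * slot's UPPER boundary instead of its lower boundary. */
static void fill_rtab_upper(unsigned int *rtab, unsigned int rate,
                            int cell_log)
{
        int i;

        for (i = 0; i < 256; i++)
                rtab[i] = tc_calc_xmittime(rate, (i + 1) << cell_log);
}

/* Kernel sketch: with cell_align = -1 the lookup effectively rounds up.
 * With cell_log = 3, pktlen 9..16 all hit slot 1, which holds the cost
 * of 16 bytes.  A kernel without cell_align support uses plain
 * pktlen >> cell_log, giving the "off by one byte" case shown above. */
static unsigned int l2t_upper(unsigned int *rtab, unsigned int pktlen,
                              int cell_log, int cell_align)
{
        int slot = pktlen + cell_align;

        if (slot < 0)
                slot = 0;               /* pktlen 0 with cell_align = -1 */
        return rtab[slot >> cell_log];
}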
> We could additionally make the
> rate tables more finegrained (optionally).
That is actually already possible with the approach used to handle
overflow of the rate table ("TSO" large packet support). By setting
cell_log=0 and letting the overflow code handle the rest, we get a very
fine-grained lookup.
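
Sketched, assuming the overflow handling from the companion "TSO" patch
(simplified, not the actual code):

/* Sketch: cell_log = 0 gives one rtab slot per byte for sizes 0..255.
 * Larger packets overflow the 256-entry table and are approximated from
 * its upper part: full 256 byte chunks plus the remainder. */
static unsigned int l2t_cell_log0(unsigned int *rtab, unsigned int pktlen)
{
        if (pktlen > 255)
                return rtab[255] * (pktlen >> 8) + rtab[pktlen & 0xFF];
        return rtab[pktlen];
}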
> >>>- The existing tc overhead calc can be made more accurate.
> >>> (by adding overhead before doing the lookup, instead of the
> >>> current solution where the rate table is modified with its
> >>> limited resolution)
> >>
> >>Please demonstrate this with patches (one for the overhead
> >>calculation, one for the cell_align thing), then we can
> >>continue this discussion.
> >
> >
> > I have attached a patch for the overhead calculation.
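For reference, the idea in that overhead patch is roughly the following
(a sketch, not the patch itself; names are illustrative):

/* Old approach (sketched): the overhead is folded into the table when
 * userspace builds it, so it is limited to the cell_log resolution,
 * e.g. rtab[i] = tc_calc_xmittime(rate, (i << cell_log) + overhead);
 *
 * New approach (sketched): add the per-packet overhead to the packet
 * size before the lookup, so it is accounted for with byte accuracy. */
static unsigned int l2t_overhead(unsigned int *rtab, unsigned int pktlen,
                                 int cell_log, int overhead)
{
        return rtab[(pktlen + overhead) >> cell_log];
}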
Attached is a patch that uses "the cell_align thing".
> Thanks, I probably won't get to looking into this until
> after the netfilter workshop next week.
Okay, but I'll see you at the workshop, so I might bug you there ;-)
--
Med venlig hilsen / Best regards
Jesper Brouer
ComX Networks A/S
Linux Network developer
Cand. Scient Datalog / MSc.
Author of http://adsl-optimizer.dk
View attachment "upperbound_rate_table_aligned.patch" of type "text/x-patch" (2156 bytes)
View attachment "cleanup_tc_calc_rtable_git.patch" of type "text/x-patch" (6028 bytes)