Message-Id: <20070513.214236.130588832.taka@valinux.co.jp>
Date: Sun, 13 May 2007 21:42:36 +0900 (JST)
From: Hirokazu Takahashi <taka@...inux.co.jp>
To: netdev@...r.kernel.org
Cc: kaber@...sh.net, davem@...emloft.net, linux-net@...r.kernel.org
Subject: [PATCH] tbf scheduler: TSO support (updated)
Hi,
> I'm now thinking I can make it just hold a TSO packet until p->tokens
> reaches the size of the packet. I think it is straightforward
> implementation. I'll try this.
I re-implemented the patch; it is simpler than the previous one.
sch->dev->mtu is used to determine how many segments are included
in a TSO packet. The point is that the value of sch->dev->mtu has to
be treated as possibly broken, because it is set by administrators,
so the patch clamps it with q->max_size.
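To illustrate the accounting, here is a minimal userspace sketch (not part
of the patch; the sample values for len, mtu and max_size below are made up)
of how a TSO packet gets split into charging units, mirroring the
rmtu/segs/rest variables added to tbf_dequeue():

#include <stdio.h>

static unsigned int min_u(unsigned int a, unsigned int b)
{
	return a < b ? a : b;
}

int main(void)
{
	unsigned int len = 65226;	/* hypothetical TSO packet length */
	unsigned int dev_mtu = 1500;	/* sch->dev->mtu, possibly bogus */
	unsigned int max_size = 1514;	/* q->max_size from the bucket setup */

	/* clamp the admin-supplied mtu by max_size; fall back if mtu is 0 */
	unsigned int rmtu = dev_mtu ? min_u(max_size, dev_mtu) : max_size;
	unsigned int segs = len / rmtu;
	unsigned int rest = len - rmtu * segs;

	printf("charged as %u segment(s) of %u bytes", segs, rmtu);
	if (rest)
		printf(" plus a trailing segment of %u bytes", rest);
	printf("\n");
	return 0;
}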
Thanks,
Hirokazu Takahashi.
Signed-off-by: Hirokazu Takahashi <taka@...inux.co.jp>
--- linux-2.6.21/net/sched/sch_tbf.c.ORG 2007-05-08 20:59:28.000000000 +0900
+++ linux-2.6.21/net/sched/sch_tbf.c 2007-05-13 14:22:39.000000000 +0900
@@ -9,7 +9,8 @@
  * Authors:	Alexey Kuznetsov, <kuznet@....inr.ac.ru>
  *		Dmitry Torokhov <dtor@...l.ru> - allow attaching inner qdiscs -
  *						 original idea by Martin Devera
- *
+ * Fixes:
+ *	Hirokazu Takahashi <taka@...inux.co.jp> : TSO support
  */
 
 #include <linux/module.h>
@@ -139,7 +140,7 @@ static int tbf_enqueue(struct sk_buff *s
 	struct tbf_sched_data *q = qdisc_priv(sch);
 	int ret;
 
-	if (skb->len > q->max_size) {
+	if (skb->len > q->max_size && !(sch->dev->features & NETIF_F_GSO_MASK)) {
 		sch->qstats.drops++;
 #ifdef CONFIG_NET_CLS_POLICE
 		if (sch->reshape_fail == NULL || sch->reshape_fail(skb, sch))
@@ -205,6 +206,16 @@ static struct sk_buff *tbf_dequeue(struc
 		long toks, delay;
 		long ptoks = 0;
 		unsigned int len = skb->len;
+		/*
+		 * Note: a TSO packet may be much larger than the device
+		 * MTU and should be treated as a batch of ordinary
+		 * packets, so tokens have to accumulate until they
+		 * cover the whole length of the packet.
+		 */
+		long max_toks = max(len, q->buffer);
+		unsigned int rmtu = sch->dev->mtu ? min(q->max_size, sch->dev->mtu) : q->max_size;
+		unsigned int segs = len / rmtu;
+		unsigned int rest = len - rmtu * segs;
 
 		PSCHED_GET_TIME(now);
 
@@ -212,14 +223,18 @@ static struct sk_buff *tbf_dequeue(struc
 
 		if (q->P_tab) {
 			ptoks = toks + q->ptokens;
-			if (ptoks > (long)q->mtu)
-				ptoks = q->mtu;
-			ptoks -= L2T_P(q, len);
+			if (ptoks > (long)(q->mtu * (segs + !!rest)))
+				ptoks = q->mtu * (segs + !!rest);
+			ptoks -= L2T_P(q, rmtu) * segs;
+			if (rest)
+				ptoks -= L2T_P(q, rest);
 		}
 		toks += q->tokens;
-		if (toks > (long)q->buffer)
-			toks = q->buffer;
-		toks -= L2T(q, len);
+		if (toks > max_toks)
+			toks = max_toks;
+		toks -= L2T(q, rmtu) * segs;
+		if (rest)
+			toks -= L2T(q, rest);
 
 		if ((toks|ptoks) >= 0) {
 			q->t_c = now;
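
For example, with 1500-byte segments a 65226-byte TSO packet is charged as
43 full segments plus a 726-byte remainder, so the peak-rate ceiling grows
from q->mtu to q->mtu * 44, the ordinary bucket may fill up to
max(len, q->buffer) instead of q->buffer, and the packet simply waits on
the queue until the accumulated tokens cover all 44 charges.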