Message-Id: <20140901.152430.1711925724234542172.davem@davemloft.net>
Date: Mon, 01 Sep 2014 15:24:30 -0700 (PDT)
From: David Miller <davem@...emloft.net>
To: netdev@...r.kernel.org
Subject: [PATCH 0/9] Make dev_hard_start_xmit() work fundamentally on lists
After this patch set, dev_hard_start_xmit() will work fundamentally
on any and all SKB lists.
This opens the path for a clean implementation of pulling multiple
packets out during qdisc_restart(), and then passing that blob
in one shot to dev_hard_start_xmit().
There were two main architectural blockers to this:
1) The GSO handling: we kept the original GSO head SKB around
   simply because dev_hard_start_xmit() had no way to communicate
   to the caller how far into the segmented list it was able to
   go.  Now it can (see the first sketch after this list), so the
   GSO head SKB can be liberated immediately.  All of the special
   GSO head SKB destructor handling goes away too.
2) Validation of VLAN, CSUM, and segmentation characteristics was
   being performed inside of dev_hard_start_xmit().  If we want to
   truly batch, we have to let the higher levels do this.  In
   particular, this is now dequeue_skb()'s job (see the second
   sketch below).
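To make (1) concrete, here is a rough sketch of the list-based
contract along the lines of what this series implements (details
simplified; xmit_one() stands in for the per-packet transmit step):
the caller hands in a chain linked via skb->next and gets back
whatever could not be transmitted, with *ret carrying the last
driver return code:

	struct sk_buff *dev_hard_start_xmit(struct sk_buff *first,
					    struct net_device *dev,
					    struct netdev_queue *txq,
					    int *ret)
	{
		struct sk_buff *skb = first;
		int rc = NETDEV_TX_OK;

		while (skb) {
			struct sk_buff *next = skb->next;

			skb->next = NULL;
			rc = xmit_one(skb, dev, txq, next != NULL);
			if (unlikely(!dev_xmit_complete(rc))) {
				/* Hand the unsent remainder back to
				 * the caller instead of freeing it.
				 */
				skb->next = next;
				break;
			}
			skb = next;
			if (netif_xmit_stopped(txq) && skb) {
				rc = NETDEV_TX_BUSY;
				break;
			}
		}
		*ret = rc;
		return skb;
	}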
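And for (2), a minimal sketch of the caller-side validation step,
i.e. what dequeue_skb() now invokes before an SKB ever reaches
dev_hard_start_xmit() (simplified; the real function also handles
VLAN tag insertion and consults the device feature flags more
carefully before segmenting):

	static struct sk_buff *validate_xmit_skb(struct sk_buff *skb,
						 struct net_device *dev)
	{
		netdev_features_t features = netif_skb_features(skb);

		if (skb_is_gso(skb)) {
			/* Software-segment; the real code first checks
			 * whether the device can do this GSO type
			 * itself.  skb_gso_segment() returns a list
			 * linked via skb->next, which simply rides
			 * along to the caller.
			 */
			struct sk_buff *segs;

			segs = skb_gso_segment(skb, features);
			if (IS_ERR(segs))
				goto drop;
			if (segs) {
				consume_skb(skb);
				skb = segs;
			}
		} else if (skb->ip_summed == CHECKSUM_PARTIAL) {
			/* Finish the checksum in software if the
			 * device cannot do it for this packet.
			 */
			if (!(features & NETIF_F_ALL_CSUM) &&
			    skb_checksum_help(skb))
				goto drop;
		}
		return skb;

	drop:
		kfree_skb(skb);
		return NULL;
	}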
And with those two issues out of the way, it should now be trivial
to build experiments on top of this patch set; all of the framework
should be there now.  You could do something as simple as:
	skb = q->dequeue(q);
	if (skb)
		skb = validate_xmit_skb(skb, qdisc_dev(q));
	if (skb) {
		struct sk_buff *new, *head = skb;
		int limit = 5;	/* arbitrary burst bound */

		/* Chain up to 'limit' more driver-ready SKBs behind
		 * the first one via skb->next (for simplicity this
		 * assumes validate_xmit_skb() returned a single SKB).
		 */
		do {
			new = q->dequeue(q);
			if (new)
				new = validate_xmit_skb(new, qdisc_dev(q));
			if (new) {
				skb->next = new;
				skb = new;
			}
		} while (new && --limit);
		skb = head;	/* hand back the whole chain */
	}
inside of the else branch of dequeue_skb().
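The consumer side then stays symmetric: whatever chain dequeue_skb()
built goes at the device in one call, and a partial send comes
straight back as a chain to requeue.  A rough sketch of what the
sch_direct_xmit() side could look like (dev_requeue_skb() is the
existing requeue helper in sch_generic.c):

	skb = dev_hard_start_xmit(skb, dev, txq, &ret);
	if (skb) {
		/* 'skb' is the untransmitted remainder of the
		 * chain; stuff it back on the qdisc as one unit.
		 */
		dev_requeue_skb(skb, q);
	}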
Signed-off-by: David S. Miller <davem@...emloft.net>