Message-Id: <1179353523.10414.12.camel@w-sridhar2.beaverton.ibm.com>
Date: Wed, 16 May 2007 15:12:03 -0700
From: Sridhar Samudrala <sri@...ibm.com>
To: hadi@...erus.ca
Cc: David Miller <davem@...emloft.net>, xma@...ibm.com,
rdreier@...co.com, ak@...e.de, krkumar2@...ibm.com,
netdev@...r.kernel.org, netdev-owner@...r.kernel.org,
ashwin.chaugule@...unite.com,
Evgeniy Polyakov <johnpol@....mipt.ru>,
Gagan Arneja <gagan@...are.com>
Subject: Re: [WIP] [PATCH] WAS Re: [RFC] New driver API to speed up small
packets xmits
Jamal,

Here are some comments I have on your patch; see them inline.

Thanks
Sridhar
+static int try_get_tx_pkts(struct net_device *dev, struct Qdisc *q, int count)
+{
+	struct sk_buff *skb;
+	struct sk_buff_head *skbs = &dev->blist;
+	int tdq = count;
+
+	/*
+	 * very unlikely, but who knows ..
+	 * If this happens we dont try to grab more pkts
+	 */
+	if (!skb_queue_empty(&dev->blist))
+		return skb_queue_len(&dev->blist);
+
+	if (dev->gso_skb) {
+		count--;
+		__skb_queue_head(skbs, dev->gso_skb);
+		dev->gso_skb = NULL;
+	}
AFAIK, gso_skb can be a list of skb's. Can we add a list
to another list using __skb_queue_head()?
Also, if gso_skb is a list of multiple skb's, I think
count needs to be decremented by the number of segments in
gso_skb, not just by one.
+
+	while (count) {
+		skb = q->dequeue(q);
+		if (!skb)
+			break;
+		count--;
+		__skb_queue_head(skbs, skb);
+	}
+
+	return tdq - count;
+}
+
+static inline int try_tx_pkts(struct net_device *dev)
+{
+	return dev->hard_batch_xmit(&dev->blist, dev);
+}
+
+/* same comments as in qdisc_restart apply;
+ * at some point use shared code with qdisc_restart
+ */
+int batch_qdisc_restart(struct net_device *dev)
+{
+	struct Qdisc *q = dev->qdisc;
+	unsigned lockless = (dev->features & NETIF_F_LLTX);
+	int count = dev->xmit_win;
+	int ret = 0;
+
+	ret = try_get_tx_pkts(dev, q, count);
+	if (ret == 0)
+		return qdisc_qlen(q);
+
+	/* we have packets to send! */
+	if (!lockless) {
+		if (!netif_tx_trylock(dev))
+			return tx_islocked(NULL, dev, q);
+	}
+
+	/* all clear .. */
+	spin_unlock(&dev->queue_lock);
+
+	ret = NETDEV_TX_BUSY;
+	if (!netif_queue_stopped(dev))
+		ret = try_tx_pkts(dev);
try_tx_pkts() calls the device's batch xmit routine directly.
Don't we need to go through dev_hard_start_xmit() so that
dev_queue_xmit_nit() (packet taps) and GSO segmentation are
handled for each packet?
+
+	if (!lockless)
+		netif_tx_unlock(dev);
+
+	spin_lock(&dev->queue_lock);
+	q = dev->qdisc;
+
+	/* most likely result, packet went ok */
+	if (ret == NETDEV_TX_OK)
+		return qdisc_qlen(q);
+
+	/* only for lockless drivers .. */
+	if (ret == NETDEV_TX_LOCKED && lockless)
+		return tx_islocked(NULL, dev, q);
+
+	if (unlikely(ret != NETDEV_TX_BUSY && net_ratelimit()))
+		printk(KERN_WARNING "BUG %s code %d qlen %d\n",
+		       dev->name, ret, q->q.qlen);
+
+	return do_dev_requeue(NULL, dev, q);
+}