Message-ID: <OF4408981E.ACFC27B2-ON6525768F.0042EA92-6525768F.00436345@in.ibm.com>
Date: Thu, 17 Dec 2009 17:57:15 +0530
From: Krishna Kumar2 <krkumar2@...ibm.com>
To: Herbert Xu <herbert@...dor.apana.org.au>
Cc: "David S. Miller" <davem@...emloft.net>,
Jarek Poplawski <jarkao2@...il.com>, mst@...hat.com,
netdev@...r.kernel.org, Rusty Russell <rusty@...tcorp.com.au>,
Sridhar Samudrala <sri@...ibm.com>
Subject: Re: [RFC PATCH] Regression in linux 2.6.32 virtio_net seen with vhost-net
Herbert Xu <herbert@...dor.apana.org.au> wrote on 12/17/2009 05:38:56 PM:
> > And indeed, this new requeue behaviour was introduced in 2.6.32.
>
> Actually no, it was merely rearranged. It would appear that this
> behaviour has been around for over a year.
>
> It is even worse than I thought. Not only would new tx packets
> trigger this unnecessary queue run, but once it is triggered, it
> would consume 100% CPU as dev_requeue_skb unconditionally reschedules
> the queue!
>
> Tell me this is not true please...
Hi Herbert,
I am confused. Doesn't dequeue_skb return NULL for the 2nd through nth
skbs until the queue is restarted? If so, how is it broken?

__dev_xmit_skb calls sch_direct_xmit only once after the device is
stopped, and that call results in the skb being requeued. The next call
to __dev_xmit_skb calls qdisc_restart, which always bails out since
dequeue_skb returns NULL, so we also avoid rescheduling the queue after
the first resched. I also think the resched in dev_requeue_skb is
probably not required, since the driver will call netif_tx_wake_queue
anyway (sketches of both paths follow the dequeue_skb code below).
static inline struct sk_buff *dequeue_skb(struct Qdisc *q)
{
        struct sk_buff *skb = q->gso_skb;

        if (unlikely(skb)) {
                struct net_device *dev = qdisc_dev(q);
                struct netdev_queue *txq;

                /* check the reason of requeuing without tx lock first */
                txq = netdev_get_tx_queue(dev, skb_get_queue_mapping(skb));
                if (!netif_tx_queue_stopped(txq) &&
                    !netif_tx_queue_frozen(txq)) {
                        /* queue running again: release the requeued skb */
                        q->gso_skb = NULL;
                        q->q.qlen--;
                } else {
                        /* still stopped/frozen: pretend queue is empty */
                        skb = NULL;
                }
        } else {
                skb = q->dequeue(q);
        }

        return skb;
}
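
For completeness, here is my reading of the other two functions
involved, paraphrased from 2.6.32's net/sched/sch_generic.c (so treat
the exact lines as approximate rather than authoritative):

/* Requeue path: stash the skb in gso_skb and reschedule once */
static inline int dev_requeue_skb(struct sk_buff *skb, struct Qdisc *q)
{
        q->gso_skb = skb;
        q->qstats.requeues++;
        q->q.qlen++;    /* it's still part of the queue */
        __netif_schedule(q);

        return 0;
}

/* Restart path: bails out as soon as dequeue_skb() returns NULL */
static inline int qdisc_restart(struct Qdisc *q)
{
        struct netdev_queue *txq;
        struct net_device *dev;
        spinlock_t *root_lock;
        struct sk_buff *skb;

        /* Dequeue packet */
        skb = dequeue_skb(q);
        if (unlikely(!skb))
                return 0;

        root_lock = qdisc_lock(q);
        dev = qdisc_dev(q);
        txq = netdev_get_tx_queue(dev, skb_get_queue_mapping(skb));

        return sch_direct_xmit(skb, q, dev, txq, root_lock);
}

So as far as I can see, the __netif_schedule() from dev_requeue_skb()
fires only once per requeue: the rescheduled __qdisc_run() calls
qdisc_restart(), dequeue_skb() returns NULL while the tx queue is
stopped, and qdisc_restart() returns 0 without scheduling again.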
Thanks,
- KK