Message-ID: <20110309161013.GA7165@redhat.com>
Date: Wed, 9 Mar 2011 18:10:13 +0200
From: "Michael S. Tsirkin" <mst@...hat.com>
To: Shirley Ma <mashirle@...ibm.com>
Cc: Rusty Russell <rusty@...tcorp.com.au>,
Krishna Kumar2 <krkumar2@...ibm.com>,
David Miller <davem@...emloft.net>, kvm@...r.kernel.org,
netdev@...r.kernel.org, steved@...ibm.com,
Tom Lendacky <tahm@...ux.vnet.ibm.com>
Subject: Re: Network performance with small packets - continued
On Wed, Mar 09, 2011 at 07:45:43AM -0800, Shirley Ma wrote:
> On Wed, 2011-03-09 at 09:15 +0200, Michael S. Tsirkin wrote:
> > diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> > index 82dba5a..ebe3337 100644
> > --- a/drivers/net/virtio_net.c
> > +++ b/drivers/net/virtio_net.c
> > @@ -514,11 +514,11 @@ static unsigned int free_old_xmit_skbs(struct virtnet_info *vi)
> > struct sk_buff *skb;
> > unsigned int len, tot_sgs = 0;
> >
> > - while ((skb = virtqueue_get_buf(vi->svq, &len)) != NULL) {
> > + if ((skb = virtqueue_get_buf(vi->svq, &len)) != NULL) {
> > pr_debug("Sent skb %p\n", skb);
> > vi->dev->stats.tx_bytes += skb->len;
> > vi->dev->stats.tx_packets++;
> > - tot_sgs += skb_vnet_hdr(skb)->num_sg;
> > + tot_sgs = 2+MAX_SKB_FRAGS;
> > dev_kfree_skb_any(skb);
> > }
> > return tot_sgs;
>
> Return value should be different based on indirect or direct buffers
> here?
Something like that. Or we can assume no indirect buffers and take the
worst case. But just for testing, I think it should work as an estimate.
> > @@ -576,9 +576,6 @@ static netdev_tx_t start_xmit(struct sk_buff *skb, struct net_device *dev)
> > struct virtnet_info *vi = netdev_priv(dev);
> > int capacity;
> >
> > - /* Free up any pending old buffers before queueing new ones. */
> > - free_old_xmit_skbs(vi);
> > -
> > /* Try to transmit */
> > capacity = xmit_skb(vi, skb);
> >
> > @@ -605,6 +602,10 @@ static netdev_tx_t start_xmit(struct sk_buff *skb, struct net_device *dev)
> > skb_orphan(skb);
> > nf_reset(skb);
> >
> > + /* Free up any old buffers so we can queue new ones. */
> > + if (capacity < 2+MAX_SKB_FRAGS)
> > + capacity += free_old_xmit_skbs(vi);
> > +
> > /* Apparently nice girls don't return TX_BUSY; stop the queue
> > * before it gets out of hand. Naturally, this wastes entries. */
> > if (capacity < 2+MAX_SKB_FRAGS) {
>
> I tried a similar patch before; it didn't help much on TCP stream
> performance. But I didn't try multiple-stream TCP_RR.
>
> Shirley
There's a bug in my patch, by the way. Please try the following
instead (still untested).
diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 82dba5a..4477b9a 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -514,11 +514,11 @@ static unsigned int free_old_xmit_skbs(struct virtnet_info *vi)
struct sk_buff *skb;
unsigned int len, tot_sgs = 0;
- while ((skb = virtqueue_get_buf(vi->svq, &len)) != NULL) {
+ if ((skb = virtqueue_get_buf(vi->svq, &len)) != NULL) {
pr_debug("Sent skb %p\n", skb);
vi->dev->stats.tx_bytes += skb->len;
vi->dev->stats.tx_packets++;
- tot_sgs += skb_vnet_hdr(skb)->num_sg;
+ tot_sgs = 2+MAX_SKB_FRAGS;
dev_kfree_skb_any(skb);
}
return tot_sgs;
@@ -576,7 +576,7 @@ static netdev_tx_t start_xmit(struct sk_buff *skb, struct net_device *dev)
struct virtnet_info *vi = netdev_priv(dev);
int capacity;
- /* Free up any pending old buffers before queueing new ones. */
+ /* Free up any old buffers so we can queue new ones. */
free_old_xmit_skbs(vi);
/* Try to transmit */
@@ -605,6 +605,10 @@ static netdev_tx_t start_xmit(struct sk_buff *skb, struct net_device *dev)
skb_orphan(skb);
nf_reset(skb);
+ /* Free up any old buffers so we can queue new ones. */
+ if (capacity < 2+MAX_SKB_FRAGS)
+ capacity += free_old_xmit_skbs(vi);
+
/* Apparently nice girls don't return TX_BUSY; stop the queue
* before it gets out of hand. Naturally, this wastes entries. */
if (capacity < 2+MAX_SKB_FRAGS) {