Message-ID: <136ea81c65e847719c787c37679f8da3@AMSPEX02CL03.citrite.net>
Date: Tue, 4 Oct 2016 13:56:15 +0000
From: Paul Durrant <Paul.Durrant@...rix.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>
CC: "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"xen-devel@...ts.xenproject.org" <xen-devel@...ts.xenproject.org>,
Wei Liu <wei.liu2@...rix.com>,
David Vrabel <david.vrabel@...rix.com>
Subject: RE: [Xen-devel] [PATCH v2 net-next 4/7] xen-netback: immediately wake
tx queue when guest rx queue has space
> -----Original Message-----
> From: Konrad Rzeszutek Wilk [mailto:konrad.wilk@...cle.com]
> Sent: 04 October 2016 13:49
> To: Paul Durrant <Paul.Durrant@...rix.com>
> Cc: netdev@...r.kernel.org; xen-devel@...ts.xenproject.org; Wei Liu
> <wei.liu2@...rix.com>; David Vrabel <david.vrabel@...rix.com>
> Subject: Re: [Xen-devel] [PATCH v2 net-next 4/7] xen-netback: immediately
> wake tx queue when guest rx queue has space
>
> On Tue, Oct 04, 2016 at 02:29:15AM -0700, Paul Durrant wrote:
> > From: David Vrabel <david.vrabel@...rix.com>
> >
> > When an skb is removed from the guest rx queue, immediately wake the
> > tx queue, instead of waiting until all queued skbs have been processed.
>
> Please, could the description explain why?
>
Is it not reasonably obvious that it improves parallelism between filling and draining the queue? I could add a comment if you think it needs spelling out.
Paul
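
(For reference, assembling the hunk quoted below, the dequeue path after
this patch looks roughly as follows. This is a sketch, not the verbatim
file: the declaration of the local "skb" sits just above the quoted
context, so it is reproduced here for completeness.)

  /* Sketch of xenvif_rx_dequeue() with the patch below applied. */
  static struct sk_buff *xenvif_rx_dequeue(struct xenvif_queue *queue)
  {
          struct sk_buff *skb;

          spin_lock_irq(&queue->rx_queue.lock);

          skb = __skb_dequeue(&queue->rx_queue);
          if (skb) {
                  queue->rx_queue_len -= skb->len;

                  /* As soon as dequeuing this skb leaves space below the
                   * configured maximum, wake the corresponding tx queue,
                   * rather than waiting for the rx kthread to come back
                   * around and call the (now removed) maybe_wake helper.
                   */
                  if (queue->rx_queue_len < queue->rx_queue_max) {
                          struct netdev_queue *txq;

                          txq = netdev_get_tx_queue(queue->vif->dev, queue->id);
                          netif_tx_wake_queue(txq);
                  }
          }

          spin_unlock_irq(&queue->rx_queue.lock);

          return skb;
  }

The check and the wake-up now happen under the same rx_queue.lock as the
dequeue itself, which is what allows the separate xenvif_rx_queue_maybe_wake()
call in the kthread loop to be dropped.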
> >
> > Signed-off-by: David Vrabel <david.vrabel@...rix.com> [re-based]
> > Signed-off-by: Paul Durrant <paul.durrant@...rix.com>
> > ---
> > Cc: Wei Liu <wei.liu2@...rix.com>
> > ---
> > drivers/net/xen-netback/rx.c | 24 ++++++++----------------
> > 1 file changed, 8 insertions(+), 16 deletions(-)
> >
> > diff --git a/drivers/net/xen-netback/rx.c b/drivers/net/xen-netback/rx.c
> > index b0ce4c6..9548709 100644
> > --- a/drivers/net/xen-netback/rx.c
> > +++ b/drivers/net/xen-netback/rx.c
> > @@ -92,27 +92,21 @@ static struct sk_buff *xenvif_rx_dequeue(struct xenvif_queue *queue)
> >  	spin_lock_irq(&queue->rx_queue.lock);
> >  
> >  	skb = __skb_dequeue(&queue->rx_queue);
> > -	if (skb)
> > +	if (skb) {
> >  		queue->rx_queue_len -= skb->len;
> > +		if (queue->rx_queue_len < queue->rx_queue_max) {
> > +			struct netdev_queue *txq;
> > +
> > +			txq = netdev_get_tx_queue(queue->vif->dev, queue->id);
> > +			netif_tx_wake_queue(txq);
> > +		}
> > +	}
> >  
> >  	spin_unlock_irq(&queue->rx_queue.lock);
> >  
> >  	return skb;
> >  }
> >  
> > -static void xenvif_rx_queue_maybe_wake(struct xenvif_queue *queue)
> > -{
> > -	spin_lock_irq(&queue->rx_queue.lock);
> > -
> > -	if (queue->rx_queue_len < queue->rx_queue_max) {
> > -		struct net_device *dev = queue->vif->dev;
> > -
> > -		netif_tx_wake_queue(netdev_get_tx_queue(dev, queue->id));
> > -	}
> > -
> > -	spin_unlock_irq(&queue->rx_queue.lock);
> > -}
> > -
> >  static void xenvif_rx_queue_purge(struct xenvif_queue *queue)
> >  {
> >  	struct sk_buff *skb;
> > @@ -585,8 +579,6 @@ int xenvif_kthread_guest_rx(void *data)
> >  	 */
> >  	xenvif_rx_queue_drop_expired(queue);
> >  
> > -	xenvif_rx_queue_maybe_wake(queue);
> > -
> >  	cond_resched();
> >  }
> > --
> > 2.1.4
> >
> >