Message-ID: <3e57e2cde71a4270a8fef395c31f5f0e8a130c28.1351524502.git.mst@redhat.com>
Date: Mon, 29 Oct 2012 17:49:51 +0200
From: "Michael S. Tsirkin" <mst@...hat.com>
To: unlisted-recipients:; (no To-header on input)
Cc: "Michael S. Tsirkin" <mst@...hat.com>,
"David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Alexander Duyck <alexander.h.duyck@...el.com>,
Ian Campbell <Ian.Campbell@...rix.com>, kvm@...r.kernel.org,
virtualization@...ts.linux-foundation.org, netdev@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: [PATCH net-next 8/8] vhost-net: reduce vq polling on tx zerocopy

It seems that to avoid deadlocks it is enough to poll the vq before we
are going to use the last buffer. This should be faster than the
unconditional polling introduced by commit
c70aa540c7a9f67add11ad3161096fb95233aa2e.

Signed-off-by: Michael S. Tsirkin <mst@...hat.com>
---
 drivers/vhost/net.c | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index 8e9de79..3967f82 100644
--- a/drivers/vhost/net.c
+++ b/drivers/vhost/net.c
@@ -197,8 +197,16 @@ static void vhost_zerocopy_callback(struct ubuf_info *ubuf, int status)
 {
 	struct vhost_ubuf_ref *ubufs = ubuf->ctx;
 	struct vhost_virtqueue *vq = ubufs->vq;
-
-	vhost_poll_queue(&vq->poll);
+	int cnt = atomic_read(&ubufs->kref.refcount);
+
+	/*
+	 * Trigger polling thread if guest stopped submitting new buffers:
+	 * in this case, the refcount after decrement will eventually reach 1
+	 * so here it is 2.
+	 * We also trigger polling periodically after each 16 packets.
+	 */
+	if (cnt <= 2 || !(cnt % 16))
+		vhost_poll_queue(&vq->poll);
 	/* set len to mark this desc buffers done DMA */
 	vq->heads[ubuf->desc].len = status ?
 		VHOST_DMA_FAILED_LEN : VHOST_DMA_DONE_LEN;
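
Illustration (not part of the patch): a minimal userspace C sketch of the
throttling heuristic above, i.e. queue the poller only when the outstanding
zerocopy reference count is about to drain (cnt <= 2) or once every 16
completions. The names pending_refs, VHOST_ZC_BATCH and poll_queue() are
hypothetical stand-ins for illustration only, not kernel symbols; the real
callback reads ubufs->kref.refcount and calls vhost_poll_queue().

#include <stdio.h>
#include <stdatomic.h>

#define VHOST_ZC_BATCH 16	/* poll at least once per this many completions */

static atomic_int pending_refs;	/* stand-in for ubufs->kref.refcount */
static unsigned long polls;	/* how many times the poller was queued */

static void poll_queue(void)	/* stand-in for vhost_poll_queue() */
{
	polls++;
}

/* Mirrors the check added in vhost_zerocopy_callback() above. */
static void zerocopy_complete(void)
{
	int cnt = atomic_load(&pending_refs);

	/* Queue the poller when almost drained, or every 16th completion. */
	if (cnt <= 2 || !(cnt % VHOST_ZC_BATCH))
		poll_queue();

	/* The completion drops one reference. */
	atomic_fetch_sub(&pending_refs, 1);
}

int main(void)
{
	int inflight = 1000;

	/* One reference per in-flight buffer plus the base reference. */
	atomic_store(&pending_refs, inflight + 1);

	for (int i = 0; i < inflight; i++)
		zerocopy_complete();

	printf("%d completions, %lu poll wakeups\n", inflight, polls);
	return 0;
}

Running the sketch shows roughly inflight/16 wakeups instead of one per
completion, which is the point of the patch.
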
--
MST