Message-Id: <1530340438-3039-5-git-send-email-xiangxia.m.yue@gmail.com>
Date: Fri, 29 Jun 2018 23:33:58 -0700
From: xiangxia.m.yue@...il.com
To: jasowang@...hat.com
Cc: mst@...hat.com, makita.toshiaki@....ntt.co.jp,
virtualization@...ts.linux-foundation.org, netdev@...r.kernel.org,
Tonghao Zhang <xiangxia.m.yue@...il.com>,
Tonghao Zhang <zhangtonghao@...ichuxing.com>
Subject: [PATCH net-next v3 4/4] net: vhost: add rx busy polling in tx path
From: Tonghao Zhang <xiangxia.m.yue@...il.com>
This patch improves the guest receive and transmit performance.
On the handle_tx side, we now poll the sock receive queue at the
same time; handle_rx already does the same in the other direction.
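For reference, vhost_net_busy_poll() is introduced earlier in this
series (patch 3/4). The sketch below is a minimal approximation of
that helper, assuming it mirrors the open-coded loop removed in the
hunk below while additionally watching the rx socket for pending
data via sk_has_rx_data(); the real signature, locking, and
notification handling may differ:

	/* Hypothetical sketch, not the actual helper from patch 3/4. */
	static void vhost_net_busy_poll(struct vhost_net *net,
					struct vhost_virtqueue *rvq,
					struct vhost_virtqueue *tvq,
					bool rx)
	{
		struct socket *sock = rvq->private_data;
		/* Poll on behalf of the ring being serviced. */
		struct vhost_virtqueue *vq = rx ? rvq : tvq;
		unsigned long endtime;

		preempt_disable();
		endtime = busy_clock() + vq->busyloop_timeout;
		/* Spin until the deadline, stopping early once either
		 * the avail ring gains work or the socket has rx data.
		 */
		while (vhost_can_busy_poll(vq->dev, endtime) &&
		       !(sock && sk_has_rx_data(sock->sk)) &&
		       vhost_vq_avail_empty(vq->dev, vq))
			cpu_relax();
		preempt_enable();
	}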
We set poll-us=100us and use iperf3 to measure bandwidth and
netperf to measure throughput and mean latency. While the tests
run, the vhost-net kthread of that VM stays at 100% CPU. The
commands are shown below.
iperf3 -s -D
iperf3 -c IP -i 1 -P 1 -t 20 -M 1400
or
netserver
netperf -H IP -t TCP_RR -l 20 -- -O "THROUGHPUT,MEAN_LATENCY"
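The poll-us=100 setting above corresponds to the per-virtqueue
busy-loop timeout that userspace programs with the
VHOST_SET_VRING_BUSYLOOP_TIMEOUT ioctl. A minimal userspace sketch;
the helper name is made up, and an already-open /dev/vhost-net
descriptor plus error handling are assumed:

	#include <sys/ioctl.h>
	#include <linux/vhost.h>

	static int set_busyloop_timeout(int vhost_fd, unsigned int vring_idx,
					unsigned int timeout_us)
	{
		struct vhost_vring_state state = {
			.index = vring_idx,
			.num   = timeout_us,	/* e.g. 100 for poll-us=100us */
		};

		return ioctl(vhost_fd, VHOST_SET_VRING_BUSYLOOP_TIMEOUT, &state);
	}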
host -> guest:
iperf3:
* With the patch: 27.0 Gbits/sec
* Without the patch: 14.4 Gbits/sec
netperf (TCP_RR):
* With the patch: 48039.56 trans/s, 20.64us mean latency
* Without the patch: 46027.07 trans/s, 21.58us mean latency
This patch also improves the guest transmit performance.
guest -> host:
iperf3:
* With the patch: 27.2 Gbits/sec
* Without the patch: 24.4 Gbits/sec
netperf (TCP_RR):
* With the patch: 47963.25 trans/s, 20.71us mean latency
* Without the patch: 45796.70 trans/s, 21.68us mean latency
Signed-off-by: Tonghao Zhang <zhangtonghao@...ichuxing.com>
---
drivers/vhost/net.c | 10 +++-------
1 file changed, 3 insertions(+), 7 deletions(-)
diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index 458f81d..fb43d82 100644
--- a/drivers/vhost/net.c
+++ b/drivers/vhost/net.c
@@ -478,17 +478,13 @@ static int vhost_net_tx_get_vq_desc(struct vhost_net *net,
 				    struct iovec iov[], unsigned int iov_size,
 				    unsigned int *out_num, unsigned int *in_num)
 {
-	unsigned long uninitialized_var(endtime);
+	struct vhost_net_virtqueue *nvq_rx = &net->vqs[VHOST_NET_VQ_RX];
 	int r = vhost_get_vq_desc(vq, vq->iov, ARRAY_SIZE(vq->iov),
 				  out_num, in_num, NULL, NULL);
 
 	if (r == vq->num && vq->busyloop_timeout) {
-		preempt_disable();
-		endtime = busy_clock() + vq->busyloop_timeout;
-		while (vhost_can_busy_poll(vq->dev, endtime) &&
-		       vhost_vq_avail_empty(vq->dev, vq))
-			cpu_relax();
-		preempt_enable();
+		vhost_net_busy_poll(net, &nvq_rx->vq, vq, false);
+
 		r = vhost_get_vq_desc(vq, vq->iov, ARRAY_SIZE(vq->iov),
 				      out_num, in_num, NULL, NULL);
 	}
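For readability, this is how vhost_net_tx_get_vq_desc() reads after
the patch, reconstructed from the hunk above; the vq parameter line
and the final return statement fall outside the hunk context and are
assumed:

	static int vhost_net_tx_get_vq_desc(struct vhost_net *net,
					    struct vhost_virtqueue *vq,
					    struct iovec iov[], unsigned int iov_size,
					    unsigned int *out_num, unsigned int *in_num)
	{
		struct vhost_net_virtqueue *nvq_rx = &net->vqs[VHOST_NET_VQ_RX];
		int r = vhost_get_vq_desc(vq, vq->iov, ARRAY_SIZE(vq->iov),
					  out_num, in_num, NULL, NULL);

		if (r == vq->num && vq->busyloop_timeout) {
			/* No tx descriptor available: busy poll the tx avail
			 * ring and the rx socket before trying once more.
			 */
			vhost_net_busy_poll(net, &nvq_rx->vq, vq, false);

			r = vhost_get_vq_desc(vq, vq->iov, ARRAY_SIZE(vq->iov),
					      out_num, in_num, NULL, NULL);
		}

		return r;
	}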
--
1.8.3.1