Message-ID: <f85bfa97-ab9c-2d51-2053-1fe6bb3d45bc@redhat.com>
Date: Tue, 21 Aug 2018 08:33:00 +0800
From: Jason Wang <jasowang@...hat.com>
To: xiangxia.m.yue@...il.com, mst@...hat.com,
makita.toshiaki@....ntt.co.jp
Cc: virtualization@...ts.linux-foundation.org, netdev@...r.kernel.org
Subject: Re: [PATCH net-next v8 7/7] net: vhost: make busyloop_intr more
accurate
On 2018-08-19 20:11, xiangxia.m.yue@...il.com wrote:
> From: Tonghao Zhang <xiangxia.m.yue@...il.com>
>
> The patch uses vhost_has_work_pending() to check whether
> the specified handler is scheduled, because in most cases
> vhost_has_work() returns true when the other side's handler
> has been added to the worker list. Use vhost_has_work_pending()
> instead of vhost_has_work().
>
> Topology:
> [Host] ->linux bridge -> tap vhost-net ->[Guest]
>
> TCP_STREAM (netperf):
> * Without the patch: 38035.39 Mbps, 3.37 us mean latency
> * With the patch: 38409.44 Mbps, 3.34 us mean latency
The improvement is not as obvious as in the last version. Do you imply
there are some recent changes to vhost that made it faster?
Thanks
>
> Signed-off-by: Tonghao Zhang <xiangxia.m.yue@...il.com>
> ---
> drivers/vhost/net.c | 9 ++++++---
> 1 file changed, 6 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
> index db63ae2..b6939ef 100644
> --- a/drivers/vhost/net.c
> +++ b/drivers/vhost/net.c
> @@ -487,10 +487,8 @@ static void vhost_net_busy_poll(struct vhost_net *net,
> endtime = busy_clock() + busyloop_timeout;
>
> while (vhost_can_busy_poll(endtime)) {
> - if (vhost_has_work(&net->dev)) {
> - *busyloop_intr = true;
> + if (vhost_has_work(&net->dev))
> break;
> - }
>
> if ((sock_has_rx_data(sock) &&
> !vhost_vq_avail_empty(&net->dev, rvq)) ||
> @@ -513,6 +511,11 @@ static void vhost_net_busy_poll(struct vhost_net *net,
> !vhost_has_work_pending(&net->dev, VHOST_NET_VQ_RX))
> vhost_net_enable_vq(net, rvq);
>
> + if (vhost_has_work_pending(&net->dev,
> + poll_rx ?
> + VHOST_NET_VQ_RX: VHOST_NET_VQ_TX))
> + *busyloop_intr = true;
> +
> mutex_unlock(&vq->mutex);
> }
>