Date:   Tue, 31 Oct 2017 18:36:00 +0200
From:   "Michael S. Tsirkin" <mst@...hat.com>
To:     Jason Wang <jasowang@...hat.com>
Cc:     kvm@...r.kernel.org, virtualization@...ts.linux-foundation.org,
        netdev@...r.kernel.org, linux-kernel@...r.kernel.org,
        Wei Xu <wexu@...hat.com>,
        Matthew Rosato <mjrosato@...ux.vnet.ibm.com>
Subject: Re: [PATCH net-next] vhost_net: conditionally enable tx polling

On Tue, Oct 31, 2017 at 06:27:20PM +0800, Jason Wang wrote:
> We always poll tx for the socket; this is suboptimal since:
> 
> - we only want to be notified when sndbuf is available
> - it slightly increases the waitqueue traversal time and, more
>   importantly, vhost could not benefit from commit 9e641bdcfa4e
>   ("net-tun: restructure tun_do_read for better sleep/wakeup efficiency")
>   even if we've stopped rx polling during handle_rx(), since the tx poll
>   entry was still left in the waitqueue.
> 
> Pktgen from a remote host to a VM over mlx4 shows a 5.5% improvement in
> rx PPS (from 1.27 Mpps to 1.34 Mpps).
> 
> Cc: Wei Xu <wexu@...hat.com>
> Cc: Matthew Rosato <mjrosato@...ux.vnet.ibm.com>
> Signed-off-by: Jason Wang <jasowang@...hat.com>
> ---

Now that vhost_poll_stop() happens on the data path
a lot, I'd say the
        if (poll->wqh)
check there should be wrapped in unlikely().
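
For reference, a minimal sketch of that suggestion (assuming the usual
vhost_poll_stop() body in drivers/vhost/vhost.c; with this patch tx
polling is off most of the time, so on the tx data path the wqh check
should almost always be false):

void vhost_poll_stop(struct vhost_poll *poll)
{
	/* Sketch only: mark the stop-needed case as unlikely(), since the
	 * poll entry is normally not on the waitqueue when this runs on
	 * the data path. */
	if (unlikely(poll->wqh)) {
		remove_wait_queue(poll->wqh, &poll->wait);
		poll->wqh = NULL;
	}
}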


>  drivers/vhost/net.c | 11 ++++++++---
>  1 file changed, 8 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
> index 68677d9..286c3e4 100644
> --- a/drivers/vhost/net.c
> +++ b/drivers/vhost/net.c
> @@ -471,6 +471,7 @@ static void handle_tx(struct vhost_net *net)
>  		goto out;
>  
>  	vhost_disable_notify(&net->dev, vq);
> +	vhost_net_disable_vq(net, vq);
>  
>  	hdr_size = nvq->vhost_hlen;
>  	zcopy = nvq->ubufs;
> @@ -556,6 +557,8 @@ static void handle_tx(struct vhost_net *net)
>  					% UIO_MAXIOV;
>  			}
>  			vhost_discard_vq_desc(vq, 1);
> +			if (err == -EAGAIN)
> +				vhost_net_enable_vq(net, vq);
>  			break;
>  		}
>  		if (err != len)

I would probably just enable it unconditionally here. Why not?
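
I.e., something along these lines (a hypothetical variant of the hunk
above, not what the posted patch does):

			vhost_discard_vq_desc(vq, 1);
			/* Hypothetical variant: re-enable tx polling on any
			 * tx error before breaking out, not only -EAGAIN. */
			vhost_net_enable_vq(net, vq);
			break;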


> @@ -1145,9 +1148,11 @@ static long vhost_net_set_backend(struct vhost_net *n, unsigned index, int fd)
>  		r = vhost_vq_init_access(vq);
>  		if (r)
>  			goto err_used;
> -		r = vhost_net_enable_vq(n, vq);
> -		if (r)
> -			goto err_used;
> +		if (index == VHOST_NET_VQ_RX) {
> +			r = vhost_net_enable_vq(n, vq);
> +			if (r)
> +				goto err_used;
> +		}
>  
>  		oldubufs = nvq->ubufs;
>  		nvq->ubufs = ubufs;

This last chunk seems questionable. If the queue already has buffers in
it when we connect the backend, we'll miss a wakeup.
I suspect this can happen during migration.
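
To make that concrete: one hypothetical way to cover it (not something
proposed in this thread) would be to schedule handle_tx() once after the
backend is attached if the avail ring is already non-empty, e.g.:

		if (index == VHOST_NET_VQ_TX &&
		    !vhost_vq_avail_empty(&n->dev, vq)) {
			/* Hypothetical: the guest queued tx buffers before
			 * the backend was set (e.g. across migration); kick
			 * the worker once so they are not stranded while tx
			 * polling stays disabled. */
			vhost_poll_queue(&vq->poll);
		}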


> -- 
> 2.7.4
