Message-ID: <20170110045213-mutt-send-email-mst@kernel.org>
Date:   Tue, 10 Jan 2017 04:57:39 +0200
From:   "Michael S. Tsirkin" <mst@...hat.com>
To:     Jason Wang <jasowang@...hat.com>
Cc:     virtualization@...ts.linux-foundation.org, netdev@...r.kernel.org,
        kvm@...r.kernel.org, stephen@...workplumber.org, wexu@...hat.com,
        stefanha@...hat.com
Subject: Re: [PATCH V4 net-next 1/3] vhost: better detection of available
 buffers

On Tue, Jan 10, 2017 at 10:22:42AM +0800, Jason Wang wrote:
> 
> 
> > On Jan 10, 2017 at 07:10, Michael S. Tsirkin wrote:
> > On Mon, Jan 09, 2017 at 10:59:16AM +0800, Jason Wang wrote:
> > > 
> > > > On Jan 7, 2017 at 03:55, Michael S. Tsirkin wrote:
> > > > On Fri, Jan 06, 2017 at 10:13:15AM +0800, Jason Wang wrote:
> > > > > This patch makes several tweaks to vhost_vq_avail_empty() for
> > > > > better performance:
> > > > > 
> > > > > - check the cached avail index first, which can avoid a userspace memory access.
> > > > > - use unlikely() for the failure of the userspace access
> > > > > - check against vq->last_avail_idx instead of the cached avail index as
> > > > >     the last step.
> > > > > 
> > > > > This patch is needed for batching support, which needs to peek at
> > > > > whether there are still available buffers in the ring.
> > > > > 
> > > > > Reviewed-by: Stefan Hajnoczi <stefanha@...hat.com>
> > > > > Signed-off-by: Jason Wang <jasowang@...hat.com>
> > > > > ---
> > > > >    drivers/vhost/vhost.c | 8 ++++++--
> > > > >    1 file changed, 6 insertions(+), 2 deletions(-)
> > > > > 
> > > > > diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
> > > > > index d643260..9f11838 100644
> > > > > --- a/drivers/vhost/vhost.c
> > > > > +++ b/drivers/vhost/vhost.c
> > > > > @@ -2241,11 +2241,15 @@ bool vhost_vq_avail_empty(struct vhost_dev *dev, struct vhost_virtqueue *vq)
> > > > >    	__virtio16 avail_idx;
> > > > >    	int r;
> > > > > +	if (vq->avail_idx != vq->last_avail_idx)
> > > > > +		return false;
> > > > > +
> > > > >    	r = vhost_get_user(vq, avail_idx, &vq->avail->idx);
> > > > > -	if (r)
> > > > > +	if (unlikely(r))
> > > > >    		return false;
> > > > > +	vq->avail_idx = vhost16_to_cpu(vq, avail_idx);
> > > > > -	return vhost16_to_cpu(vq, avail_idx) == vq->avail_idx;
> > > > > +	return vq->avail_idx == vq->last_avail_idx;
> > > > >    }
> > > > >    EXPORT_SYMBOL_GPL(vhost_vq_avail_empty);
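
[For reference, vhost_vq_avail_empty() as it reads with this patch applied,
reconstructed from the hunk above; the comments are editorial annotations,
not part of the original source:]

bool vhost_vq_avail_empty(struct vhost_dev *dev, struct vhost_virtqueue *vq)
{
	__virtio16 avail_idx;
	int r;

	/* Fast path: the cached avail index already shows unconsumed
	 * buffers, so no userspace access is needed. */
	if (vq->avail_idx != vq->last_avail_idx)
		return false;

	/* Slow path: re-read the avail index from the guest-visible ring
	 * and refresh the cache. */
	r = vhost_get_user(vq, avail_idx, &vq->avail->idx);
	if (unlikely(r))
		return false;
	vq->avail_idx = vhost16_to_cpu(vq, avail_idx);

	/* Empty iff nothing was added since we last consumed. */
	return vq->avail_idx == vq->last_avail_idx;
}
EXPORT_SYMBOL_GPL(vhost_vq_avail_empty);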
> > > > So again, this did not address the issue I pointed out in v1:
> > > > if we have 1 buffer in the RX queue and it is not enough to store
> > > > the whole packet, vhost_vq_avail_empty() returns false, and then
> > > > we re-read the descriptors again and again.
> > > > 
> > > > You have saved a single index access, but not the more
> > > > expensive descriptor access.
> > > It looks like it doesn't: if I understand the code correctly, in this
> > > case get_rx_bufs() will return zero, and we will try to enable the rx
> > > kick and exit the loop.
> > > 
> > > Thanks
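
[A sketch of the rx-side handling Jason refers to, based on handle_rx() in
drivers/vhost/net.c of this era; simplified and not verbatim:]

	/* In the handle_rx() loop: gather enough buffers for one packet. */
	headcount = get_rx_bufs(vq, vq->heads, vhost_len, &in,
				vq_log, &log,
				likely(mergeable) ? UIO_MAXIOV : 1);
	if (unlikely(headcount < 0))
		goto out;
	/* Not enough buffers for the whole packet: get_rx_bufs() returns
	 * zero, so re-enable the rx kick and stop until the guest
	 * refills the ring. */
	if (!headcount) {
		if (unlikely(vhost_enable_notify(&net->dev, vq))) {
			/* Buffers were added meanwhile: check again. */
			vhost_disable_notify(&net->dev, vq);
			continue;
		}
		goto out;
	}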
> > I mean this:
> > 
> >                  while (vhost_can_busy_poll(vq->dev, endtime) &&
> >                         vhost_vq_avail_empty(vq->dev, vq))
> >                          cpu_relax();
> >                  preempt_enable();
> >                  r = vhost_get_vq_desc(vq, vq->iov, ARRAY_SIZE(vq->iov),
> >                                        out_num, in_num, NULL, NULL);
> > 
> > 
> > vhost_vq_avail_empty() returns false, so we break out of the loop
> > and call vhost_get_vq_desc().
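
[For context, the helper this loop lives in, based on
vhost_net_tx_get_vq_desc() in drivers/vhost/net.c; simplified, comments
editorial:]

static int vhost_net_tx_get_vq_desc(struct vhost_net *net,
				    struct vhost_virtqueue *vq,
				    struct iovec iov[], unsigned int iov_size,
				    unsigned int *out_num, unsigned int *in_num)
{
	unsigned long endtime;
	int r = vhost_get_vq_desc(vq, vq->iov, ARRAY_SIZE(vq->iov),
				  out_num, in_num, NULL, NULL);

	/* Ring looked empty and busy polling is enabled: spin until a
	 * buffer shows up or the time budget runs out, then retry. */
	if (r == vq->num && vq->busyloop_timeout) {
		preempt_disable();
		endtime = busy_clock() + vq->busyloop_timeout;
		while (vhost_can_busy_poll(vq->dev, endtime) &&
		       vhost_vq_avail_empty(vq->dev, vq))
			cpu_relax();
		preempt_enable();
		r = vhost_get_vq_desc(vq, vq->iov, ARRAY_SIZE(vq->iov),
				      out_num, in_num, NULL, NULL);
	}

	return r;
}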
> > 
> > 
> 
> But this is the code for polling the tx vq, not the rx vq, I think?
> 
> Thanks

Oh, right.
I'll re-read this.


-- 
MST
