Date:	Thu, 12 Nov 2015 15:28:19 +0100
From:	Greg Kurz <gkurz@...ux.vnet.ibm.com>
To:	"Michael S. Tsirkin" <mst@...hat.com>
Cc:	netdev@...r.kernel.org, linux-kernel@...r.kernel.org,
	kvm@...r.kernel.org, virtualization@...ts.linux-foundation.org
Subject: Re: [PATCH] vhost: move is_le setup to the backend

On Thu, 12 Nov 2015 15:46:30 +0200
"Michael S. Tsirkin" <mst@...hat.com> wrote:

> On Fri, Oct 30, 2015 at 12:42:35PM +0100, Greg Kurz wrote:
> > The vq->is_le field is used to fix endianness when accessing the vring via
> > the cpu_to_vhost16() and vhost16_to_cpu() helpers in the following cases:
> > 
> > 1) host is big endian and device is modern virtio
> > 
> > 2) host has cross-endian support and device is legacy virtio with an endianness
> >    different from the host's
> > 
> > Both cases rely on the VHOST_SET_FEATURES ioctl, but 2) also needs the
> > VHOST_SET_VRING_ENDIAN ioctl to be called by userspace. Since vq->is_le
> > is only needed when the backend is active, it was decided to set it at
> > backend start.
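
(Aside, for readers following along: below is a minimal user-space sketch of
the conversion rule described above. It is not the in-tree cpu_to_vhost16() /
vhost16_to_cpu() code; the sketch_* names and the struct are made up for
illustration. When is_le is set, the 16-bit vring field is read as little
endian; otherwise it is taken in host-native order, which covers the legacy
same-endian case.)

#include <endian.h>	/* le16toh(), htole16() */
#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-in for the per-virtqueue state; only is_le matters here. */
struct sketch_vq {
	int is_le;	/* the vring must be accessed as little endian */
};

/* Models the decision a vhost16_to_cpu()-style helper makes based on is_le. */
static uint16_t sketch_vhost16_to_cpu(const struct sketch_vq *vq, uint16_t raw)
{
	/* !is_le: legacy device with host-native byte order, no swap needed */
	return vq->is_le ? le16toh(raw) : raw;
}

int main(void)
{
	struct sketch_vq modern = { .is_le = 1 };	/* case 1): vring is LE */
	struct sketch_vq legacy = { .is_le = 0 };	/* legacy, host-endian vring */
	uint16_t raw = htole16(0x1234);	/* a 16-bit value laid out little endian */

	printf("modern: 0x%04x\n", sketch_vhost16_to_cpu(&modern, raw));
	printf("legacy: 0x%04x\n", sketch_vhost16_to_cpu(&legacy, raw));
	return 0;
}
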
> > 
> > This is currently done in vhost_init_used()->vhost_init_is_le() but it
> > obfuscates the core vhost code. This patch moves the is_le setup to a
> > dedicated function that is called from the backend code.
> > 
> > Note vhost_net is the only backend that can pass vq->private_data == NULL to
> > vhost_init_used(), hence the "if (sock)" branch.
> > 
> > No behaviour change.
> > 
> > Signed-off-by: Greg Kurz <gkurz@...ux.vnet.ibm.com>
> 
> I plan to look at this next week, busy with QEMU 2.5 now.
> 

I don't have any deadline for this since it is only a tentative cleanup.

Thanks.
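
For reference, here is how I read the is_le choice at backend start, written
as a stand-alone C sketch rather than the in-tree code (the sketch_* names and
the flags are made up; the rules are just a paraphrase of the changelog above):

#include <stdbool.h>

/* Hypothetical inputs mirroring what the changelog describes. */
struct sketch_vq_state {
	bool has_version_1;	/* VIRTIO_F_VERSION_1 negotiated via VHOST_SET_FEATURES */
	bool cross_endian;	/* built with cross-endian legacy support */
	bool user_says_le;	/* userspace picked LE via VHOST_SET_VRING_ENDIAN
				 * (defaults to host endianness if never called) */
	bool host_is_le;	/* host byte order, i.e. virtio_legacy_is_little_endian() */
};

/* What is_le boils down to when a backend starts. */
static bool sketch_is_le_at_backend_start(const struct sketch_vq_state *s,
					   bool backend_attached)
{
	if (!backend_attached)		/* vhost_net with sock == NULL */
		return s->host_is_le;	/* legacy default: host-native order */
	if (s->has_version_1)		/* case 1): modern virtio is always LE */
		return true;
	if (s->cross_endian)		/* case 2): honour userspace's choice */
		return s->user_says_le;
	return s->host_is_le;		/* plain legacy: host-native order */
}
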

> > ---
> >  drivers/vhost/net.c   |    6 ++++++
> >  drivers/vhost/scsi.c  |    3 +++
> >  drivers/vhost/test.c  |    2 ++
> >  drivers/vhost/vhost.c |   12 +++++++-----
> >  drivers/vhost/vhost.h |    1 +
> >  5 files changed, 19 insertions(+), 5 deletions(-)
> > 
> > diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
> > index 9eda69e40678..d6319cb2664c 100644
> > --- a/drivers/vhost/net.c
> > +++ b/drivers/vhost/net.c
> > @@ -917,6 +917,12 @@ static long vhost_net_set_backend(struct vhost_net *n, unsigned index, int fd)
> >  
> >  		vhost_net_disable_vq(n, vq);
> >  		vq->private_data = sock;
> > +
> > +		if (sock)
> > +			vhost_set_is_le(vq);
> > +		else
> > +			vq->is_le = virtio_legacy_is_little_endian();
> > +
> >  		r = vhost_init_used(vq);
> >  		if (r)
> >  			goto err_used;
> > diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c
> > index e25a23692822..e2644a301fa5 100644
> > --- a/drivers/vhost/scsi.c
> > +++ b/drivers/vhost/scsi.c
> > @@ -1276,6 +1276,9 @@ vhost_scsi_set_endpoint(struct vhost_scsi *vs,
> >  			vq = &vs->vqs[i].vq;
> >  			mutex_lock(&vq->mutex);
> >  			vq->private_data = vs_tpg;
> > +
> > +			vhost_set_is_le(vq);
> > +
> >  			vhost_init_used(vq);
> >  			mutex_unlock(&vq->mutex);
> >  		}
> > diff --git a/drivers/vhost/test.c b/drivers/vhost/test.c
> > index f2882ac98726..b1c7df502211 100644
> > --- a/drivers/vhost/test.c
> > +++ b/drivers/vhost/test.c
> > @@ -196,6 +196,8 @@ static long vhost_test_run(struct vhost_test *n, int test)
> >  		oldpriv = vq->private_data;
> >  		vq->private_data = priv;
> >  
> > +		vhost_set_is_le(vq);
> > +
> >  		r = vhost_init_used(&n->vqs[index]);
> >  
> >  		mutex_unlock(&vq->mutex);
> > diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
> > index eec2f11809ff..6be863dcbd13 100644
> > --- a/drivers/vhost/vhost.c
> > +++ b/drivers/vhost/vhost.c
> > @@ -113,6 +113,12 @@ static void vhost_init_is_le(struct vhost_virtqueue *vq)
> >  }
> >  #endif /* CONFIG_VHOST_CROSS_ENDIAN_LEGACY */
> >  
> > +void vhost_set_is_le(struct vhost_virtqueue *vq)
> > +{
> > +	vhost_init_is_le(vq);
> > +}
> > +EXPORT_SYMBOL_GPL(vhost_set_is_le);
> > +
> >  static void vhost_poll_func(struct file *file, wait_queue_head_t *wqh,
> >  			    poll_table *pt)
> >  {
> > @@ -1156,12 +1162,8 @@ int vhost_init_used(struct vhost_virtqueue *vq)
> >  {
> >  	__virtio16 last_used_idx;
> >  	int r;
> > -	if (!vq->private_data) {
> > -		vq->is_le = virtio_legacy_is_little_endian();
> > +	if (!vq->private_data)
> >  		return 0;
> > -	}
> > -
> > -	vhost_init_is_le(vq);
> >  
> >  	r = vhost_update_used_flags(vq);
> >  	if (r)
> > diff --git a/drivers/vhost/vhost.h b/drivers/vhost/vhost.h
> > index 4772862b71a7..8a62041959fe 100644
> > --- a/drivers/vhost/vhost.h
> > +++ b/drivers/vhost/vhost.h
> > @@ -162,6 +162,7 @@ bool vhost_enable_notify(struct vhost_dev *, struct vhost_virtqueue *);
> >  
> >  int vhost_log_write(struct vhost_virtqueue *vq, struct vhost_log *log,
> >  		    unsigned int log_num, u64 len);
> > +void vhost_set_is_le(struct vhost_virtqueue *vq);
> >  
> >  #define vq_err(vq, fmt, ...) do {                                  \
> >  		pr_debug(pr_fmt(fmt), ##__VA_ARGS__);       \
> 
