Open Source and information security mailing list archives
 
Message-ID: <52FC97E9.2080808@redhat.com>
Date:	Thu, 13 Feb 2014 18:01:13 +0800
From:	Jason Wang <jasowang@...hat.com>
To:	"Michael S. Tsirkin" <mst@...hat.com>,
	linux-kernel@...r.kernel.org, David Miller <davem@...emloft.net>
CC:	kvm@...r.kernel.org, virtio-dev@...ts.oasis-open.org,
	virtualization@...ts.linux-foundation.org, netdev@...r.kernel.org
Subject: Re: [virtio-dev] [PATCH net v2] vhost: fix a theoretical race in
 device cleanup

On 02/13/2014 05:45 PM, Michael S. Tsirkin wrote:
> vhost_zerocopy_callback accesses VQ right after it drops a ubuf
> reference.  In theory, this could race with device removal which waits
> on the ubuf kref, and crash on use after free.
>
> Do all accesses within an RCU read-side critical section, and synchronize
> on release.
>
> Since callbacks are always invoked from bh, synchronize_rcu_bh seems
> sufficient, and will let release complete a bit faster.
>
> Signed-off-by: Michael S. Tsirkin <mst@...hat.com>
> ---
>
> This was previously posted as part of a patch
> series, but it's really an independent fix.
> The race is theoretical, so I don't think it's needed for stable.
>
> changes from v1:
> 	fixed typo in commit log
>
>  drivers/vhost/net.c | 6 ++++++
>  1 file changed, 6 insertions(+)
>
> diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
> index b12176f..f1be80d 100644
> --- a/drivers/vhost/net.c
> +++ b/drivers/vhost/net.c
> @@ -308,6 +308,8 @@ static void vhost_zerocopy_callback(struct ubuf_info *ubuf, bool success)
>  	struct vhost_virtqueue *vq = ubufs->vq;
>  	int cnt;
>  
> +	rcu_read_lock_bh();
> +
>  	/* set len to mark this desc buffers done DMA */
>  	vq->heads[ubuf->desc].len = success ?
>  		VHOST_DMA_DONE_LEN : VHOST_DMA_FAILED_LEN;
> @@ -322,6 +324,8 @@ static void vhost_zerocopy_callback(struct ubuf_info *ubuf, bool success)
>  	 */
>  	if (cnt <= 1 || !(cnt % 16))
>  		vhost_poll_queue(&vq->poll);
> +
> +	rcu_read_unlock_bh();
>  }
>  
>  /* Expects to be always run from workqueue - which acts as
> @@ -804,6 +808,8 @@ static int vhost_net_release(struct inode *inode, struct file *f)
>  		fput(tx_sock->file);
>  	if (rx_sock)
>  		fput(rx_sock->file);
> +	/* Make sure no callbacks are outstanding */
> +	synchronize_rcu_bh();
>  	/* We do an extra flush before freeing memory,
>  	 * since jobs can re-queue themselves. */
>  	vhost_net_flush(n);

Acked-by: Jason Wang <jasowang@...hat.com>
