Message-ID: <20190701151113.GE11900@stefanha-x1.localdomain>
Date: Mon, 1 Jul 2019 16:11:13 +0100
From: Stefan Hajnoczi <stefanha@...il.com>
To: Stefano Garzarella <sgarzare@...hat.com>
Cc: netdev@...r.kernel.org, kvm@...r.kernel.org,
"Michael S. Tsirkin" <mst@...hat.com>,
linux-kernel@...r.kernel.org,
virtualization@...ts.linux-foundation.org,
Stefan Hajnoczi <stefanha@...hat.com>,
"David S. Miller" <davem@...emloft.net>
Subject: Re: [PATCH v2 0/3] vsock/virtio: several fixes in the .probe() and
.remove()
On Fri, Jun 28, 2019 at 02:36:56PM +0200, Stefano Garzarella wrote:
> During the review of "[PATCH] vsock/virtio: Initialize core virtio vsock
> before registering the driver", Stefan pointed out some possible issues
> in the .probe() and .remove() callbacks of the virtio-vsock driver.
>
> This series tries to solve these issues:
> - Patch 1 adds RCU critical sections to avoid use-after-free of the
>   'the_virtio_vsock' pointer.
> - Patch 2 stops the workers before calling vdev->config->reset(vdev), to
>   be sure that no one is still accessing the device.
> - Patch 3 moves the flushing of works to the end of .remove() to avoid
>   use-after-free of the 'vsock' object.
>
> v2:
> - Patch 1: use RCU to protect 'the_virtio_vsock' pointer
> - Patch 2: no changes
> - Patch 3: flush works only at the end of .remove()
> - Removed patch 4 because virtqueue_detach_unused_buf() returns all of the
>   allocated buffers.
>
> v1: https://patchwork.kernel.org/cover/10964733/
This looks good to me.
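For anyone skimming the thread, the lifecycle that patch 1 establishes is
roughly the classic RCU publish/unpublish pattern sketched below. This is an
illustrative sketch rather than the patch itself, and
virtio_transport_send_pkt_sketch() just stands in for the real data-path
entry points:

  static struct virtio_vsock __rcu *the_virtio_vsock;

  /* Data path: readers only touch the device inside an RCU read-side
   * critical section, so .remove() cannot free it underneath them. */
  static int virtio_transport_send_pkt_sketch(void)
  {
          struct virtio_vsock *vsock;
          int ret = 0;

          rcu_read_lock();
          vsock = rcu_dereference(the_virtio_vsock);
          if (!vsock)
                  ret = -ENODEV;
          /* else: queue the packet on vsock's send worker as usual */
          rcu_read_unlock();
          return ret;
  }

  /* .probe(): publish the pointer only once the device is fully set up. */
  rcu_assign_pointer(the_virtio_vsock, vsock);

  /* .remove(): unpublish, then wait for all readers to drain before
   * resetting the device and freeing 'vsock'. */
  rcu_assign_pointer(the_virtio_vsock, NULL);
  synchronize_rcu();

With that ordering in place, patches 2 and 3 only have to deal with the
driver's own workers, which is what stopping them before reset() and flushing
them at the very end of .remove() takes care of.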
Did you run any stress tests? For example, an SMP guest constantly
connecting and sending packets, together with a script that hot-plugs and
unplugs vhost-vsock-pci on the host side.
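Something along these lines on the guest side is what I have in mind
(untested sketch; port 1234 is an arbitrary choice and must match whatever
the host-side listener uses):

  #include <string.h>
  #include <unistd.h>
  #include <sys/socket.h>
  #include <linux/vm_sockets.h>

  /* Guest-side stress loop: keep connecting to the host and pushing data so
   * the virtio-vsock data path stays busy while vhost-vsock-pci is being
   * unplugged/replugged on the host. Run several instances in parallel on an
   * SMP guest to exercise the concurrent paths. */
  int main(void)
  {
          char buf[4096];

          memset(buf, 'x', sizeof(buf));
          for (;;) {
                  struct sockaddr_vm addr = {
                          .svm_family = AF_VSOCK,
                          .svm_cid = VMADDR_CID_HOST,
                          .svm_port = 1234,  /* arbitrary example port */
                  };
                  int fd = socket(AF_VSOCK, SOCK_STREAM, 0);

                  if (fd < 0)
                          continue;
                  if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0)
                          (void)write(fd, buf, sizeof(buf));
                  close(fd);
          }
  }

On the host side, looping device_del/device_add of the vhost-vsock-pci device
from the QEMU monitor should be enough to hit the .remove()/.probe() paths
repeatedly.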
Stefan