Message-ID: <9ac9fc4b-5c39-2503-dfbb-660a7bdcfbfd@redhat.com>
Date: Wed, 29 May 2019 11:22:40 +0800
From: Jason Wang <jasowang@...hat.com>
To: Stefano Garzarella <sgarzare@...hat.com>, netdev@...r.kernel.org
Cc: linux-kernel@...r.kernel.org,
virtualization@...ts.linux-foundation.org, kvm@...r.kernel.org,
Stefan Hajnoczi <stefanha@...hat.com>,
"David S. Miller" <davem@...emloft.net>,
"Michael S . Tsirkin" <mst@...hat.com>
Subject: Re: [PATCH 3/4] vsock/virtio: fix flush of works during the .remove()
On 2019/5/28 6:56 PM, Stefano Garzarella wrote:
> We flush all pending works before calling vdev->config->reset(vdev),
> but other works can still be queued before vdev->config->del_vqs(vdev),
> so we add another flush after it to avoid a use-after-free.
>
> Suggested-by: Michael S. Tsirkin <mst@...hat.com>
> Signed-off-by: Stefano Garzarella <sgarzare@...hat.com>
> ---
> net/vmw_vsock/virtio_transport.c | 23 +++++++++++++++++------
> 1 file changed, 17 insertions(+), 6 deletions(-)
>
> diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
> index e694df10ab61..ad093ce96693 100644
> --- a/net/vmw_vsock/virtio_transport.c
> +++ b/net/vmw_vsock/virtio_transport.c
> @@ -660,6 +660,15 @@ static int virtio_vsock_probe(struct virtio_device *vdev)
> return ret;
> }
>
> +static void virtio_vsock_flush_works(struct virtio_vsock *vsock)
> +{
> + flush_work(&vsock->loopback_work);
> + flush_work(&vsock->rx_work);
> + flush_work(&vsock->tx_work);
> + flush_work(&vsock->event_work);
> + flush_work(&vsock->send_pkt_work);
> +}
> +
> static void virtio_vsock_remove(struct virtio_device *vdev)
> {
> struct virtio_vsock *vsock = vdev->priv;
> @@ -668,12 +677,6 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
> mutex_lock(&the_virtio_vsock_mutex);
> the_virtio_vsock = NULL;
>
> - flush_work(&vsock->loopback_work);
> - flush_work(&vsock->rx_work);
> - flush_work(&vsock->tx_work);
> - flush_work(&vsock->event_work);
> - flush_work(&vsock->send_pkt_work);
> -
> /* Reset all connected sockets when the device disappears */
> vsock_for_each_connected_socket(virtio_vsock_reset_sock);
>
> @@ -690,6 +693,9 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
> vsock->event_run = false;
> mutex_unlock(&vsock->event_lock);
>
> + /* Flush all pending works */
> + virtio_vsock_flush_works(vsock);
> +
> /* Flush all device writes and interrupts, device will not use any
> * more buffers.
> */
> @@ -726,6 +732,11 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
> /* Delete virtqueues and flush outstanding callbacks if any */
> vdev->config->del_vqs(vdev);
>
> + /* Other works can be queued before 'config->del_vqs()', so we flush
> + * all works before freeing the vsock object to avoid a use-after-free.
> + */
> + virtio_vsock_flush_works(vsock);
Some questions after a quick glance:

1) It looks to me that work could still be queued from the
vsock_transport_cancel_pkt() path. Is that synchronized here? (See the
first sketch after these questions.)

2) If we decide to flush after del_vqs(), are tx_run/rx_run/event_run
still needed? It looks to me we've already done everything needed,
except that we still need to flush rx_work at the end, since
send_pkt_work can requeue rx_work. (See the second sketch below.)
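For reference, here is roughly the queueing path I mean in 1),
simplified from my reading of virtio_transport.c (a sketch, not the
exact code):

/* Simplified sketch of virtio_transport_cancel_pkt(): after dropping a
 * socket's queued replies it may kick rx_work again, so this path can
 * queue work concurrently with .remove().
 */
static int virtio_transport_cancel_pkt(struct vsock_sock *vsk)
{
	struct virtio_vsock *vsock = virtio_vsock_get();
	int cnt = 0;

	/* ... drop vsk's packets from vsock->send_pkt_list,
	 * counting cancelled replies in 'cnt' ...
	 */

	if (cnt) {
		/* Replies were dropped, so rx may have been throttled
		 * on queued_replies; requeue the rx worker.
		 */
		queue_work(virtio_vsock_workqueue, &vsock->rx_work);
	}

	return 0;
}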
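And this is the tx_run/rx_run/event_run pattern that 2) is about,
assuming the usual worker shape in this driver (again just a sketch):

/* Sketch of the rx worker: the rx_run flag, checked under rx_lock,
 * lets .remove() stop the worker from touching the virtqueue once
 * the flag has been cleared.
 */
static void virtio_transport_rx_work(struct work_struct *work)
{
	struct virtio_vsock *vsock =
		container_of(work, struct virtio_vsock, rx_work);

	mutex_lock(&vsock->rx_lock);
	if (!vsock->rx_run)
		goto out;	/* device is going away, bail out */

	/* ... process used buffers from the rx virtqueue ... */

out:
	mutex_unlock(&vsock->rx_lock);
}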
Thanks
> +
> kfree(vsock);
> mutex_unlock(&the_virtio_vsock_mutex);
> }