Message-ID: <20190529105832.oz3sagbne5teq3nt@steredhat>
Date: Wed, 29 May 2019 12:58:32 +0200
From: Stefano Garzarella <sgarzare@...hat.com>
To: Jason Wang <jasowang@...hat.com>
Cc: netdev@...r.kernel.org, linux-kernel@...r.kernel.org,
virtualization@...ts.linux-foundation.org, kvm@...r.kernel.org,
Stefan Hajnoczi <stefanha@...hat.com>,
"David S. Miller" <davem@...emloft.net>,
"Michael S . Tsirkin" <mst@...hat.com>
Subject: Re: [PATCH 3/4] vsock/virtio: fix flush of works during the .remove()
On Wed, May 29, 2019 at 11:22:40AM +0800, Jason Wang wrote:
>
> > On 2019/5/28 6:56 PM, Stefano Garzarella wrote:
> > We flush all pending works before calling vdev->config->reset(vdev),
> > but other works can be queued before vdev->config->del_vqs(vdev), so we
> > add another flush after it to avoid a use after free.
> >
> > Suggested-by: Michael S. Tsirkin <mst@...hat.com>
> > Signed-off-by: Stefano Garzarella <sgarzare@...hat.com>
> > ---
> > net/vmw_vsock/virtio_transport.c | 23 +++++++++++++++++------
> > 1 file changed, 17 insertions(+), 6 deletions(-)
> >
> > diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
> > index e694df10ab61..ad093ce96693 100644
> > --- a/net/vmw_vsock/virtio_transport.c
> > +++ b/net/vmw_vsock/virtio_transport.c
> > @@ -660,6 +660,15 @@ static int virtio_vsock_probe(struct virtio_device *vdev)
> > return ret;
> > }
> > +static void virtio_vsock_flush_works(struct virtio_vsock *vsock)
> > +{
> > + flush_work(&vsock->loopback_work);
> > + flush_work(&vsock->rx_work);
> > + flush_work(&vsock->tx_work);
> > + flush_work(&vsock->event_work);
> > + flush_work(&vsock->send_pkt_work);
> > +}
> > +
> > static void virtio_vsock_remove(struct virtio_device *vdev)
> > {
> > struct virtio_vsock *vsock = vdev->priv;
> > @@ -668,12 +677,6 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
> > mutex_lock(&the_virtio_vsock_mutex);
> > the_virtio_vsock = NULL;
> > - flush_work(&vsock->loopback_work);
> > - flush_work(&vsock->rx_work);
> > - flush_work(&vsock->tx_work);
> > - flush_work(&vsock->event_work);
> > - flush_work(&vsock->send_pkt_work);
> > -
> > /* Reset all connected sockets when the device disappear */
> > vsock_for_each_connected_socket(virtio_vsock_reset_sock);
> > @@ -690,6 +693,9 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
> > vsock->event_run = false;
> > mutex_unlock(&vsock->event_lock);
> > + /* Flush all pending works */
> > + virtio_vsock_flush_works(vsock);
> > +
> > /* Flush all device writes and interrupts, device will not use any
> > * more buffers.
> > */
> > @@ -726,6 +732,11 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
> > /* Delete virtqueues and flush outstanding callbacks if any */
> > vdev->config->del_vqs(vdev);
> > + /* Other works can be queued before 'config->del_vqs()', so we flush
> > + * all works before freeing the vsock object to avoid use after free.
> > + */
> > + virtio_vsock_flush_works(vsock);
>
>
> Some questions after a quick glance:
>
> 1) It looks to me that the work could be queued from the path of
> vsock_transport_cancel_pkt() . Is that synchronized here?
>
Both virtio_transport_send_pkt() and vsock_transport_cancel_pkt() can
queue work from the upper layer (socket).
Setting the_virtio_vsock to NULL should synchronize them, but after a more
careful look a rare issue could happen:
we set the_virtio_vsock to NULL at the start of .remove() and free the
object it points to at the end of .remove(), so
virtio_transport_send_pkt() or vsock_transport_cancel_pkt() may still be
running and access the object we have just freed.
Should I use something like RCU to prevent this issue?
virtio_transport_send_pkt() and vsock_transport_cancel_pkt()
{
	rcu_read_lock();
	vsock = rcu_dereference(the_virtio_vsock);
	...
	rcu_read_unlock();
}

virtio_vsock_remove()
{
	rcu_assign_pointer(the_virtio_vsock, NULL);
	synchronize_rcu();
	...
	free(vsock);
}
Could there be a better approach?
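For reference, the read side could then look roughly like this (just a
sketch, assuming the_virtio_vsock becomes an RCU-protected pointer; the
existing queueing logic is elided):

static int virtio_transport_send_pkt(struct virtio_vsock_pkt *pkt)
{
	struct virtio_vsock *vsock;
	int len = pkt->len;

	rcu_read_lock();
	vsock = rcu_dereference(the_virtio_vsock);
	if (!vsock) {
		/* device is gone: drop the packet */
		virtio_transport_free_pkt(pkt);
		len = -ENODEV;
		goto out_rcu;
	}

	/* ... add pkt to vsock->send_pkt_queue and queue
	 * vsock->send_pkt_work, as the current code does ...
	 */

out_rcu:
	rcu_read_unlock();
	return len;
}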
> 2) If we decide to flush after del_vqs(), is tx_run/rx_run/event_run still
> needed? It looks to me we've already done that, except that we need to flush
> rx_work at the end since send_pkt_work can requeue rx_work.
The main reason for tx_run/rx_run/event_run is to prevent a worker
function from running while we are calling config->reset().
E.g. if an interrupt comes between virtio_vsock_flush_works() and
config->reset(), it can queue new works that can access the device while
we are in config->reset().
IMHO they are still needed.
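For example, the rx worker bails out when rx_run has been cleared; a
simplified sketch of that pattern (not the exact upstream code):

static void virtio_transport_rx_work(struct work_struct *work)
{
	struct virtio_vsock *vsock =
		container_of(work, struct virtio_vsock, rx_work);

	mutex_lock(&vsock->rx_lock);

	/* if .remove() already cleared rx_run, don't touch the RX virtqueue */
	if (!vsock->rx_run)
		goto out;

	/* ... virtqueue_get_buf() loop, deliver the received packets ... */

out:
	mutex_unlock(&vsock->rx_lock);
}

So clearing rx_run/tx_run/event_run under the respective locks guarantees
that any work queued later exits early without touching the device while
we are in config->reset().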
What do you think?
Thanks for your questions,
Stefano