Message-ID: <4c881585-8fee-0a53-865c-05d41ffb8ed1@redhat.com>
Date: Thu, 30 May 2019 19:59:14 +0800
From: Jason Wang <jasowang@...hat.com>
To: Stefano Garzarella <sgarzare@...hat.com>,
"Michael S . Tsirkin" <mst@...hat.com>
Cc: netdev@...r.kernel.org, linux-kernel@...r.kernel.org,
virtualization@...ts.linux-foundation.org, kvm@...r.kernel.org,
Stefan Hajnoczi <stefanha@...hat.com>,
"David S. Miller" <davem@...emloft.net>
Subject: Re: [PATCH 3/4] vsock/virtio: fix flush of works during the .remove()
On 2019/5/30 6:10 PM, Stefano Garzarella wrote:
> On Thu, May 30, 2019 at 05:46:18PM +0800, Jason Wang wrote:
>> On 2019/5/29 6:58 PM, Stefano Garzarella wrote:
>>> On Wed, May 29, 2019 at 11:22:40AM +0800, Jason Wang wrote:
>>>> On 2019/5/28 6:56 PM, Stefano Garzarella wrote:
>>>>> We flush all pending works before calling vdev->config->reset(vdev),
>>>>> but other works can still be queued before vdev->config->del_vqs(vdev),
>>>>> so we add another flush after it, to avoid a use after free.
>>>>>
>>>>> Suggested-by: Michael S. Tsirkin <mst@...hat.com>
>>>>> Signed-off-by: Stefano Garzarella <sgarzare@...hat.com>
>>>>> ---
>>>>> net/vmw_vsock/virtio_transport.c | 23 +++++++++++++++++------
>>>>> 1 file changed, 17 insertions(+), 6 deletions(-)
>>>>>
>>>>> diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
>>>>> index e694df10ab61..ad093ce96693 100644
>>>>> --- a/net/vmw_vsock/virtio_transport.c
>>>>> +++ b/net/vmw_vsock/virtio_transport.c
>>>>> @@ -660,6 +660,15 @@ static int virtio_vsock_probe(struct virtio_device *vdev)
>>>>> return ret;
>>>>> }
>>>>> +static void virtio_vsock_flush_works(struct virtio_vsock *vsock)
>>>>> +{
>>>>> + flush_work(&vsock->loopback_work);
>>>>> + flush_work(&vsock->rx_work);
>>>>> + flush_work(&vsock->tx_work);
>>>>> + flush_work(&vsock->event_work);
>>>>> + flush_work(&vsock->send_pkt_work);
>>>>> +}
>>>>> +
>>>>> static void virtio_vsock_remove(struct virtio_device *vdev)
>>>>> {
>>>>> struct virtio_vsock *vsock = vdev->priv;
>>>>> @@ -668,12 +677,6 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
>>>>> mutex_lock(&the_virtio_vsock_mutex);
>>>>> the_virtio_vsock = NULL;
>>>>> - flush_work(&vsock->loopback_work);
>>>>> - flush_work(&vsock->rx_work);
>>>>> - flush_work(&vsock->tx_work);
>>>>> - flush_work(&vsock->event_work);
>>>>> - flush_work(&vsock->send_pkt_work);
>>>>> -
>>>>> /* Reset all connected sockets when the device disappear */
>>>>> vsock_for_each_connected_socket(virtio_vsock_reset_sock);
>>>>> @@ -690,6 +693,9 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
>>>>> vsock->event_run = false;
>>>>> mutex_unlock(&vsock->event_lock);
>>>>> + /* Flush all pending works */
>>>>> + virtio_vsock_flush_works(vsock);
>>>>> +
>>>>> /* Flush all device writes and interrupts, device will not use any
>>>>> * more buffers.
>>>>> */
>>>>> @@ -726,6 +732,11 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
>>>>> /* Delete virtqueues and flush outstanding callbacks if any */
>>>>> vdev->config->del_vqs(vdev);
>>>>> + /* Other works can be queued before 'config->del_vqs()', so we flush
>>>>> + * all works before freeing the vsock object to avoid use after free.
>>>>> + */
>>>>> + virtio_vsock_flush_works(vsock);
>>>> Some questions after a quick glance:
>>>>
>>>> 1) It looks to me that the work could be queued from the path of
>>>> vsock_transport_cancel_pkt(). Is that synchronized here?
>>>>
>>> Both virtio_transport_send_pkt() and vsock_transport_cancel_pkt() can
>>> queue work from the upper layer (socket).
>>>
>>> Setting the_virtio_vsock to NULL should synchronize them, but after a
>>> more careful look a rare issue could happen:
>>> we set the_virtio_vsock to NULL at the start of .remove() and we free
>>> the object it points to at the end of .remove(), so
>>> virtio_transport_send_pkt() or vsock_transport_cancel_pkt() may still be
>>> running, accessing the object that we have already freed.
>>
>> Yes, that's my point.
>>
>>
>>> Should I use something like RCU to prevent this issue?
>>>
>>> virtio_transport_send_pkt() and vsock_transport_cancel_pkt()
>>> {
>>> rcu_read_lock();
>>> vsock = rcu_dereference(the_virtio_vsock);
>>
>> RCU is probably the way to go (like what vhost_transport_send_pkt() did).
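>>
>> From memory, the pattern there is roughly the following (just a sketch,
>> the actual vhost code has more details):
>>
>>     rcu_read_lock();
>>
>>     /* Returns NULL once the device has been detached */
>>     vsock = vhost_vsock_get(le64_to_cpu(pkt->hdr.dst_cid));
>>     if (!vsock) {
>>             rcu_read_unlock();
>>             virtio_transport_free_pkt(pkt);
>>             return -ENODEV;
>>     }
>>
>>     /* ... queue the packet and kick the vhost worker ... */
>>
>>     rcu_read_unlock();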
>>
> Okay, I'm going this way.
>
>>> ...
>>> rcu_read_unlock();
>>> }
>>>
>>> virtio_vsock_remove()
>>> {
>>> rcu_assign_pointer(the_virtio_vsock, NULL);
>>> synchronize_rcu();
>>>
>>> ...
>>>
>>> free(vsock);
>>> }
>>>
>>> Could there be a better approach?
>>>
>>>
>>>> 2) If we decide to flush after del_vqs(), are tx_run/rx_run/event_run still
>>>> needed? It looks to me we've already covered that, except that we need to
>>>> flush rx_work at the end since send_pkt_work can requeue rx_work.
>>> The main reason for tx_run/rx_run/event_run is to prevent a worker
>>> function from running while we are calling config->reset().
>>>
>>> E.g. if an interrupt comes between virtio_vsock_flush_works() and
>>> config->reset(), it can queue new works that can access the device while
>>> we are in config->reset().
>>>
>>> IMHO they are still needed.
>>>
>>> What do you think?
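>>>
>>> Just for reference, each worker already bails out early when its run flag
>>> has been cleared, so after the flush no worker can touch the device.
>>> Roughly (a sketch from memory, the actual code may differ a bit):
>>>
>>>     static void virtio_transport_rx_work(struct work_struct *work)
>>>     {
>>>             struct virtio_vsock *vsock =
>>>                     container_of(work, struct virtio_vsock, rx_work);
>>>
>>>             mutex_lock(&vsock->rx_lock);
>>>             if (!vsock->rx_run)
>>>                     goto out;
>>>
>>>             /* ... process the rx virtqueue ... */
>>>
>>>     out:
>>>             mutex_unlock(&vsock->rx_lock);
>>>     }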
>>
>> I mean, could we simply flush once after reset, without the tx_run/rx_run
>> tricks?
>>
>> reset();
>>
>> virtio_vsock_flush_work();
>>
>> virtio_vsock_free_buf();
> My only doubt is:
> is it safe to call config->reset() while a worker function could still
> access the device?
>
> I had this doubt after reading Michael's advice[1] and looking at
> virtnet_remove(), where there are these lines before the config->reset():
>
> /* Make sure no work handler is accessing the device. */
> flush_work(&vi->config_work);
>
> Thanks,
> Stefano
>
> [1] https://lore.kernel.org/netdev/20190521055650-mutt-send-email-mst@kernel.org
Good point. Then I agree with you. But if we can use RCU to detect the
detach of the device from the socket for these paths, it would be even
better.
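
Something like the following, maybe (completely untested, just to sketch
the idea; I'm reusing the_virtio_vsock and the existing function names,
the details will certainly need adjusting):

    static int virtio_transport_send_pkt(struct virtio_vsock_pkt *pkt)
    {
            struct virtio_vsock *vsock;
            int len = pkt->len;

            rcu_read_lock();
            vsock = rcu_dereference(the_virtio_vsock);
            if (!vsock) {
                    rcu_read_unlock();
                    virtio_transport_free_pkt(pkt);
                    return -ENODEV;
            }

            /* ... queue the packet and kick send_pkt_work as today ... */

            rcu_read_unlock();
            return len;
    }

    static void virtio_vsock_remove(struct virtio_device *vdev)
    {
            struct virtio_vsock *vsock = vdev->priv;

            mutex_lock(&the_virtio_vsock_mutex);
            rcu_assign_pointer(the_virtio_vsock, NULL);
            /* Wait for readers in send_pkt/cancel_pkt to finish */
            synchronize_rcu();

            /* ... reset, flush works, del_vqs as in this patch ... */

            mutex_unlock(&the_virtio_vsock_mutex);
            kfree(vsock);
    }
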
Thanks