Message-ID: <20190531081824.p6ylsgvkrbckhqpx@steredhat>
Date:   Fri, 31 May 2019 10:18:24 +0200
From:   Stefano Garzarella <sgarzare@...hat.com>
To:     Jason Wang <jasowang@...hat.com>
Cc:     "Michael S . Tsirkin" <mst@...hat.com>, netdev@...r.kernel.org,
        linux-kernel@...r.kernel.org,
        virtualization@...ts.linux-foundation.org, kvm@...r.kernel.org,
        Stefan Hajnoczi <stefanha@...hat.com>,
        "David S. Miller" <davem@...emloft.net>
Subject: Re: [PATCH 3/4] vsock/virtio: fix flush of works during the .remove()

On Thu, May 30, 2019 at 07:59:14PM +0800, Jason Wang wrote:
> 
> > On 2019/5/30 6:10 PM, Stefano Garzarella wrote:
> > On Thu, May 30, 2019 at 05:46:18PM +0800, Jason Wang wrote:
> > > On 2019/5/29 6:58 PM, Stefano Garzarella wrote:
> > > > On Wed, May 29, 2019 at 11:22:40AM +0800, Jason Wang wrote:
> > > > > On 2019/5/28 6:56 PM, Stefano Garzarella wrote:
> > > > > > @@ -690,6 +693,9 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
> > > > > >     	vsock->event_run = false;
> > > > > >     	mutex_unlock(&vsock->event_lock);
> > > > > > +	/* Flush all pending works */
> > > > > > +	virtio_vsock_flush_works(vsock);
> > > > > > +
> > > > > >     	/* Flush all device writes and interrupts, device will not use any
> > > > > >     	 * more buffers.
> > > > > >     	 */
> > > > > > @@ -726,6 +732,11 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
> > > > > >     	/* Delete virtqueues and flush outstanding callbacks if any */
> > > > > >     	vdev->config->del_vqs(vdev);
> > > > > > +	/* Other works can be queued before 'config->del_vqs()', so we flush
> > > > > > +	 * all works before freeing the vsock object to avoid a use after free.
> > > > > > +	 */
> > > > > > +	virtio_vsock_flush_works(vsock);
> > > > > Some questions after a quick glance:
> > > > > 
> > > > > 1) It looks to me that the work could be queued from the path of
> > > > > vsock_transport_cancel_pkt(). Is that synchronized here?
> > > > > 
> > > > Both virtio_transport_send_pkt() and vsock_transport_cancel_pkt() can
> > > > queue work from the upper layer (socket).
> > > > 
> > > > Setting the_virtio_vsock to NULL should synchronize, but after a careful look
> > > > a rare issue could happen:
> > > > we set the_virtio_vsock to NULL at the start of .remove() and we
> > > > free the object it points to at the end of .remove(), so
> > > > virtio_transport_send_pkt() or vsock_transport_cancel_pkt() may still be
> > > > running, accessing the object that we have already freed.
> > > 
> > > Yes, that's my point.
> > > 
> > > 
> > > > Should I use something like RCU to prevent this issue?
> > > > 
> > > >       virtio_transport_send_pkt() and vsock_transport_cancel_pkt()
> > > >       {
> > > >           rcu_read_lock();
> > > >           vsock = rcu_dereference(the_virtio_vsock);
> > > 
> > > RCU is probably the way to go (like what vhost_transport_send_pkt() did).
> > > 
> > Okay, I'm going this way.
> > 
> > > >           ...
> > > >           rcu_read_unlock();
> > > >       }
> > > > 
> > > >       virtio_vsock_remove()
> > > >       {
> > > >           rcu_assign_pointer(the_virtio_vsock, NULL);
> > > >           synchronize_rcu();
> > > > 
> > > >           ...
> > > > 
> > > >           free(vsock);
> > > >       }
> > > > 
> > > > Could there be a better approach?
> > > > 
> > > > 
> > > > > 2) If we decide to flush after del_vqs(), is tx_run/rx_run/event_run still
> > > > > needed? It looks to me we have already done that, except that we need to flush
> > > > > rx_work at the end, since send_pkt_work can requeue rx_work.
> > > > The main reason for tx_run/rx_run/event_run is to prevent a worker
> > > > function from running while we are calling config->reset().
> > > > 
> > > > E.g. if an interrupt comes between virtio_vsock_flush_works() and
> > > > config->reset(), it can queue new works that can access the device while
> > > > we are in config->reset().
> > > > 
> > > > IMHO they are still needed.
> > > > 
> > > > What do you think?
> > > 
> > > I mean, could we simply flush once after reset, without the tx_run/rx_run
> > > tricks?
> > > 
> > > reset();
> > > 
> > > virtio_vsock_flush_works();
> > > 
> > > virtio_vsock_free_buf();
> > My only doubt is:
> > is it safe to call config->reset() while a worker function may still be
> > accessing the device?
> > 
> > I had this doubt after reading Michael's advice [1] and looking at
> > virtnet_remove(), where these lines appear before the config->reset():
> > 
> > 	/* Make sure no work handler is accessing the device. */
> > 	flush_work(&vi->config_work);
> > 
> > Thanks,
> > Stefano
> > 
> > [1] https://lore.kernel.org/netdev/20190521055650-mutt-send-email-mst@kernel.org
> 
> 
> Good point. Then I agree with you. But if we can use RCU to detect the
> detach of the device from the socket for these paths, it would be even better.
> 

What about checking 'the_virtio_vsock' in the worker functions inside an RCU
read-side critical section?
In this way, I could remove the rx_run/tx_run/event_run flags.
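
To make it more concrete, here is a rough sketch of the idea (illustrative
only: the struct is reduced and the worker/remove bodies are placeholders,
but the names try to follow what we have in net/vmw_vsock/virtio_transport.c):

/* Sketch only: shows the RCU publish/unpublish pattern that would replace
 * the rx_run/tx_run/event_run flags.
 */
#include <linux/kernel.h>
#include <linux/mutex.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>
#include <linux/virtio.h>
#include <linux/workqueue.h>

struct virtio_vsock {
	struct virtio_device *vdev;
	struct work_struct rx_work;
	struct work_struct tx_work;
	struct work_struct event_work;
	struct work_struct send_pkt_work;
	/* ... virtqueues, packet lists, locks ... */
};

static struct virtio_vsock __rcu *the_virtio_vsock;
static DEFINE_MUTEX(the_virtio_vsock_mutex);

static void virtio_transport_rx_work(struct work_struct *work)
{
	struct virtio_vsock *vsock =
		container_of(work, struct virtio_vsock, rx_work);

	rcu_read_lock();
	/* Bail out if .remove() already unpublished the device; this is
	 * what would replace the rx_run check.  Note that nothing inside
	 * the read-side critical section is allowed to sleep.
	 */
	if (rcu_dereference(the_virtio_vsock) != vsock)
		goto out;

	/* ... drain the rx virtqueue ... */
out:
	rcu_read_unlock();
}

static void virtio_vsock_remove(struct virtio_device *vdev)
{
	struct virtio_vsock *vsock = vdev->priv;

	mutex_lock(&the_virtio_vsock_mutex);

	/* Unpublish the device and wait for all readers (send_pkt,
	 * cancel_pkt and the workers) before touching it further.
	 */
	rcu_assign_pointer(the_virtio_vsock, NULL);
	synchronize_rcu();

	/* ... reset the device, free the queued packets ... */

	/* Delete virtqueues and flush outstanding callbacks if any */
	vdev->config->del_vqs(vdev);

	/* del_vqs() callbacks may have queued works: flush once more
	 * before freeing the object.
	 */
	flush_work(&vsock->rx_work);
	flush_work(&vsock->tx_work);
	flush_work(&vsock->event_work);
	flush_work(&vsock->send_pkt_work);

	mutex_unlock(&the_virtio_vsock_mutex);

	kfree(vsock);
}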

Do you think it's cleaner?

Thank you very much,
Stefano
