Message-ID: <CAPpAL=whwmwGqoxZ3M8y+fpnOXEHum+z9-Krs6ZOBLu4o8COZw@mail.gmail.com>
Date: Mon, 21 Jul 2025 22:52:25 +0800
From: Lei Yang <leiyang@...hat.com>
To: Nikolay Kuratov <kniv@...dex-team.ru>
Cc: Hillf Danton <hdanton@...a.com>, linux-kernel@...r.kernel.org,
virtualization@...ts.linux.dev, "Michael S. Tsirkin" <mst@...hat.com>,
Jason Wang <jasowang@...hat.com>, Eugenio Perez <eperezma@...hat.com>,
Andrey Ryabinin <arbn@...dex-team.com>, Andrey Smetanin <asmetanin@...dex-team.ru>
Subject: Re: [PATCH v2] vhost/net: Replace wait_queue with completion in ubufs reference

Tested V2 of this patch with the virtio-net regression test; everything
works fine.

Tested-by: Lei Yang <leiyang@...hat.com>

On Mon, Jul 21, 2025 at 12:13 AM Michael S. Tsirkin <mst@...hat.com> wrote:
>
> On Sat, Jul 19, 2025 at 07:03:23AM +0800, Hillf Danton wrote:
> > On Fri, 18 Jul 2025 14:03:55 +0300 Nikolay Kuratov wrote:
> > > When operating on struct vhost_net_ubuf_ref, the following execution
> > > sequence is theoretically possible:
> > > CPU0 is finalizing DMA operation                 CPU1 is doing VHOST_NET_SET_BACKEND
> > >                                                  // &ubufs->refcount == 2
> > > vhost_net_ubuf_put()                             vhost_net_ubuf_put_wait_and_free(oldubufs)
> > >                                                    vhost_net_ubuf_put_and_wait()
> > >                                                      vhost_net_ubuf_put()
> > >                                                        int r = atomic_sub_return(1, &ubufs->refcount);
> > >                                                        // r = 1
> > > int r = atomic_sub_return(1, &ubufs->refcount);
> > > // r = 0
> > >                                                      wait_event(ubufs->wait, !atomic_read(&ubufs->refcount));
> > >                                                      // no wait occurs here because condition is already true
> > >                                                    kfree(ubufs);
> > > if (unlikely(!r))
> > >         wake_up(&ubufs->wait); // use-after-free
> > >
> > > This leads to use-after-free on ubufs access. This happens because CPU1
> > > skips waiting for wake_up() when refcount is already zero.
> > >
> > > To prevent that, use a completion instead of a wait_queue as the ubufs
> > > notification mechanism. wait_for_completion() guarantees that the
> > > corresponding complete() call has finished before it returns, so it is
> > > safe to free ubufs afterwards.
> > >
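
For context, here is a minimal sketch of the completion-based scheme the
commit message describes. This is not the v2 patch itself; the struct and
helper names, the "released" field, and the choice of complete() over
complete_all() are illustrative assumptions, trimmed down from the existing
vhost_net_ubuf_* helpers:

/* Sketch only -- completion-based lifetime for the ubufs object,
 * assuming a single waiter, as in vhost_net_ubuf_put_and_wait(). */
#include <linux/atomic.h>
#include <linux/completion.h>
#include <linux/slab.h>

struct ubuf_ref_sketch {
	atomic_t refcount;
	struct completion released;	/* assumed field name */
};

static void ubuf_ref_init(struct ubuf_ref_sketch *ubufs)
{
	atomic_set(&ubufs->refcount, 1);
	init_completion(&ubufs->released);
}

static int ubuf_ref_put(struct ubuf_ref_sketch *ubufs)
{
	int r = atomic_sub_return(1, &ubufs->refcount);

	/* complete() updates 'released' under the completion's internal
	 * lock, so it has finished touching the object before
	 * wait_for_completion() on the other side can observe it and
	 * return. */
	if (unlikely(!r))
		complete(&ubufs->released);
	return r;
}

static void ubuf_ref_put_wait_and_free(struct ubuf_ref_sketch *ubufs)
{
	ubuf_ref_put(ubufs);
	wait_for_completion(&ubufs->released);
	kfree(ubufs);	/* safe: any racing complete() has finished */
}

Compared to wait_event(), this closes the window in the diagram above: even
when the waiter never sleeps, it cannot return and free ubufs while the
other CPU is still inside the notification call.
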
> > Alternatively, RCU helps.
> >
> > --- x/drivers/vhost/net.c
> > +++ y/drivers/vhost/net.c
> > @@ -96,6 +96,7 @@ struct vhost_net_ubuf_ref {
> >  	atomic_t refcount;
> >  	wait_queue_head_t wait;
> >  	struct vhost_virtqueue *vq;
> > +	struct rcu_head rcu;
> >  };
> >
> >  #define VHOST_NET_BATCH 64
> > @@ -247,9 +248,13 @@ vhost_net_ubuf_alloc(struct vhost_virtqu
> >
> >  static int vhost_net_ubuf_put(struct vhost_net_ubuf_ref *ubufs)
> >  {
> > -	int r = atomic_sub_return(1, &ubufs->refcount);
> > +	int r;
> > +
> > +	rcu_read_lock();
> > +	r = atomic_sub_return(1, &ubufs->refcount);
> >  	if (unlikely(!r))
> >  		wake_up(&ubufs->wait);
> > +	rcu_read_unlock();
> >  	return r;
> >  }
> >
> > @@ -262,7 +267,7 @@ static void vhost_net_ubuf_put_and_wait(
> >  static void vhost_net_ubuf_put_wait_and_free(struct vhost_net_ubuf_ref *ubufs)
> >  {
> >  	vhost_net_ubuf_put_and_wait(ubufs);
> > -	kfree(ubufs);
> > +	kfree_rcu(ubufs, rcu);
> >  }
> >
> >  static void vhost_net_clear_ubuf_info(struct vhost_net *n)
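
Read together, the two hunks make the decrement-and-wake path an RCU
read-side critical section and defer the free past every such section, so
the racy wake_up() can no longer touch freed memory. A generic sketch of
that pattern follows (not vhost code; the 'obj' type and helper names are
made up for illustration):

#include <linux/atomic.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>
#include <linux/wait.h>

struct obj {
	atomic_t refcount;
	wait_queue_head_t wait;
	struct rcu_head rcu;
};

static void obj_put(struct obj *o)
{
	rcu_read_lock();	/* pin o across the wake_up below */
	if (!atomic_sub_return(1, &o->refcount))
		wake_up(&o->wait);
	rcu_read_unlock();	/* o may be reclaimed only after this */
}

static void obj_put_wait_and_free(struct obj *o)
{
	obj_put(o);
	wait_event(o->wait, !atomic_read(&o->refcount));
	/* Even if wait_event() did not sleep, kfree_rcu() defers the
	 * actual kfree() until a grace period has elapsed, i.e. until
	 * every CPU still between rcu_read_lock() and rcu_read_unlock()
	 * in obj_put() has left its critical section. */
	kfree_rcu(o, rcu);
}

The cost is that the object's memory is held until the end of the grace
period instead of being freed immediately.
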
>
> I like that.
>
> --
> MST
>