Message-ID: <CACGkMEuNZzDVccF_yiinJowrfGgWRAR_-ZvOqNmFz=cLVKN-+w@mail.gmail.com>
Date: Tue, 3 Feb 2026 12:05:07 +0800
From: Jason Wang <jasowang@...hat.com>
To: Eugenio Perez Martin <eperezma@...hat.com>
Cc: "Michael S . Tsirkin" <mst@...hat.com>, Xuan Zhuo <xuanzhuo@...ux.alibaba.com>,
Cindy Lu <lulu@...hat.com>, Laurent Vivier <lvivier@...hat.com>,
Stefano Garzarella <sgarzare@...hat.com>, linux-kernel@...r.kernel.org,
Maxime Coquelin <mcoqueli@...hat.com>, Yongji Xie <xieyongji@...edance.com>,
virtualization@...ts.linux.dev
Subject: Re: [PATCH 1/6] vduse: ensure vq->ready access is smp safe
On Fri, Jan 30, 2026 at 3:56 PM Eugenio Perez Martin
<eperezma@...hat.com> wrote:
>
> On Fri, Jan 30, 2026 at 3:18 AM Jason Wang <jasowang@...hat.com> wrote:
> >
> > On Thu, Jan 29, 2026 at 2:21 PM Eugenio Perez Martin
> > <eperezma@...hat.com> wrote:
> > >
> > > On Thu, Jan 29, 2026 at 2:17 AM Jason Wang <jasowang@...hat.com> wrote:
> > > >
> > > > On Wed, Jan 28, 2026 at 8:45 PM Eugenio Pérez <eperezma@...hat.com> wrote:
> > > > >
> > > > > The vduse_vdpa_set_vq_ready can be called in the lifetime of the device
> > > > > well after initial setup, and the device can read it afterwards.
> > > > >
> > > > > Ensure that reads and writes to vq->ready are SMP safe so that the
> > > > > caller can trust that virtqueue kicks and calls behave as expected
> > > > > immediately after the operation returns.
> > > > >
> > > > > Signed-off-by: Eugenio Pérez <eperezma@...hat.com>
> > > > > ---
> > > > > drivers/vdpa/vdpa_user/vduse_dev.c | 34 +++++++++++++++++++++++-------
> > > > > 1 file changed, 26 insertions(+), 8 deletions(-)
> > > > >
> > > > > diff --git a/drivers/vdpa/vdpa_user/vduse_dev.c b/drivers/vdpa/vdpa_user/vduse_dev.c
> > > > > index 73d1d517dc6c..a4963aaf9332 100644
> > > > > --- a/drivers/vdpa/vdpa_user/vduse_dev.c
> > > > > +++ b/drivers/vdpa/vdpa_user/vduse_dev.c
> > > > > @@ -460,6 +460,24 @@ static __poll_t vduse_dev_poll(struct file *file, poll_table *wait)
> > > > > return mask;
> > > > > }
> > > > >
> > > > > +static bool vduse_vq_get_ready(const struct vduse_virtqueue *vq)
> > > > > +{
> > > > > + /*
> > > > > + * Paired with vduse_vq_set_ready smp_store, as the driver may modify
> > > > > + * it while the VDUSE instance is reading it.
> > > > > + */
> > > > > + return smp_load_acquire(&vq->ready);
> > > > > +}
> > > > > +
> > > > > +static void vduse_vq_set_ready(struct vduse_virtqueue *vq, bool ready)
> > > > > +{
> > > > > + /*
> > > > > + * Paired with vduse_vq_get_ready smp_load, as the driver may modify
> > > > > + * it while the VDUSE instance is reading it.
> > > > > + */
> > > > > + smp_store_release(&vq->ready, ready);
> > > >
> > > > Assuming this is not used in the datapath, I wonder if we can simply
> > > > use vq_lock mutex.
> > > >
> > >
> > > The functions vduse_vq_set/get_ready are not in the datapath, but
> > > vduse_vq_kick and vduse_vq_signal_irqfd are. I'm ok with switching
> > > to vq_mutex if you want, though; its cost may even be comparable to
> > > that of the ioctls or eventfd signaling.
> >
> > I'd like to use mutex for simplicity.
> >
>
> I cannot move it to a mutex, as we need to take it in the critical
> sections of the kick_lock and irq_lock spinlocks.
>
> I can move it to a spinlock, but that seems more complicated to me: we
> need to make sure kick_lock and irq_lock are always taken in the same
> order relative to the new ready_lock so as not to create deadlocks, and
> that ready_lock only ever protects the ready boolean. But sure, I'll
> send the spinlock version for V2.
Thinking about this, I'm not sure I understand the issue. Maybe you
can give me an example of the race.

Thanks
>