Message-ID: <CACGkMEtKZE2NQMoY8quO=Y+g=b0fMrkzg64AZ3O5w901yU9bFQ@mail.gmail.com>
Date: Tue, 3 Feb 2026 11:27:17 +0800
From: Jason Wang <jasowang@...hat.com>
To: Zhang Tianci <zhangtianci.1997@...edance.com>
Cc: mst@...hat.com, xuanzhuo@...ux.alibaba.com, eperezma@...hat.com,
marco.crivellari@...e.com, anders.roxell@...aro.org,
virtualization@...ts.linux.dev, linux-kernel@...r.kernel.org,
stable@...r.kernel.org, Xie Yongji <xieyongji@...edance.com>
Subject: Re: [PATCH v2] vduse: Fix race in vduse_dev_msg_sync and vduse_dev_read_iter
On Tue, Feb 3, 2026 at 11:23 AM Jason Wang <jasowang@...hat.com> wrote:
>
> On Mon, Feb 2, 2026 at 11:13 AM Zhang Tianci
> <zhangtianci.1997@...edance.com> wrote:
> >
> > There is a race between vduse_dev_msg_sync() and vduse_dev_read_iter():
> >
> > vduse_dev_read_iter():
> >   lock(msg_lock);
> >   dequeue_msg(send_list);
> >   unlock(msg_lock);
> >                           vduse_dev_msg_sync():
> >                             wait_timeout() finishes
> >                             lock(msg_lock);
> >                             check msg->completed is false
> >                             list_del(msg); <- double list_del() crash!
> >
> > To fix this, ensure the vduse_dev_msg stays on send_list or recv_list
> > whenever msg_lock is dropped, so a concurrent timeout in
> > vduse_dev_msg_sync() only ever performs a single list_del().
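
The timeout path in vduse_dev_msg_sync() that performs the second
list_del() looks roughly like this (a from-memory paraphrase of
drivers/vdpa/vdpa_user/vduse_dev.c, trimmed, not verbatim):

	spin_lock(&dev->msg_lock);
	msg->req.request_id = dev->msg_unique++;
	vduse_enqueue_msg(&dev->send_list, msg);
	wake_up(&dev->waitq);
	spin_unlock(&dev->msg_lock);

	/* msg_lock dropped: vduse_dev_read_iter() may dequeue msg here. */
	ret = wait_event_killable_timeout(msg->waitq, msg->completed,
					  (long)dev->msg_timeout * HZ);

	spin_lock(&dev->msg_lock);
	if (!msg->completed)
		/* Second list_del() if the reader already unlinked msg. */
		list_del(&msg->list);
	spin_unlock(&dev->msg_lock);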
> >
> > Fixes: c8a6153b6c59 ("vduse: Introduce VDUSE - vDPA Device in Userspace")
> > Cc: stable@...r.kernel.org
> > Signed-off-by: Zhang Tianci <zhangtianci.1997@...edance.com>
> > Reviewed-by: Xie Yongji <xieyongji@...edance.com>
> > ---
> > v2:
> > - Rewrite commit message. [Michael]
> > - Add Fixes tag and cc stable email list. [Eugenio]
> > - Rewrite one comment. [Michael]
> >
> > v1: https://lkml.org/lkml/2026/1/30/323
> >
> > drivers/vdpa/vdpa_user/vduse_dev.c | 30 ++++++++++++++++++++++--------
> > 1 file changed, 22 insertions(+), 8 deletions(-)
> >
> > diff --git a/drivers/vdpa/vdpa_user/vduse_dev.c b/drivers/vdpa/vdpa_user/vduse_dev.c
> > index ae357d014564c..a70d0580d54e8 100644
> > --- a/drivers/vdpa/vdpa_user/vduse_dev.c
> > +++ b/drivers/vdpa/vdpa_user/vduse_dev.c
> > @@ -325,6 +325,7 @@ static ssize_t vduse_dev_read_iter(struct kiocb *iocb, struct iov_iter *to)
> > struct file *file = iocb->ki_filp;
> > struct vduse_dev *dev = file->private_data;
> > struct vduse_dev_msg *msg;
> > + struct vduse_dev_request req;
> > int size = sizeof(struct vduse_dev_request);
> > ssize_t ret;
> >
> > @@ -339,7 +340,7 @@ static ssize_t vduse_dev_read_iter(struct kiocb *iocb, struct iov_iter *to)
> >
> > ret = -EAGAIN;
> > if (file->f_flags & O_NONBLOCK)
> > - goto unlock;
> > + break;
> >
> > spin_unlock(&dev->msg_lock);
> > ret = wait_event_interruptible_exclusive(dev->waitq,
> > @@ -349,17 +350,30 @@ static ssize_t vduse_dev_read_iter(struct kiocb *iocb, struct iov_iter *to)
> >
> > spin_lock(&dev->msg_lock);
> > }
> > + if (!msg) {
> > + spin_unlock(&dev->msg_lock);
> > + return ret;
> > + }
>
> Nit: this check seems redundant. I'd suggest:
>
> 1) move the spin_unlock() before the check of file->f_flags & O_NONBLOCK
> 2) then simply do "return ret" when it's a nonblocking read.
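
To be concrete, something like this (untested sketch, assuming the
existing vduse_dequeue_msg() helper and keeping the wait loop as-is):

	while (!(msg = vduse_dequeue_msg(&dev->send_list))) {
		if (file->f_flags & O_NONBLOCK) {
			spin_unlock(&dev->msg_lock);
			return -EAGAIN;
		}

		spin_unlock(&dev->msg_lock);
		ret = wait_event_interruptible_exclusive(dev->waitq,
					!list_empty(&dev->send_list));
		if (ret)
			return ret;

		spin_lock(&dev->msg_lock);
	}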
>
> > +
> > + memcpy(&req, &msg->req, sizeof(req));
> > + /*
> > +        * We must ensure the msg is on send_list or recv_list before
> > +        * unlocking dev->msg_lock: vduse_dev_msg_sync() may time out while
> > +        * we copy data to userspace and then call list_del() on this msg.
> > + */
> > + vduse_enqueue_msg(&dev->recv_list, msg);
> > spin_unlock(&dev->msg_lock);
> > - ret = copy_to_iter(&msg->req, size, to);
> > - spin_lock(&dev->msg_lock);
> > +
> > + ret = copy_to_iter(&req, size, to);
> > if (ret != size) {
Btw, it would be nice to explain, either in the commit log or in a
comment here, why it's still safe if a (malicious) userspace writes in
this window.
> > + spin_lock(&dev->msg_lock);
> > + /* Roll back: move msg back to send_list if still pending. */
> > + msg = vduse_find_msg(&dev->recv_list, req.request_id);
> > + if (msg)
> > + vduse_enqueue_msg(&dev->send_list, msg);
> > + spin_unlock(&dev->msg_lock);
> > ret = -EFAULT;
> > - vduse_enqueue_msg(&dev->send_list, msg);
> > - goto unlock;
> > }
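
For context: vduse_find_msg() also unlinks the matching entry before
returning it, roughly as below (a paraphrase, not the verbatim source),
which is why the rollback has to re-enqueue the msg:

	static struct vduse_dev_msg *vduse_find_msg(struct list_head *head,
						    uint32_t request_id)
	{
		struct vduse_dev_msg *msg;

		list_for_each_entry(msg, head, list) {
			if (msg->req.request_id == request_id) {
				/* Unlink so the caller can requeue it. */
				list_del(&msg->list);
				return msg;
			}
		}
		return NULL;
	}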
> > - vduse_enqueue_msg(&dev->recv_list, msg);
> > -unlock:
> > - spin_unlock(&dev->msg_lock);
> >
> > return ret;
> > }
> > --
> > 2.39.5
> >
>
> Thanks
Thanks