Message-ID: <CACGkMEsqD1qAgt8qfV=fwj1OeBeXzoOF1wXdqzJaWYR2A=C+UA@mail.gmail.com>
Date: Wed, 15 Oct 2025 12:44:47 +0800
From: Jason Wang <jasowang@...hat.com>
To: Maxime Coquelin <mcoqueli@...hat.com>
Cc: "Michael S. Tsirkin" <mst@...hat.com>, Eugenio Pérez <eperezma@...hat.com>,
Yongji Xie <xieyongji@...edance.com>, virtualization@...ts.linux.dev,
linux-kernel@...r.kernel.org, Xuan Zhuo <xuanzhuo@...ux.alibaba.com>,
Dragos Tatulea DE <dtatulea@...dia.com>
Subject: Re: [RFC 1/2] virtio_net: timeout control virtqueue commands
On Tue, Oct 14, 2025 at 6:21 PM Maxime Coquelin <mcoqueli@...hat.com> wrote:
>
> On Tue, Oct 14, 2025 at 11:25 AM Michael S. Tsirkin <mst@...hat.com> wrote:
> >
> > On Tue, Oct 14, 2025 at 11:14:40AM +0200, Maxime Coquelin wrote:
> > > On Tue, Oct 14, 2025 at 10:29 AM Michael S. Tsirkin <mst@...hat.com> wrote:
> > > >
> > > > On Tue, Oct 07, 2025 at 03:06:21PM +0200, Eugenio Pérez wrote:
> > > > > A userland device implemented through VDUSE could hold rtnl forever if
> > > > > the virtio-net driver is running on top of virtio_vdpa. Let's break the
> > > > > device if it does not return the buffer within a generously long
> > > > > timeout.
> > > >
> > > > So now I can't debug qemu with gdb, because the guest dies :(
> > > > Let's not break valid use cases, please.
> > > >
> > > >
> > > > Instead, solve it in VDUSE, probably by handling the cvq within the
> > > > kernel.
> > >
> > > Would a shadow control virtqueue implementation in the VDUSE driver work?
> > > It would systematically ack the messages sent by the virtio-net driver,
> > > assuming the userspace application will ack them as well.
> > >
> > > When the userspace application later handles the message and the handling
> > > fails, it would somehow mark the device as broken?
> > >
> > > Thanks,
> > > Maxime
> >
> > Yes, but it's a bit more convoluted than just acking them.
> > Once you use the buffer, you can get another one, and so on
> > with no limit.
> > One fix is to actually maintain device state in the
> > kernel, update it, and then notify userspace.
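A minimal sketch of the in-kernel handling suggested above. Every identifier
below is hypothetical (none of it is an existing VDUSE or virtio API); it only
illustrates acking control commands against a kernel-side copy of the state
and notifying userspace afterwards:

/*
 * Hypothetical sketch: the kernel keeps its own copy of the device
 * state, applies (and acks) control commands against that copy right
 * away, and only notifies userspace afterwards, so a stuck userspace
 * cannot keep rtnl held.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define CVQ_OK  0
#define CVQ_ERR 1

struct cvq_cmd {
	uint8_t class;
	uint8_t cmd;
	uint8_t data[64];
	size_t len;
};

/* Kernel-side cache of the device state (MAC, MQ pairs, ...). */
struct dev_state {
	uint8_t mac[6];
	uint16_t vq_pairs;
	bool broken;
};

/* Apply the command to the cached state and ack it immediately. */
static int cvq_handle_in_kernel(struct dev_state *st, const struct cvq_cmd *cmd)
{
	if (st->broken)
		return CVQ_ERR;

	switch (cmd->class) {
	case 1: /* e.g. a MAC update */
		if (cmd->len < sizeof(st->mac))
			return CVQ_ERR;
		memcpy(st->mac, cmd->data, sizeof(st->mac));
		break;
	case 4: /* e.g. an MQ vq-pairs change */
		if (cmd->len < sizeof(st->vq_pairs))
			return CVQ_ERR;
		memcpy(&st->vq_pairs, cmd->data, sizeof(st->vq_pairs));
		break;
	default:
		return CVQ_ERR;
	}

	/*
	 * Here the kernel would queue a notification for userspace and
	 * return without waiting for it.
	 */
	return CVQ_OK;
}

/* Called if userspace later reports it could not apply the change. */
static void cvq_mark_broken(struct dev_state *st)
{
	st->broken = true;
}

With this split, virtnet_send_command_reply() only ever waits on kernel code,
so a stuck VDUSE userspace process can no longer pin rtnl.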
>
> I agree, this is the way to go.
>
> Thanks for your insights,
> Maxime
A timeout still needs to be considered in this case. Or am I missing something?
Thanks
>
> >
> >
> > > >
> > > > > A less aggressive path could be taken to recover the device, like only
> > > > > resetting the control virtqueue. However, that action races with the
> > > > > device state, as the vq could be reset just after the device writes the
> > > > > OK status. Leaving it as a TODO anyway.
> > > > >
> > > > > Signed-off-by: Eugenio Pérez <eperezma@...hat.com>
> > > > > ---
> > > > > drivers/net/virtio_net.c | 10 ++++++++++
> > > > > 1 file changed, 10 insertions(+)
> > > > >
> > > > > diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> > > > > index 31bd32bdecaf..ed68ad69a019 100644
> > > > > --- a/drivers/net/virtio_net.c
> > > > > +++ b/drivers/net/virtio_net.c
> > > > > @@ -3576,6 +3576,7 @@ static bool virtnet_send_command_reply(struct virtnet_info *vi, u8 class, u8 cmd
> > > > > {
> > > > > struct scatterlist *sgs[5], hdr, stat;
> > > > > u32 out_num = 0, tmp, in_num = 0;
> > > > > + unsigned long end_time;
> > > > > bool ok;
> > > > > int ret;
> > > > >
> > > > > @@ -3614,11 +3615,20 @@ static bool virtnet_send_command_reply(struct virtnet_info *vi, u8 class, u8 cmd
> > > > >
> > > > > /* Spin for a response, the kick causes an ioport write, trapping
> > > > > * into the hypervisor, so the request should be handled immediately.
> > > > > + *
> > > > > + * Long timeout so a malicious device is not able to hold rtnl forever.
> > > > > */
> > > > > + end_time = jiffies + 30 * HZ;
> > > > > while (!virtqueue_get_buf(vi->cvq, &tmp) &&
> > > > > !virtqueue_is_broken(vi->cvq)) {
> > > > > cond_resched();
> > > > > cpu_relax();
> > > > > +
> > > > > + if (time_after(jiffies, end_time)) {
> > > > > + /* TODO Reset vq if possible? */
> > > > > + virtio_break_device(vi->vdev);
> > > > > + break;
> > > > > + }
> > > > > }
> > > > >
> > > > > unlock:
> > > > > --
> > > > > 2.51.0
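For reference, a standalone sketch of the jiffies timeout idiom used in the
hunk above; poll_once() is a hypothetical stand-in for the virtqueue poll,
not a real API:

#include <linux/jiffies.h>
#include <linux/processor.h>
#include <linux/sched.h>
#include <linux/types.h>

/*
 * Spin until poll_once() succeeds or 30 seconds of jiffies have passed.
 * time_after(a, b) is true once a is later than b, so the timeout fires
 * only after jiffies moves past end_time.
 */
static bool poll_with_timeout(bool (*poll_once)(void *data), void *data)
{
	unsigned long end_time = jiffies + 30 * HZ;

	while (!poll_once(data)) {
		cond_resched();
		cpu_relax();

		if (time_after(jiffies, end_time))
			return false;	/* timed out */
	}

	return true;
}

time_after(jiffies, end_time) only becomes true once end_time has passed,
which is the condition the loop in the patch wants before breaking the device.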
> > > >
> >
>