Message-ID: <20251016022131-mutt-send-email-mst@kernel.org>
Date: Thu, 16 Oct 2025 02:22:03 -0400
From: "Michael S. Tsirkin" <mst@...hat.com>
To: Jason Wang <jasowang@...hat.com>
Cc: Eugenio Perez Martin <eperezma@...hat.com>,
Maxime Coquelin <mcoqueli@...hat.com>,
Yongji Xie <xieyongji@...edance.com>,
virtualization@...ts.linux.dev, linux-kernel@...r.kernel.org,
Xuan Zhuo <xuanzhuo@...ux.alibaba.com>,
Dragos Tatulea DE <dtatulea@...dia.com>
Subject: Re: [RFC 1/2] virtio_net: timeout control virtqueue commands
On Thu, Oct 16, 2025 at 02:03:57PM +0800, Jason Wang wrote:
> On Thu, Oct 16, 2025 at 1:45 PM Michael S. Tsirkin <mst@...hat.com> wrote:
> >
> > On Thu, Oct 16, 2025 at 01:39:58PM +0800, Jason Wang wrote:
> > > > >
> > > > > Not exactly bufferize, record. E.g. we do not need to send
> > > > > 100 messages to enable/disable promisc mode - together they
> > > > > have no effect.
> > >
> > > Note that there's a case where multiple commands need to be sent, e.g.
> > > set rx mode. And assuming not all the commands are best-effort,
> > > kernel VDUSE still needs to wait for userspace at least for a
> > > while.
> >
> > Not wait, record. Generate the 1st command; after userspace has consumed
> > it, generate and send the second command, and so on.
>
> Right, that's what I asked about in another thread: we still need a
> timeout here.
We do not need a timeout.
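
Roughly what I have in mind - just a sketch, not actual VDUSE or
virtio_net code, and names like cvq_relay / push_command_to_userspace
are made up:

#include <stdbool.h>

/* Desired rx mode as recorded from the guest - state, not a queue. */
struct rxmode_state {
	bool promisc;
	bool allmulti;
};

struct cvq_relay {
	struct rxmode_state wanted;	/* latest state the guest asked for */
	struct rxmode_state sent;	/* state last handed to userspace */
	bool inflight;			/* at most one outstanding command */
};

/* Hypothetical helper: in a real implementation this would enqueue
 * one message for the userspace device to consume. */
static void push_command_to_userspace(const struct rxmode_state *s)
{
	(void)s;	/* stub for the sketch */
}

/* Push at most one command, reflecting only the latest state. */
static void kick_userspace_if_idle(struct cvq_relay *r)
{
	if (r->inflight)
		return;
	if (r->wanted.promisc == r->sent.promisc &&
	    r->wanted.allmulti == r->sent.allmulti)
		return;		/* 100 toggles that cancel out: nothing to send */
	r->sent = r->wanted;
	r->inflight = true;
	push_command_to_userspace(&r->sent);
}

/* Guest tweaks rx mode: record it and return, never block or time out. */
static void record_rx_mode(struct cvq_relay *r, bool promisc, bool allmulti)
{
	r->wanted.promisc = promisc;
	r->wanted.allmulti = allmulti;
	kick_userspace_if_idle(r);
}

/* Userspace consumed the previous command: maybe generate the next one. */
static void on_userspace_ack(struct cvq_relay *r)
{
	r->inflight = false;
	kick_userspace_if_idle(r);
}

No timer anywhere: if userspace is slow, the one outstanding command
just sits there until it is consumed, and the recorded state keeps
absorbing whatever the guest does in the meantime.
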
> Then I think there would not be much difference in whether it is
> VDUSE or CVQ that fails or breaks the device. Conceptually, VDUSE
> can only advertise NEEDS_RESET since it's a device implementation.
> VDUSE cannot simply break the device, as that requires synchronization
> which is not easy.
>
> > But for each bit of data, at most one command has to be sent;
> > we do not care if the guest tweaked rx mode 3 times, we only care
> > about the latest state.
>
> Yes, but I want to know what's best when VDUSE hits a userspace timeout.
>
> Thanks
Userspace should manage its own timeouts.
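
E.g. something like this on the device implementation side - a generic
sketch, not the actual VDUSE uapi; dev_fd and handle_next_command are
placeholders:

#include <poll.h>
#include <stdbool.h>
#include <stdio.h>

/* Placeholder for reading and processing one control command. */
static bool handle_next_command(int dev_fd)
{
	(void)dev_fd;	/* stub for the sketch */
	return true;
}

/* Userspace picks its own policy: block forever, poll with a timeout,
 * or give up and flag the device as broken on its side. The kernel
 * side never has to carry a timer for this. */
static bool wait_for_command(int dev_fd, int timeout_ms)
{
	struct pollfd pfd = { .fd = dev_fd, .events = POLLIN };
	int ret = poll(&pfd, 1, timeout_ms);

	if (ret < 0)
		return false;	/* poll error */
	if (ret == 0) {
		fprintf(stderr, "no command within %d ms\n", timeout_ms);
		return false;	/* userspace policy, not the kernel's problem */
	}
	return handle_next_command(dev_fd);
}
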
> >
> > --
> > MST
> >