Message-ID: <CACGkMEswyCWbXnLnm-i5ydp27kmQDvxF3gEfHhU_t0HJ7g+4Wg@mail.gmail.com>
Date: Wed, 5 Jul 2023 15:32:04 +0800
From: Jason Wang <jasowang@...hat.com>
To: "Michael S. Tsirkin" <mst@...hat.com>
Cc: Si-Wei Liu <si-wei.liu@...cle.com>,
Eugenio Pérez <eperezma@...hat.com>,
linux-kernel@...r.kernel.org, Dragos Tatulea <dtatulea@...dia.com>,
virtualization@...ts.linux-foundation.org, leiyang@...hat.com,
Xuan Zhuo <xuanzhuo@...ux.alibaba.com>
Subject: Re: [PATCH] mlx5_vdpa: offer VHOST_BACKEND_F_ENABLE_AFTER_DRIVER_OK
On Wed, Jul 5, 2023 at 2:16 PM Michael S. Tsirkin <mst@...hat.com> wrote:
>
> On Wed, Jul 05, 2023 at 01:47:44PM +0800, Jason Wang wrote:
> > On Wed, Jul 5, 2023 at 1:31 PM Michael S. Tsirkin <mst@...hat.com> wrote:
> > >
> > > On Wed, Jul 05, 2023 at 01:11:37PM +0800, Jason Wang wrote:
> > > > On Tue, Jul 4, 2023 at 6:16 PM Michael S. Tsirkin <mst@...hat.com> wrote:
> > > > >
> > > > > On Mon, Jul 03, 2023 at 05:26:02PM -0700, Si-Wei Liu wrote:
> > > > > >
> > > > > >
> > > > > > On 7/3/2023 8:46 AM, Michael S. Tsirkin wrote:
> > > > > > > On Mon, Jul 03, 2023 at 04:25:14PM +0200, Eugenio Pérez wrote:
> > > > > > > > Offer this backend feature, as mlx5 is compatible with it. It allows
> > > > > > > > doing live migration with CVQ, dynamically switching between passthrough
> > > > > > > > and shadow virtqueue.
> > > > > > > >
> > > > > > > > Signed-off-by: Eugenio Pérez <eperezma@...hat.com>
> > > > > > > Same comment.
> > > > > > to which?
> > > > > >
> > > > > > -Siwei
> > > > >
> > > > > VHOST_BACKEND_F_ENABLE_AFTER_DRIVER_OK is too narrow a use case to
> > > > > commit to as a kernel/userspace ABI: what if one wants to start rings
> > > > > in some other specific order?
> > > >
> > > > Just enable a queue by writing e.g. 1 to queue_enable in a specific order?
> > >
> > >
> > > But then, at DRIVER_OK time, we don't know how many queues there are.
> >
> > There should be a device-specific interface for this, for example
> > num_queue_pairs, so the device knows at most how many queues there
> > are. Or is there anything I miss?
>
> That's a device limitation. It does not tell the device how many are used.
I think I'm missing something: how does kick differ from queue_enable in this respect?
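
To make sure we are talking about the same thing, here is a toy sketch of
the two models from a software device's point of view (illustration only,
with hypothetical names, not taken from any real backend):

/* Toy model; compiles standalone with e.g. gcc -o vq-models vq-models.c */
#include <stdbool.h>
#include <stdio.h>

struct vq {
    bool ready;   /* set when the driver writes 1 to queue_enable */
    bool kicked;  /* set on the first kick */
};

/* ENABLE_AFTER_DRIVER_OK style: the explicit queue_enable write starts
 * the ring, so a busy-polling device can begin polling right away and
 * never needs to register a kick notifier. */
static void on_queue_enable(struct vq *vq)
{
    vq->ready = true;
}

/* ACCESS_AFTER_KICK style: the device must not touch the ring until the
 * first kick, so even a busy-polling device has to register a kick
 * notifier just to catch that single event. */
static void on_kick(struct vq *vq)
{
    vq->kicked = true;
}

int main(void)
{
    struct vq a = {0}, b = {0};

    on_queue_enable(&a);  /* model 1: started by a register write */
    on_kick(&b);          /* model 2: started by the first kick */
    printf("a ready=%d, b kicked=%d\n", a.ready, b.kicked);
    return 0;
}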
>
> > >
> > > > > As was discussed on list, a better promise is not to access the ring
> > > > > until the 1st kick. vdpa can then do a kick when it wants
> > > > > the device to start accessing rings.
> > > >
> > > > Rethinking ACCESS_AFTER_KICK, it sounds functionally equivalent
> > > > to allowing queue_enable after DRIVER_OK, but it seems to have
> > > > disadvantages:
> > > >
> > > > A busy-polling software device may disable notifications and never
> > > > register any kick notifiers. ACCESS_AFTER_KICK would introduce
> > > > overhead to those implementations.
> > > >
> > > > Thanks
> > >
> > > It's just the 1st kick, then you can disable. No?
> >
> > Yes, but:
> >
> > 1) adding hooks for queue_enable
> > 2) adding new code to register an event notifier and toggle it
> >
> > 1) seems much easier? And for most devices, it already behaves like this.
> >
> > Thanks
>
> Well, libvhost-user checks enabled queues at DRIVER_OK, does it not?
Probably, but I meant:
1) This behaviour is already supported by some devices (e.g. mlx5).
2) This is the current behaviour of QEMU for vhost-net devices:
static void virtio_net_queue_enable(VirtIODevice *vdev, uint32_t queue_index)
{
    VirtIONet *n = VIRTIO_NET(vdev);
    NetClientState *nc;
    int r;

    ....

    if (get_vhost_net(nc->peer) &&
        nc->peer->info->type == NET_CLIENT_DRIVER_TAP) {
        r = vhost_net_virtqueue_restart(vdev, nc, queue_index);
        if (r < 0) {
            error_report("unable to restart vhost net virtqueue: %d, "
                         "when resetting the queue", queue_index);
        }
    }
}
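
For completeness, the userspace flow this feature is meant to allow would
be roughly the sketch below. Hedged: error handling is trimmed, the fd
setup and the rest of the vring configuration are assumed to happen
elsewhere, and it assumes a UAPI header that exports
VHOST_BACKEND_F_ENABLE_AFTER_DRIVER_OK:

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/vhost.h>
#include <linux/virtio_config.h>

/* Sketch: enable a vq (e.g. CVQ) after DRIVER_OK, gated on the new
 * backend feature. vdpa_fd and vq_index come from the caller; features,
 * memory table and vring addresses are assumed to be set up already. */
static int enable_vq_after_driver_ok(int vdpa_fd, unsigned int vq_index)
{
    uint64_t features;
    uint8_t status;
    struct vhost_vring_state s = { .index = vq_index, .num = 1 };

    if (ioctl(vdpa_fd, VHOST_GET_BACKEND_FEATURES, &features))
        return -1;
    if (!(features & (1ULL << VHOST_BACKEND_F_ENABLE_AFTER_DRIVER_OK)))
        return -1;  /* must enable before DRIVER_OK instead */
    if (ioctl(vdpa_fd, VHOST_SET_BACKEND_FEATURES, &features))
        return -1;

    /* DRIVER_OK first ... */
    if (ioctl(vdpa_fd, VHOST_VDPA_GET_STATUS, &status))
        return -1;
    status |= VIRTIO_CONFIG_S_DRIVER_OK;
    if (ioctl(vdpa_fd, VHOST_VDPA_SET_STATUS, &status))
        return -1;

    /* ... then the queue_enable equivalent afterwards. */
    return ioctl(vdpa_fd, VHOST_VDPA_SET_VRING_ENABLE, &s);
}

The point is just the ordering: DRIVER_OK first,
VHOST_VDPA_SET_VRING_ENABLE afterwards.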
Thanks
>
> > >
> > > > >
> > > > > > >
> > > > > > > > ---
> > > > > > > > drivers/vdpa/mlx5/net/mlx5_vnet.c | 7 +++++++
> > > > > > > > 1 file changed, 7 insertions(+)
> > > > > > > >
> > > > > > > > diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> > > > > > > > index 9138ef2fb2c8..5f309a16b9dc 100644
> > > > > > > > --- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
> > > > > > > > +++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> > > > > > > > @@ -7,6 +7,7 @@
> > > > > > > >  #include <uapi/linux/virtio_net.h>
> > > > > > > >  #include <uapi/linux/virtio_ids.h>
> > > > > > > >  #include <uapi/linux/vdpa.h>
> > > > > > > > +#include <uapi/linux/vhost_types.h>
> > > > > > > >  #include <linux/virtio_config.h>
> > > > > > > >  #include <linux/auxiliary_bus.h>
> > > > > > > >  #include <linux/mlx5/cq.h>
> > > > > > > > @@ -2499,6 +2500,11 @@ static void unregister_link_notifier(struct mlx5_vdpa_net *ndev)
> > > > > > > >  	flush_workqueue(ndev->mvdev.wq);
> > > > > > > >  }
> > > > > > > >
> > > > > > > > +static u64 mlx5_vdpa_get_backend_features(const struct vdpa_device *vdpa)
> > > > > > > > +{
> > > > > > > > +	return BIT_ULL(VHOST_BACKEND_F_ENABLE_AFTER_DRIVER_OK);
> > > > > > > > +}
> > > > > > > > +
> > > > > > > >  static int mlx5_vdpa_set_driver_features(struct vdpa_device *vdev, u64 features)
> > > > > > > >  {
> > > > > > > >  	struct mlx5_vdpa_dev *mvdev = to_mvdev(vdev);
> > > > > > > > @@ -3140,6 +3146,7 @@ static const struct vdpa_config_ops mlx5_vdpa_ops = {
> > > > > > > >  	.get_vq_align = mlx5_vdpa_get_vq_align,
> > > > > > > >  	.get_vq_group = mlx5_vdpa_get_vq_group,
> > > > > > > >  	.get_device_features = mlx5_vdpa_get_device_features,
> > > > > > > > +	.get_backend_features = mlx5_vdpa_get_backend_features,
> > > > > > > >  	.set_driver_features = mlx5_vdpa_set_driver_features,
> > > > > > > >  	.get_driver_features = mlx5_vdpa_get_driver_features,
> > > > > > > >  	.set_config_cb = mlx5_vdpa_set_config_cb,
> > > > > > > > --
> > > > > > > > 2.39.3