Message-ID: <20250915184200-mutt-send-email-mst@kernel.org>
Date: Mon, 15 Sep 2025 18:42:20 -0400
From: "Michael S. Tsirkin" <mst@...hat.com>
To: Eugenio Pérez <eperezma@...hat.com>
Cc: Cindy Lu <lulu@...hat.com>, Stefano Garzarella <sgarzare@...hat.com>,
Xuan Zhuo <xuanzhuo@...ux.alibaba.com>,
Laurent Vivier <lvivier@...hat.com>, virtualization@...ts.linux.dev,
linux-kernel@...r.kernel.org, jasowang@...hat.com,
Yongji Xie <xieyongji@...edance.com>,
Maxime Coquelin <mcoqueli@...hat.com>
Subject: Re: [PATCH 0/6] Add multiple address spaces support to VDUSE
On Tue, Aug 26, 2025 at 01:27:03PM +0200, Eugenio Pérez wrote:
> When used by the vhost-vDPA bus driver for a VM, the control virtqueue
> should be shadowed by the userspace VMM (QEMU) instead of being
> assigned directly to the guest. This is because QEMU needs to know the
> device state in order to start and stop the device correctly (e.g. for
> Live Migration).
>
> This requires isolating the memory mapping of the control virtqueue
> presented by vhost-vDPA, to prevent the guest from accessing it
> directly.
>
> This series adds support for multiple address spaces to the VDUSE
> device, allowing selective virtqueue isolation through address space
> IDs (ASIDs).
There hasn't been a new version of this yet, has there?
> The VDUSE device needs to report (a sketch follows the list):
> * Number of virtqueue groups
> * Association of each vq group with each virtqueue
> * Number of address spaces supported.
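>
> For illustration, a minimal sketch of how this reporting could look in
> the uapi (struct and field names here are hypothetical placeholders,
> not the final VDUSE interface):
>
>     #include <linux/types.h>
>
>     /* Hypothetical sketch only, not the final VDUSE uapi. */
>     struct vduse_dev_group_info {
>         __u32 ngroups;   /* number of virtqueue groups */
>         __u32 nas;       /* number of address spaces supported */
>     };
>
>     /* Hypothetical: query the group a given virtqueue belongs to. */
>     struct vduse_vq_group_info {
>         __u32 vq_index;  /* in: virtqueue index */
>         __u32 group;     /* out: vq group of that virtqueue */
>     };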
>
> Then, the vDPA driver can modify the ASID assigned to each VQ group to
> isolate its memory AS. This aligns VDUSE with vdpa_sim and NVIDIA's
> mlx5 devices, which already support ASIDs.
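>
> On the vhost-vDPA side this builds on the existing group/ASID uapi
> (VHOST_VDPA_GET_VRING_GROUP and VHOST_VDPA_SET_GROUP_ASID). A minimal
> userspace sketch, assuming an open vhost-vDPA fd and that ASID 1 is
> free for the shadowed CVQ:
>
>     #include <sys/ioctl.h>
>     #include <linux/vhost.h>
>
>     /* Move the group containing vq cvq_index into its own ASID so its
>      * mappings are isolated from the guest's (ASID 0). */
>     static int isolate_vq_group(int vhost_fd, unsigned int cvq_index,
>                                 unsigned int asid)
>     {
>         struct vhost_vring_state state = { .index = cvq_index };
>
>         /* in: .index = vq index; out: .num = its vq group */
>         if (ioctl(vhost_fd, VHOST_VDPA_GET_VRING_GROUP, &state) < 0)
>             return -1;
>
>         /* in: .index = vq group, .num = ASID to assign */
>         state.index = state.num;
>         state.num = asid;
>         return ioctl(vhost_fd, VHOST_VDPA_SET_GROUP_ASID, &state);
>     }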
>
> This helps to isolate the environments of the virtqueues that will not
> be assigned directly to the guest, e.g. the control virtqueue in the
> case of virtio-net.
>
> This series depends on the series that reworks the virtio mapping API:
> https://lore.kernel.org/all/20250821064641.5025-1-jasowang@redhat.com/
>
> Also, to be able to test this patch, the user needs to manually revert
> 56e71885b034 ("vduse: Temporarily fail if control queue feature requested").
>
> PATCH v1:
> * Fix: Remove BIT_ULL(VIRTIO_S_*), as _S_ is already the bit (Maxime)
> * Use vduse_vq_group_int directly instead of an empty struct in union
> virtio_map.
>
> RFC v3:
> * Increase VDUSE_MAX_VQ_GROUPS to 0xffff (Jason). It was set to a lower
> value to reduce memory consumption, but vqs are already limited to that
> value and userspace VDUSE is able to allocate that many vqs. Also, it
> is a dynamic array now. The same applies to ASIDs.
> * Move the valid vq groups range check to vduse_validate_config.
> * Embed vduse_iotlb_entry into vduse_iotlb_entry_v2.
> * Use of array_index_nospec in VDUSE device ioctls.
> * Move the umem mutex to the asid struct so there is no contention
> between ASIDs.
> * Remove the descs vq group capability as it will not be used and we can
> add it on top.
> * Do not ask for vq groups if the number of vq groups is < 2.
> * Remove TODO about merging VDUSE_IOTLB_GET_FD ioctl with
> VDUSE_IOTLB_GET_INFO.
>
> RFC v2:
> * Cache group information in kernel, as we need to provide the vq map
> tokens properly.
> * Add descs vq group to optimize SVQ forwarding and support indirect
> descriptors out of the box.
> * Make the iotlb entry the last member of vduse_iotlb_entry_v2 so the
> first part of the struct is the same.
> * Fixes detected while testing with OVS+VDUSE.
>
> Eugenio Pérez (6):
> vduse: add v1 API definition
> vduse: add vq group support
> vduse: return internal vq group struct as map token
> vduse: create vduse_as to make it an array
> vduse: add vq group asid support
> vduse: bump version number
>
> drivers/vdpa/vdpa_user/vduse_dev.c | 385 ++++++++++++++++++++++-------
> include/linux/virtio.h | 6 +-
> include/uapi/linux/vduse.h | 73 +++++-
> 3 files changed, 373 insertions(+), 91 deletions(-)
>
> --
> 2.51.0