Message-ID: <CACGkMEtyyPJQ3t_ckwZyNRHW2_fqm=09DEe-__Rvz0pQoUqtsQ@mail.gmail.com>
Date: Mon, 19 Jan 2026 16:34:11 +0800
From: Jason Wang <jasowang@...hat.com>
To: Eugenio Perez Martin <eperezma@...hat.com>
Cc: "Michael S . Tsirkin" <mst@...hat.com>, Laurent Vivier <lvivier@...hat.com>, linux-kernel@...r.kernel.org,
Xuan Zhuo <xuanzhuo@...ux.alibaba.com>, Maxime Coquelin <mcoqueli@...hat.com>,
Cindy Lu <lulu@...hat.com>, virtualization@...ts.linux.dev,
Yongji Xie <xieyongji@...edance.com>, Stefano Garzarella <sgarzare@...hat.com>
Subject: Re: [PATCH v14 11/13] vduse: add vq group asid support
On Mon, Jan 19, 2026 at 4:10 PM Eugenio Perez Martin
<eperezma@...hat.com> wrote:
>
> On Mon, Jan 19, 2026 at 8:17 AM Jason Wang <jasowang@...hat.com> wrote:
> >
> > On Fri, Jan 16, 2026 at 10:05 PM Eugenio Pérez <eperezma@...hat.com> wrote:
> > >
> > > Add support for assigning Address Space Identifiers (ASIDs) to each VQ
> > > group. This enables mapping each group into a distinct memory space.
> > >
> > > The vq group to ASID association is now protected by a rwlock, but the
> > > mutex domain_lock keeps protecting the domains of all ASIDs, as some
> > > operations, like the ones related to the bounce buffer size, still
> > > require locking all the ASIDs.
> > >
> > > Signed-off-by: Eugenio Pérez <eperezma@...hat.com>
> > >
> > > ---
> > > Future improvements can include performance optimizations on top, like
> > > moving to RCU or thread-synchronized atomics, or hardening by tracking
> > > the ASID or ASID hashes in unused bits of the DMA address.
> > >
> > > Tested virtio_vdpa by manually adding two threads in vduse_set_status:
> > > one of them modifies the vq group 0 ASID and the other one maps and
> > > unmaps memory continuously. After a while, the two threads stop and the
> > > usual work continues. Tested with version 0, version 1 with the old
> > > ioctl, and version 1 with the new ioctl.
> > >
> > > Tested with vhost_vdpa by migrating a VM while pinging over OVS+VDUSE.
> > > A few workarounds were needed:
> > > * Do not enable CVQ before data vqs in QEMU, as VDUSE does not forward
> > > the enable message to the userland device. This will be solved in the
> > > future.
> > > * Share the suspended state between all vhost devices in QEMU:
> > > https://lists.nongnu.org/archive/html/qemu-devel/2025-11/msg02947.html
> > > * Implement a fake VDUSE suspend vdpa operation callback that always
> > > returns true in the kernel. DPDK suspends the device at the first
> > > GET_VRING_BASE.
> > > * Remove the CVQ blocker in ASID.
> > >
> > > The driver vhost_vdpa was also tested with version 0, version 1 with the
> > > old ioctl, version 1 with the new ioctl but only one ASID, and version 1
> > > with many ASIDs.
> > >
> >
> > Looks good overall, but I spotted a small issue:
> >
> > int vduse_domain_add_user_bounce_pages(struct vduse_iova_domain *domain,
> >                                        struct page **pages, int count)
> > {
> >         struct vduse_bounce_map *map, *head_map;
> >         ...
> >
> >         /* Now we don't support partial mapping */
> >         if (count != (domain->bounce_size >> PAGE_SHIFT))
> >                 return -EINVAL;
> >
> > Here we still use domain->bounce_size even though we support multiple
> > address spaces; this conflicts with the case without userspace memory.
> >
>
> I don't follow you. My understanding from the previous discussion is
> that the bounce size is distributed evenly per AS. Should we just have
> a global bounce buffer size and check that the total amount of memory
> added across all domains is less than that bounce size?
I meant we require bounce_size / nas to be the bounce buffer size of
each AS. But for userspace-registered memory, it requires bounce_size
per AS.
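
To make that concrete, a minimal sketch of what I mean (domain->nas and
the helper name are just illustrative assumptions here, not the actual
patch code):

static size_t vduse_domain_as_bounce_size(struct vduse_iova_domain *domain)
{
        /*
         * Kernel bounce pages: the global bounce_size is distributed
         * evenly, so each AS only gets bounce_size / nas.
         */
        return domain->bounce_size / domain->nas;
}

By contrast, vduse_domain_add_user_bounce_pages() above keeps requiring
a full bounce_size worth of pages for each AS, so the two cases need
different checks.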
Thanks
>