Message-ID: <CACGkMEv+-zbtPYsRam_8XB1hLCB-Gh5xaRLxpF_gLWZNnG2OEg@mail.gmail.com>
Date: Tue, 12 Aug 2025 11:02:16 +0800
From: Jason Wang <jasowang@...hat.com>
To: Eugenio Perez Martin <eperezma@...hat.com>
Cc: "Michael S . Tsirkin" <mst@...hat.com>, Cindy Lu <lulu@...hat.com>,
Yongji Xie <xieyongji@...edance.com>, Stefano Garzarella <sgarzare@...hat.com>,
virtualization@...ts.linux.dev, Laurent Vivier <lvivier@...hat.com>,
linux-kernel@...r.kernel.org, Xuan Zhuo <xuanzhuo@...ux.alibaba.com>,
Maxime Coquelin <mcoqueli@...hat.com>
Subject: Re: [RFC v2 4/7] vduse: return internal vq group struct as map token
On Mon, Aug 11, 2025 at 7:04 PM Eugenio Perez Martin
<eperezma@...hat.com> wrote:
>
> On Mon, Aug 11, 2025 at 5:11 AM Jason Wang <jasowang@...hat.com> wrote:
> >
> > On Thu, Aug 7, 2025 at 7:58 PM Eugenio Pérez <eperezma@...hat.com> wrote:
> > >
> > > Return the internal struct that represents the vq group as the
> > > virtqueue map token, instead of the device.
> >
> > Note that Michael prefers to use the iova domain. This indeed seems to
> > be better.
> >
>
> Well, using the iova domain would remove one indirection in the pointer
> chase, but it would be problematic for the caller to store it as the
> token.
>
> And we would need to add some way to guarantee that the ASID of a vq
> group is not changed in the middle of an operation by an ioctl. IOW,
> the vq_group_internal struct pointer is constant for the whole lifetime
> of the device, while the iova_domain is not.
I will post a new version of the DMA rework and switch to using the
iova domain there. Let's see if it works then.
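
If we go that way, the map path would need to dereference the domain
under RCU (or a lock) instead of caching it. Roughly like this
(untested sketch, assuming the old domain is only freed after an RCU
grace period; vduse_domain_map_page() is the existing helper from
iova_domain.c):

static dma_addr_t vduse_map_page(struct vduse_vq_group *group,
                                 struct page *page, unsigned long offset,
                                 size_t size, enum dma_data_direction dir,
                                 unsigned long attrs)
{
        struct vduse_iova_domain *domain;
        dma_addr_t addr;

        /*
         * Pin the group's current domain for the duration of the map
         * operation: a concurrent ASID change publishes a new domain
         * and frees the old one only after a grace period.
         */
        rcu_read_lock();
        domain = rcu_dereference(group->domain);
        addr = vduse_domain_map_page(domain, page, offset, size, dir, attrs);
        rcu_read_unlock();

        return addr;
}
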
>
> > > This allows the DMA functions to access
> >
> > s/DMA/map/
> >
>
> Ouch, thanks for the catch!
>
> > > the information per group.
> > >
> > > At the moment all the virtqueues share the same vq group, which
> > > can only point to ASID 0. This change prepares the infrastructure
> > > for actual per-group address space handling.
> > >
> > > Signed-off-by: Eugenio Pérez <eperezma@...hat.com>
> >
> > Thanks
> >
>
Thanks