Message-ID: <CACGkMEto2vc3rYO7aKJvqgRFE6QFDrtxRbHhXrVb=91vvqJ9=w@mail.gmail.com>
Date: Thu, 18 Sep 2025 14:01:14 +0800
From: Jason Wang <jasowang@...hat.com>
To: Eugenio Perez Martin <eperezma@...hat.com>
Cc: "Michael S . Tsirkin" <mst@...hat.com>, Stefano Garzarella <sgarzare@...hat.com>, 
	Xuan Zhuo <xuanzhuo@...ux.alibaba.com>, linux-kernel@...r.kernel.org, 
	Maxime Coquelin <mcoqueli@...hat.com>, Yongji Xie <xieyongji@...edance.com>, 
	Cindy Lu <lulu@...hat.com>, Laurent Vivier <lvivier@...hat.com>, virtualization@...ts.linux.dev
Subject: Re: [PATCH v2 4/7] vduse: return internal vq group struct as map token

On Thu, Sep 18, 2025 at 12:17 AM Eugenio Perez Martin
<eperezma@...hat.com> wrote:
>
> On Wed, Sep 17, 2025 at 10:37 AM Jason Wang <jasowang@...hat.com> wrote:
> >
> > On Tue, Sep 16, 2025 at 9:09 PM Eugenio Pérez <eperezma@...hat.com> wrote:
> > >
> > > Return the internal struct that represents the vq group as virtqueue map
> > > token, instead of the device.  This allows the map functions to access
> > > the information per group.
> > >
> > > At this moment all the virtqueues share the same vq group, which can
> > > only point to ASID 0.  This change prepares the infrastructure for
> > > actual per-group address space handling.
> > >
> > > Signed-off-by: Eugenio Pérez <eperezma@...hat.com>
> > > ---
> > > RFC v3:
> > > * Make the vq groups a dynamic array to support an arbitrary number of
> > >   them.
> > > ---
> > >  drivers/vdpa/vdpa_user/vduse_dev.c | 52 ++++++++++++++++++++++++------
> > >  include/linux/virtio.h             |  6 ++--
> > >  2 files changed, 46 insertions(+), 12 deletions(-)
> > >
> > > diff --git a/drivers/vdpa/vdpa_user/vduse_dev.c b/drivers/vdpa/vdpa_user/vduse_dev.c
> > > index 42f8807911d4..9c12ae72abc2 100644
> > > --- a/drivers/vdpa/vdpa_user/vduse_dev.c
> > > +++ b/drivers/vdpa/vdpa_user/vduse_dev.c
> > > @@ -23,6 +23,7 @@
> > >  #include <linux/uio.h>
> > >  #include <linux/vdpa.h>
> > >  #include <linux/nospec.h>
> > > +#include <linux/virtio.h>
> > >  #include <linux/vmalloc.h>
> > >  #include <linux/sched/mm.h>
> > >  #include <uapi/linux/vduse.h>
> > > @@ -85,6 +86,10 @@ struct vduse_umem {
> > >         struct mm_struct *mm;
> > >  };
> > >
> > > +struct vduse_vq_group_int {
> > > +       struct vduse_dev *dev;
> > > +};
> >
> > I remember we had some discussion over this, and the conclusion was
> > to use a better name.
> >
> > Maybe just vduse_vq_group?
> >
>
> Good catch, I also hate the _int suffix :). vduse_vq_group was used in
> the vduse uapi in a previous series, but now there is no reason for it.
> Replacing it, thanks!
>
> > And to be conceptually correct, we need to use iova_domain here
> > instead of the vduse_dev. More below.
> >
> > > +
> > >  struct vduse_dev {
> > >         struct vduse_vdpa *vdev;
> > >         struct device *dev;
> > > @@ -118,6 +123,7 @@ struct vduse_dev {
> > >         u32 vq_align;
> > >         u32 ngroups;
> > >         struct vduse_umem *umem;
> > > +       struct vduse_vq_group_int *groups;
> > >         struct mutex mem_lock;
> > >         unsigned int bounce_size;
> > >         rwlock_t domain_lock;
> > > @@ -602,6 +608,15 @@ static u32 vduse_get_vq_group(struct vdpa_device *vdpa, u16 idx)
> > >         return dev->vqs[idx]->vq_group;
> > >  }
> > >
> > > +static union virtio_map vduse_get_vq_map(struct vdpa_device *vdpa, u16 idx)
> > > +{
> > > +       struct vduse_dev *dev = vdpa_to_vduse(vdpa);
> > > +       u32 vq_group = dev->vqs[idx]->vq_group;
> > > +       union virtio_map ret = { .group = &dev->groups[vq_group] };
> > > +
> > > +       return ret;
> > > +}
> > > +
> > >  static int vduse_vdpa_get_vq_state(struct vdpa_device *vdpa, u16 idx,
> > >                                 struct vdpa_vq_state *state)
> > >  {
> > > @@ -822,6 +837,7 @@ static const struct vdpa_config_ops vduse_vdpa_config_ops = {
> > >         .get_vq_affinity        = vduse_vdpa_get_vq_affinity,
> > >         .reset                  = vduse_vdpa_reset,
> > >         .set_map                = vduse_vdpa_set_map,
> > > +       .get_vq_map             = vduse_get_vq_map,
> > >         .free                   = vduse_vdpa_free,
> > >  };
> > >
> > > @@ -829,7 +845,8 @@ static void vduse_dev_sync_single_for_device(union virtio_map token,
> > >                                              dma_addr_t dma_addr, size_t size,
> > >                                              enum dma_data_direction dir)
> > >  {
> > > -       struct vduse_iova_domain *domain = token.iova_domain;
> > > +       struct vduse_dev *vdev = token.group->dev;
> > > +       struct vduse_iova_domain *domain = vdev->domain;
> >
> > If we really want to do this, we need to move the iova_domain into the group.
> >
>
> It's done in patches on top to make each patch smaller. This patch is
> focused on just changing the type of the union. Would you prefer me to
> reorder the patches so that part is done earlier?

I think it would be better for logical completeness.
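
To illustrate what I mean, here is a rough userspace sketch with stubbed
types (not kernel code; the per-group domain pointer and the plain
vduse_vq_group name are just what we discussed in this thread, not what
the patch currently does):

```c
/* Sketch only: stubbed stand-ins for the kernel types, modeling the
 * proposal that each vq group owns its own iova domain and that the
 * map token points at the group rather than the device. */

struct vduse_iova_domain {
	int asid;				/* stub */
};

struct vduse_vq_group {				/* renamed from vduse_vq_group_int */
	struct vduse_iova_domain *domain;	/* moved here from vduse_dev */
	struct vduse_dev *dev;
};

struct vduse_vq {
	unsigned int vq_group;			/* index into dev->groups */
};

struct vduse_dev {
	struct vduse_vq_group *groups;		/* ngroups entries */
	struct vduse_vq **vqs;
};

union virtio_map {				/* stub of the real union */
	struct vduse_vq_group *group;
};

/* The map callback hands back the group; the iova domain is then
 * reachable per group, without going through the device. */
static union virtio_map vduse_get_vq_map(struct vduse_dev *dev,
					 unsigned int idx)
{
	unsigned int vq_group = dev->vqs[idx]->vq_group;
	union virtio_map ret = { .group = &dev->groups[vq_group] };

	return ret;
}
```

With that layout, vduse_dev_sync_single_for_device() and friends would do
token.group->domain directly instead of token.group->dev->domain, which
is why moving the domain first makes this patch logically complete.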

Thanks
