Message-ID: <ZkbnvnaiV9nCHQOb@nvidia.com>
Date: Thu, 16 May 2024 22:14:38 -0700
From: Nicolin Chen <nicolinc@...dia.com>
To: Jason Gunthorpe <jgg@...dia.com>
CC: <will@...nel.org>, <robin.murphy@....com>, <kevin.tian@...el.com>,
<suravee.suthikulpanit@....com>, <joro@...tes.org>,
<linux-kernel@...r.kernel.org>, <iommu@...ts.linux.dev>,
<linux-arm-kernel@...ts.infradead.org>, <linux-tegra@...r.kernel.org>,
<yi.l.liu@...el.com>, <eric.auger@...hat.com>, <vasant.hegde@....com>,
<jon.grimm@....com>, <santosh.shukla@....com>, <Dhaval.Giani@....com>,
<shameerali.kolothum.thodi@...wei.com>
Subject: Re: [PATCH RFCv1 08/14] iommufd: Add IOMMU_VIOMMU_SET_DEV_ID ioctl
On Sun, May 12, 2024 at 11:58:27AM -0300, Jason Gunthorpe wrote:
> On Fri, Apr 12, 2024 at 08:47:05PM -0700, Nicolin Chen wrote:
> > Introduce a new ioctl to set a per-viommu device virtual id that should be
> > linked to the physical device id (or just a struct device pointer).
> >
> > Since a viommu (user space IOMMU instance) can have multiple devices, while
> > it's also not ideal to confine a device to a single user space IOMMU
> > instance, these two shouldn't just have a 1:1 mapping. Add two xarrays in
> > their structures to bind them bidirectionally.
>
> Since I would like to retain the dev_id, I think this is probably
> better done with an allocated struct per dev-id:
>
> struct dev_id {
> 	struct iommufd_device *idev;
> 	struct iommufd_viommu *viommu;
> 	u64 vdev_id;
> 	u64 driver_private; // Ie the driver can store the pSID here
> 	struct list_head idev_entry;
> };
I implemented it with a small tweak, to align with viommu_alloc
and vqueue_alloc:
// core
struct iommufd_vdev_id {
	struct iommufd_viommu *viommu;
	struct device *dev;
	u64 vdev_id;
	struct list_head idev_item;
};

// driver
struct my_driver_vdev_id {
	struct iommufd_vdev_id core;
	unsigned int private_attrs;
};
static struct iommufd_vdev_id *
my_driver_set_vdev_id(struct iommufd_viommu *viommu,
		      struct device *dev, u64 id)
{
	struct my_driver_vdev_id *my_vdev_id;

	my_vdev_id = kzalloc(sizeof(*my_vdev_id), GFP_KERNEL);
	if (!my_vdev_id)
		return ERR_PTR(-ENOMEM);
	.... /* set private_attrs */
	return &my_vdev_id->core;
}

static void my_driver_unset_vdev_id(struct iommufd_vdev_id *vdev_id)
{
	struct my_driver_vdev_id *my_vdev_id =
		container_of(vdev_id, struct my_driver_vdev_id, core);

	.... /* unset private_attrs */
	kfree(my_vdev_id);
}
Please let me know if you would rather have it inverted, as you wrote above.
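
FWIW, here is roughly how the core glues these together in the ioctl
path on my end -- just a sketch, with locking and error unwinding
trimmed, and the vdev_id_list field in idev being a name I made up:

// core (sketch)
	struct iommufd_vdev_id *vdev_id;

	vdev_id = viommu->ops->set_vdev_id(viommu, idev->dev, cmd->vdev_id);
	if (IS_ERR(vdev_id)) {
		rc = PTR_ERR(vdev_id);
		goto out_put_viommu;
	}
	/* core fills the common fields and links it to the idev */
	vdev_id->viommu = viommu;
	vdev_id->dev = idev->dev;
	vdev_id->vdev_id = cmd->vdev_id;
	list_add_tail(&vdev_id->idev_item, &idev->vdev_id_list);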
> > @@ -135,7 +135,16 @@ void iommufd_device_destroy(struct iommufd_object *obj)
> >  {
> >  	struct iommufd_device *idev =
> >  		container_of(obj, struct iommufd_device, obj);
> > +	struct iommufd_viommu *viommu;
> > +	unsigned long index;
> >
> > +	xa_for_each(&idev->viommus, index, viommu) {
> > +		if (viommu->ops->unset_dev_id)
> > +			viommu->ops->unset_dev_id(viommu, idev->dev);
> > +		xa_erase(&viommu->idevs, idev->obj.id);
> > +		xa_erase(&idev->viommus, index);
> > +	}
>
> Then this turns into list_for_each(idev->viommu_vdevid_list)
Done.
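
FWIW, the unbind path turns into something like this (a sketch; the
list and ops names follow the structs above):

	struct iommufd_vdev_id *vdev_id, *tmp;

	list_for_each_entry_safe(vdev_id, tmp, &idev->vdev_id_list, idev_item) {
		list_del(&vdev_id->idev_item);
		/* the driver erases its xarray entry and frees the struct */
		vdev_id->viommu->ops->unset_vdev_id(vdev_id);
	}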
> > +int iommufd_viommu_set_device_id(struct iommufd_ucmd *ucmd)
> > +{
..
> > +	rc = xa_alloc(&idev->viommus, &viommu_id, viommu,
> > +		      XA_LIMIT(viommu->obj.id, viommu->obj.id),
> > +		      GFP_KERNEL_ACCOUNT);
> > +	if (rc)
> > +		goto out_put_viommu;
> > +
> > +	rc = xa_alloc(&viommu->idevs, &dev_id, idev,
> > +		      XA_LIMIT(dev_id, dev_id), GFP_KERNEL_ACCOUNT);
> > +	if (rc)
> > +		goto out_xa_erase_viommu;
>
> Both of these are API mis-uses, you don't want an allocating xarray
> you just want to use xa_cmpxchg
>
> Put an xarray in the viommu object and fill it with pointers to the
> struct dev_id thing above
>
> The driver can piggyback on this xarray too if it wants, so we only
> need one.
I also moved the xa_cmpxchg to the driver, as I feel that the vdev_id
here is very driver/iommu specific. There can be some complication if
the iommufd core handles this u64 vdev_id: most likely we will use the
u64 vdev_id to index the xarray, whose xa_index is an unsigned long,
for a fast vdev_id-to-pdev_id lookup, yet only the driver knows
whether the u64 vdev_id fits in an unsigned long or not.
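
So the compatibility check and the xa_cmpxchg end up in the driver's
set_vdev_id, along these lines (a sketch; my_viommu and vdev_ids are
made-up driver-side names):

	void *old;

	/* only the driver knows if the u64 id fits in an unsigned long xa_index */
	if (id != (unsigned long)id)
		return ERR_PTR(-EINVAL);

	old = xa_cmpxchg(&my_viommu->vdev_ids, (unsigned long)id,
			 NULL, my_vdev_id, GFP_KERNEL);
	if (xa_is_err(old))
		return ERR_PTR(xa_err(old));
	if (old) /* the id is already in use */
		return ERR_PTR(-EBUSY);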
Also, we have a list_head in the idev structure, so a device unbind
will iterate the list and unset all the vdev_ids in it while the
viommu stays. I wonder if we need another list_head in the viommu
structure, so that a viommu teardown can iterate its own list and
unset all the vdev_ids on its side while the device (idev) stays. I
don't see a use case for that though... any thoughts?
Thanks
Nicolin