Message-ID: <Zrzkwu7srmLTch+a@Asurada-Nvidia>
Date: Wed, 14 Aug 2024 10:09:22 -0700
From: Nicolin Chen <nicolinc@...dia.com>
To: <jgg@...dia.com>, <kevin.tian@...el.com>, <will@...nel.org>
CC: <joro@...tes.org>, <suravee.suthikulpanit@....com>,
	<robin.murphy@....com>, <dwmw2@...radead.org>, <baolu.lu@...ux.intel.com>,
	<shuah@...nel.org>, <linux-kernel@...r.kernel.org>, <iommu@...ts.linux.dev>,
	<linux-arm-kernel@...ts.infradead.org>, <linux-kselftest@...r.kernel.org>
Subject: Re: [PATCH v1 05/16] iommufd/viommu: Add
 IOMMU_VIOMMU_SET/UNSET_VDEV_ID ioctl

On Wed, Aug 07, 2024 at 01:10:46PM -0700, Nicolin Chen wrote:
> @@ -135,7 +135,14 @@ void iommufd_device_destroy(struct iommufd_object *obj)
>  {
>         struct iommufd_device *idev =
>                 container_of(obj, struct iommufd_device, obj);
> +       struct iommufd_vdev_id *vdev_id, *curr;
> 
> +       list_for_each_entry(vdev_id, &idev->vdev_id_list, idev_item) {
> +               curr = xa_cmpxchg(&vdev_id->viommu->vdev_ids, vdev_id->vdev_id,
> +                                 vdev_id, NULL, GFP_KERNEL);
> +               WARN_ON(curr != vdev_id);
> +               kfree(vdev_id);
> +       }

Kevin already pointed out during the RFC review that we should
probably do one vdev_id per idev, and Jason was okay with either
way. I hadn't planned to change this part until this week, while
working on the VIRQ series.

My rethinking is that an idev is attached to one (and only one)
nested HWPT, and that nested HWPT is associated with one (and
only one) VIOMMU object. So it's unlikely we'd ever have a second
vdev_id, i.e. a single idev->vdev_id is enough.
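
Roughly (just a sketch; the field names are tentative), the destroy
path quoted above would then shrink to something like:

void iommufd_device_destroy(struct iommufd_object *obj)
{
        struct iommufd_device *idev =
                container_of(obj, struct iommufd_device, obj);
        struct iommufd_vdev_id *vdev_id = idev->vdev_id;

        if (vdev_id) {
                struct iommufd_vdev_id *curr;

                /* At most one vdev_id per idev, so no list walk needed */
                curr = xa_cmpxchg(&vdev_id->viommu->vdev_ids,
                                  vdev_id->vdev_id, vdev_id, NULL,
                                  GFP_KERNEL);
                WARN_ON(curr != vdev_id);
                kfree(vdev_id);
        }
        ...
}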

This helps us build a device-based virq report function:
+void iommufd_device_report_virq(struct device *dev, unsigned int data_type,
+                               void *data_ptr, size_t data_len);

Reusing Baolu's attach-handle work, I built a link from device to viommu:
struct device -> struct iommu_group -> struct iommu_attach_handle
-> struct iommufd_attach_handle -> struct iommufd_device (idev)
-> struct iommufd_vdev_id (idev->vdev_id)
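
A rough sketch of how the report function could walk that chain,
assuming the attach handle lookup from Baolu's series and a made-up
iommufd_viommu_report_irq() on the vIOMMU side (all names/fields
here are tentative):

void iommufd_device_report_virq(struct device *dev, unsigned int data_type,
                                void *data_ptr, size_t data_len)
{
        struct iommu_attach_handle *handle;
        struct iommufd_attach_handle *ifd_handle;
        struct iommufd_vdev_id *vdev_id;

        /* Attach handle lookup via the group's pasid array (Baolu's work) */
        handle = iommu_attach_handle_get(dev->iommu_group, IOMMU_NO_PASID, 0);
        if (IS_ERR(handle))
                return;

        ifd_handle = container_of(handle, struct iommufd_attach_handle, handle);
        vdev_id = ifd_handle->idev->vdev_id;
        if (!vdev_id)
                return;

        /* Hand off to the vIOMMU object that owns this virtual device ID */
        iommufd_viommu_report_irq(vdev_id->viommu, vdev_id->vdev_id,
                                  data_type, data_ptr, data_len);
}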

The vdev_id struct holds the viommu pointer and the virtual ID,
allowing us to add two more helpers:
+struct iommufd_viommu *iommufd_device_get_viommu(struct device *dev);
+u64 iommufd_device_get_virtual_id(struct device *dev);
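
Their implementations would be thin wrappers around that same chain.
Something like the below, where iommufd_device_from_dev() is a
hypothetical helper doing the dev -> attach handle -> idev walk shown
above:

struct iommufd_viommu *iommufd_device_get_viommu(struct device *dev)
{
        struct iommufd_device *idev = iommufd_device_from_dev(dev);

        if (!idev || !idev->vdev_id)
                return NULL;
        return idev->vdev_id->viommu;
}

u64 iommufd_device_get_virtual_id(struct device *dev)
{
        struct iommufd_device *idev = iommufd_device_from_dev(dev);

        if (!idev || !idev->vdev_id)
                return 0;
        return idev->vdev_id->vdev_id;
}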

A driver that reports events/IRQs per device can use these helpers
to report a virq via the core-managed VIOMMU object. (If a driver
has some non-per-device type of IRQ, it would have to allocate a
driver-managed VIOMMU object instead.)
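
Purely illustrative (the driver function and the data_type value are
placeholders, not real code), a per-device event path in an IOMMU
driver could then look like:

/* e.g. in an SMMU driver's event queue handler, once dev is resolved */
static void my_smmu_report_event(struct device *dev, void *evt, size_t evt_len)
{
        /* Skip if user space never set a virtual device ID for this device */
        if (!iommufd_device_get_viommu(dev))
                return;

        /* Core maps dev to (viommu, virtual ID) and queues the event;
         * IOMMU_VIRQ_TYPE_ARM_SMMUV3 is a placeholder data_type here.
         */
        iommufd_device_report_virq(dev, IOMMU_VIRQ_TYPE_ARM_SMMUV3,
                                   evt, evt_len);
}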

I have both a revised VIOMMU series and a new VIRQ series ready, and
will send them out in the coming days after some testing/polishing.

Thanks
Nicolin
