Message-ID: <3b740988-7fe6-4328-8ce2-d66d9a2ab497@linux.intel.com>
Date: Sat, 6 Apr 2024 12:34:14 +0800
From: Baolu Lu <baolu.lu@...ux.intel.com>
To: Jason Gunthorpe <jgg@...dia.com>
Cc: baolu.lu@...ux.intel.com, Kevin Tian <kevin.tian@...el.com>,
 Joerg Roedel <joro@...tes.org>, Will Deacon <will@...nel.org>,
 Robin Murphy <robin.murphy@....com>,
 Jean-Philippe Brucker <jean-philippe@...aro.org>,
 Nicolin Chen <nicolinc@...dia.com>, Yi Liu <yi.l.liu@...el.com>,
 Jacob Pan <jacob.jun.pan@...ux.intel.com>,
 Joel Granados <j.granados@...sung.com>, iommu@...ts.linux.dev,
 virtualization@...ts.linux-foundation.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v4 1/9] iommu: Introduce domain attachment handle

On 4/3/24 7:58 PM, Jason Gunthorpe wrote:
> On Wed, Apr 03, 2024 at 09:15:11AM +0800, Lu Baolu wrote:
>> Currently, when attaching a domain to a device or its PASID, domain is
>> stored within the iommu group. It could be retrieved for use during the
>> window between attachment and detachment.
>>
>> With new features introduced, there's a need to store more information
>> than just a domain pointer. This information essentially represents the
>> association between a domain and a device. For example, the SVA code
>> already has a custom struct iommu_sva which represents a bond between
>> sva domain and a PASID of a device. Looking forward, the IOMMUFD needs
>> a place to store the iommufd_device pointer in the core, so that the
>> device object ID could be quickly retrieved in the critical fault handling
>> path.
>>
>> Introduce domain attachment handle that explicitly represents the
>> attachment relationship between a domain and a device or its PASID.
>> A caller-specific data field can be used by the caller to store additional
>> information beyond a domain pointer, depending on its specific use case.
>>
>> Co-developed-by: Jason Gunthorpe<jgg@...dia.com>
>> Signed-off-by: Jason Gunthorpe<jgg@...dia.com>
>> Signed-off-by: Lu Baolu<baolu.lu@...ux.intel.com>
>> ---
>>   drivers/iommu/iommu-priv.h |   9 +++
>>   drivers/iommu/iommu.c      | 158 +++++++++++++++++++++++++++++++++----
>>   2 files changed, 153 insertions(+), 14 deletions(-)
>>
>> diff --git a/drivers/iommu/iommu-priv.h b/drivers/iommu/iommu-priv.h
>> index 5f731d994803..08c0667cef54 100644
>> --- a/drivers/iommu/iommu-priv.h
>> +++ b/drivers/iommu/iommu-priv.h
>> @@ -28,4 +28,13 @@ void iommu_device_unregister_bus(struct iommu_device *iommu,
>>   				 const struct bus_type *bus,
>>   				 struct notifier_block *nb);
>>   
>> +struct iommu_attach_handle {
>> +	struct iommu_domain		*domain;
>> +	refcount_t			users;
> I don't understand how the refcounting can be generally useful. There
> is no way to free this:
> 
>> +	void				*priv;
> When the refcount goes to zero.

This field is set by the caller, so the caller ensures that the pointer
is only freed after iommu domain detachment. For iopf, the caller should
automatically respond to all outstanding iopfs in the domain detach
path.

In the SVA case, which uses a workqueue to handle iopf,
flush_workqueue() is called in the domain detach path to ensure that all
outstanding iopfs are completed before detach finishes.
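To make the lifetime rule concrete, here is a minimal userspace model (not kernel code, and not the proposed API): a caller-owned priv pointer behind an attach handle may only be freed after every outstanding fault has been auto-responded in the detach path. All names (fake_device, fake_detach, etc.) are illustrative.

```c
#include <assert.h>

#define MAX_FAULTS 8

enum slot_state { SLOT_FREE, FAULT_PENDING, FAULT_RESPONDED };

/* Stands in for the caller-owned data hung off handle->priv. */
struct fake_device {
	enum slot_state faults[MAX_FAULTS];	/* per-slot fault state */
	int pending;				/* unanswered faults */
};

/* Stands in for the attach handle: domain omitted, only priv modeled. */
struct fake_handle {
	struct fake_device *priv;
};

/* A fault arrives and is queued against the device. */
static void fault_report(struct fake_device *dev, int slot)
{
	dev->faults[slot] = FAULT_PENDING;
	dev->pending++;
}

/*
 * Detach path: auto-respond (here, mark responded) every outstanding
 * fault. Only once pending reaches zero is it safe for the caller to
 * free the priv pointer the handle refers to.
 */
static void fake_detach(struct fake_handle *h)
{
	struct fake_device *dev = h->priv;

	for (int i = 0; i < MAX_FAULTS && dev->pending; i++) {
		if (dev->faults[i] == FAULT_PENDING) {
			dev->faults[i] = FAULT_RESPONDED;
			dev->pending--;
		}
	}
}
```

The kernel analogue of the final loop is the auto-response (iommufd) or flush_workqueue() drain (SVA) described above; the invariant is the same either way: no fault may still reference priv when detach returns.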

For iommufd, perhaps I need to add the following code in the detach path
(and do the same in the replace path):

--- a/drivers/iommu/iommufd/fault.c
+++ b/drivers/iommu/iommufd/fault.c
@@ -68,14 +68,35 @@ int iommufd_fault_domain_attach_dev(struct iommufd_hw_pagetable *hwpt,
 	return 0;
 }
 
+static void iommufd_auto_response_handle(struct iommufd_fault *fault,
+					 struct iommu_attach_handle *handle)
+{
+	struct iommufd_device *idev = handle->priv;
+	struct iopf_group *group;
+	unsigned long index;
+
+	mutex_lock(&fault->mutex);
+	xa_for_each(&idev->faults, index, group) {
+		xa_erase(&idev->faults, index);
+		iopf_group_response(group, IOMMU_PAGE_RESP_INVALID);
+	}
+	mutex_unlock(&fault->mutex);
+}
+
 void iommufd_fault_domain_detach_dev(struct iommufd_hw_pagetable *hwpt,
 				     struct iommufd_device *idev)
 {
+	struct iommufd_fault *fault = hwpt->fault;
+	struct iommu_attach_handle *handle;
+
 	if (WARN_ON(!hwpt->fault_capable))
 		return;
 
+	handle = iommu_attach_handle_get(idev->igroup->group, IOMMU_NO_PASID);
 	iommu_detach_group(hwpt->domain, idev->igroup->group);
 	iommufd_fault_iopf_disable(idev);
+	iommufd_auto_response_handle(fault, handle);
+	iommu_attach_handle_put(handle);
 }

Best regards,
baolu
