Message-ID: <7fc396d5-e2bd-b126-b3a6-88f8033c14b4@linux.intel.com>
Date: Fri, 11 Aug 2023 10:21:20 +0800
From: Baolu Lu <baolu.lu@...ux.intel.com>
To: Jason Gunthorpe <jgg@...pe.ca>
Cc: baolu.lu@...ux.intel.com, Joerg Roedel <joro@...tes.org>,
Will Deacon <will@...nel.org>,
Robin Murphy <robin.murphy@....com>,
Kevin Tian <kevin.tian@...el.com>,
Jean-Philippe Brucker <jean-philippe@...aro.org>,
Nicolin Chen <nicolinc@...dia.com>,
Yi Liu <yi.l.liu@...el.com>,
Jacob Pan <jacob.jun.pan@...ux.intel.com>,
iommu@...ts.linux.dev, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 10/12] iommu: Make iommu_queue_iopf() more generic
On 2023/8/11 3:07, Jason Gunthorpe wrote:
> On Thu, Jul 27, 2023 at 01:48:35PM +0800, Lu Baolu wrote:
>> @@ -137,6 +136,16 @@ int iommu_queue_iopf(struct iommu_fault *fault, struct device *dev)
>> return 0;
>> }
>>
>> + if (fault->prm.flags & IOMMU_FAULT_PAGE_REQUEST_PASID_VALID)
>> + domain = iommu_get_domain_for_dev_pasid(dev, fault->prm.pasid, 0);
>> + else
>> + domain = iommu_get_domain_for_dev(dev);
>
> How does the lifetime work for this? What prevents UAF on domain?
Replied below.
>
>> diff --git a/drivers/iommu/iommu-sva.c b/drivers/iommu/iommu-sva.c
>> index ab42cfdd7636..668f4c2bcf65 100644
>> --- a/drivers/iommu/iommu-sva.c
>> +++ b/drivers/iommu/iommu-sva.c
>> @@ -157,7 +157,7 @@ EXPORT_SYMBOL_GPL(iommu_sva_get_pasid);
>> /*
>> * I/O page fault handler for SVA
>> */
>> -enum iommu_page_response_code
>> +static enum iommu_page_response_code
>> iommu_sva_handle_iopf(struct iommu_fault *fault, void *data)
>> {
>> vm_fault_t ret;
>> @@ -241,23 +241,16 @@ static void iopf_handler(struct work_struct *work)
>> {
>> struct iopf_fault *iopf;
>> struct iopf_group *group;
>> - struct iommu_domain *domain;
>> enum iommu_page_response_code status = IOMMU_PAGE_RESP_SUCCESS;
>>
>> group = container_of(work, struct iopf_group, work);
>> - domain = iommu_get_domain_for_dev_pasid(group->dev,
>> - group->last_fault.fault.prm.pasid, 0);
>> - if (!domain || !domain->iopf_handler)
>> - status = IOMMU_PAGE_RESP_INVALID;
>> -
>> list_for_each_entry(iopf, &group->faults, list) {
>> /*
>> * For the moment, errors are sticky: don't handle subsequent
>> * faults in the group if there is an error.
>> */
>> if (status == IOMMU_PAGE_RESP_SUCCESS)
>> - status = domain->iopf_handler(&iopf->fault,
>> - domain->fault_data);
>> + status = iommu_sva_handle_iopf(&iopf->fault, group->data);
>> }
>>
>> iopf_complete_group(group->dev, &group->last_fault, status);
>> diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
>> index 157a28a49473..535a36e3edc9 100644
>> --- a/drivers/iommu/iommu.c
>> +++ b/drivers/iommu/iommu.c
>> @@ -3330,7 +3330,7 @@ struct iommu_domain *iommu_sva_domain_alloc(struct device *dev,
>> domain->type = IOMMU_DOMAIN_SVA;
>> mmgrab(mm);
>> domain->mm = mm;
>> - domain->iopf_handler = iommu_sva_handle_iopf;
>> + domain->iopf_handler = iommu_sva_handle_iopf_group;
>> domain->fault_data = mm;
>
> This also has lifetime problems on the mm.
>
> The domain should flow into the iommu_sva_handle_iopf() instead of the
> void *data.
Okay, but I still want to keep void *data as a private pointer of the
iopf consumer. For SVA, it's probably NULL.
>
> The SVA code can then just use domain->mm directly.
Yes.
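As a rough sketch of what I have in mind (mock types only, not the real
kernel structs, and sva_handle_iopf() is a hypothetical name for
illustration): the domain flows into the handler as a parameter, so the
SVA code can read domain->mm directly instead of fishing it out of a
void *data cookie:

```c
#include <assert.h>
#include <stddef.h>

/* Mock types standing in for the kernel structs -- illustration only. */
struct mm_struct { int dummy; };

struct iommu_domain {
	struct mm_struct *mm;	/* owning mm for SVA domains */
};

struct iommu_fault { unsigned long addr; };

enum iommu_page_response_code {
	IOMMU_PAGE_RESP_SUCCESS,
	IOMMU_PAGE_RESP_INVALID,
};

/*
 * Hypothetical handler shape: the domain is an explicit argument, so
 * the mm no longer needs to be stashed in domain->fault_data.
 */
static enum iommu_page_response_code
sva_handle_iopf(struct iommu_fault *fault, struct iommu_domain *domain)
{
	struct mm_struct *mm = domain->mm;

	if (!mm)
		return IOMMU_PAGE_RESP_INVALID;

	/* the real code would call handle_mm_fault(mm, fault->addr, ...) */
	(void)fault;
	return IOMMU_PAGE_RESP_SUCCESS;
}
```

That would leave void *data free for the iopf consumer's own use.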
>
> We need to document/figure out some how to ensure that the faults are
> all done processing before a fault enabled domain can be freed.
This has been documented in drivers/iommu/io-pgfault.c:

[...]
 * Any valid page fault will be eventually routed to an iommu domain and the
 * page fault handler installed there will get called. The users of this
 * handling framework should guarantee that the iommu domain could only be
 * freed after the device has stopped generating page faults (or the iommu
 * hardware has been set to block the page faults) and the pending page faults
 * have been flushed.
 *
 * Return: 0 on success and <0 on error.
 */
int iommu_queue_iopf(struct iommu_fault *fault, void *cookie)
[...]
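The ordering that comment demands can be shown with a toy model (mock
types and helper names invented for illustration, not kernel APIs):
stop the device (or block its faults in the IOMMU hardware), flush the
pending faults, and only then free the fault-enabled domain:

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the required teardown ordering -- mocks, not kernel APIs. */
struct mock_dev {
	bool faulting;		/* device may still generate page faults */
	int pending;		/* faults queued but not yet completed   */
};

struct mock_domain { bool freed; };

/* Step 1: stop the device (or program the IOMMU to block its faults). */
static void stop_device_faults(struct mock_dev *dev)
{
	dev->faulting = false;
}

/* Step 2: flush faults already queued for the device. */
static void flush_pending_faults(struct mock_dev *dev)
{
	dev->pending = 0;
}

/* Step 3: only after steps 1 and 2 is freeing the domain safe. */
static void free_fault_domain(struct mock_dev *dev, struct mock_domain *d)
{
	assert(!dev->faulting && dev->pending == 0);
	d->freed = true;
}
```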
> This patch would be better ordered before the prior patch.
Let me try this in the next version.
Best regards,
baolu