Message-ID: <faacb997-0c0a-04e1-903d-53857f463214@linux.intel.com>
Date: Mon, 27 Jun 2022 21:03:52 +0800
From: Ethan Zhao <haifeng.zhao@...ux.intel.com>
To: Lu Baolu <baolu.lu@...ux.intel.com>,
Joerg Roedel <joro@...tes.org>,
Jason Gunthorpe <jgg@...dia.com>,
Christoph Hellwig <hch@...radead.org>,
Kevin Tian <kevin.tian@...el.com>,
Ashok Raj <ashok.raj@...el.com>, Will Deacon <will@...nel.org>,
Robin Murphy <robin.murphy@....com>,
Jean-Philippe Brucker <jean-philippe@...aro.com>,
Dave Jiang <dave.jiang@...el.com>,
Vinod Koul <vkoul@...nel.org>
Cc: Eric Auger <eric.auger@...hat.com>, Liu Yi L <yi.l.liu@...el.com>,
Jacob jun Pan <jacob.jun.pan@...el.com>,
iommu@...ts.linux-foundation.org, iommu@...ts.linux.dev,
linux-kernel@...r.kernel.org,
Jean-Philippe Brucker <jean-philippe@...aro.org>
Subject: Re: [PATCH v9 10/11] iommu: Per-domain I/O page fault handling
Hi,
On 2022/6/21 22:43, Lu Baolu wrote:
> Tweak the I/O page fault handling framework to route the page faults to
> the domain and call the page fault handler retrieved from the domain.
> This makes it possible for the I/O page fault handling framework to
> serve more usage scenarios, as long as they have an IOMMU domain and
> install a page fault handler in it. Some unused functions are also
> removed to avoid dead code.
>
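
The per-domain hook referred to above is just a handler plus a context
pointer on the domain (both visible in the hunk below). A minimal
sketch of installing them, with a hypothetical helper name,
example_install_iopf_handler(), since the real setup lives elsewhere
in this series:

	/*
	 * Hypothetical helper, for illustration only: install a page
	 * fault handler and its context on a domain. These are the two
	 * fields iopf_handle_group() consumes in the hunk below.
	 */
	static void
	example_install_iopf_handler(struct iommu_domain *domain,
				     enum iommu_page_response_code
				     (*handler)(struct iommu_fault *, void *),
				     void *data)
	{
		domain->iopf_handler = handler;
		domain->fault_data = data;
	}
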
> The iommu_get_domain_for_dev_pasid() helper, which retrieves the
> attached domain for a {device, PASID} pair, is used by the page fault
> handling framework, which knows the {device, PASID} reported by the
> iommu driver. We have a guarantee that the SVA domain doesn't go away
> during IOPF handling, because unbind() waits for pending faults with
> iopf_queue_flush_dev() before freeing the domain. Hence, there's no
> need to synchronize the life cycle of the iommu domains between the
> unbind() and the interrupt threads.
>
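
The flush-before-free ordering described above is what makes the
unlocked domain lookup in the fault thread safe. A minimal sketch of
that ordering, with a hypothetical example_sva_unbind() standing in
for the real unbind path in this series:

	/*
	 * Hypothetical sketch of the ordering relied on above: drain
	 * every pending fault for the device before the SVA domain is
	 * freed, so the iopf work item never sees a stale domain.
	 */
	static void example_sva_unbind(struct device *dev,
				       struct iommu_domain *domain,
				       ioasid_t pasid)
	{
		iommu_detach_device_pasid(domain, dev, pasid);

		/* Wait for all queued faults on @dev to complete. */
		iopf_queue_flush_dev(dev);

		/* Only now is it safe to free the SVA domain. */
		iommu_domain_free(domain);
	}
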
> Signed-off-by: Lu Baolu <baolu.lu@...ux.intel.com>
> Reviewed-by: Jean-Philippe Brucker <jean-philippe@...aro.org>
> ---
> drivers/iommu/io-pgfault.c | 64 +++++---------------------------------
> 1 file changed, 7 insertions(+), 57 deletions(-)
>
> diff --git a/drivers/iommu/io-pgfault.c b/drivers/iommu/io-pgfault.c
> index aee9e033012f..4f24ec703479 100644
> --- a/drivers/iommu/io-pgfault.c
> +++ b/drivers/iommu/io-pgfault.c
> @@ -69,69 +69,18 @@ static int iopf_complete_group(struct device *dev, struct iopf_fault *iopf,
> return iommu_page_response(dev, &resp);
> }
>
> -static enum iommu_page_response_code
> -iopf_handle_single(struct iopf_fault *iopf)
> -{
> - vm_fault_t ret;
> - struct mm_struct *mm;
> - struct vm_area_struct *vma;
> - unsigned int access_flags = 0;
> - unsigned int fault_flags = FAULT_FLAG_REMOTE;
> - struct iommu_fault_page_request *prm = &iopf->fault.prm;
> - enum iommu_page_response_code status = IOMMU_PAGE_RESP_INVALID;
> -
> - if (!(prm->flags & IOMMU_FAULT_PAGE_REQUEST_PASID_VALID))
> - return status;
> -
> - mm = iommu_sva_find(prm->pasid);
> - if (IS_ERR_OR_NULL(mm))
> - return status;
> -
> - mmap_read_lock(mm);
> -
> - vma = find_extend_vma(mm, prm->addr);
> - if (!vma)
> - /* Unmapped area */
> - goto out_put_mm;
> -
> - if (prm->perm & IOMMU_FAULT_PERM_READ)
> - access_flags |= VM_READ;
> -
> - if (prm->perm & IOMMU_FAULT_PERM_WRITE) {
> - access_flags |= VM_WRITE;
> - fault_flags |= FAULT_FLAG_WRITE;
> - }
> -
> - if (prm->perm & IOMMU_FAULT_PERM_EXEC) {
> - access_flags |= VM_EXEC;
> - fault_flags |= FAULT_FLAG_INSTRUCTION;
> - }
> -
> - if (!(prm->perm & IOMMU_FAULT_PERM_PRIV))
> - fault_flags |= FAULT_FLAG_USER;
> -
> - if (access_flags & ~vma->vm_flags)
> - /* Access fault */
> - goto out_put_mm;
> -
> - ret = handle_mm_fault(vma, prm->addr, fault_flags, NULL);
> - status = ret & VM_FAULT_ERROR ? IOMMU_PAGE_RESP_INVALID :
> - IOMMU_PAGE_RESP_SUCCESS;
> -
> -out_put_mm:
> - mmap_read_unlock(mm);
> - mmput(mm);
> -
> - return status;
> -}
> -
Once iopf_handle_single() is removed, the name of iopf_handle_group()
looks a little weird and confusing. Does this "group" mean the iommu
group (domain)? It took me a few minutes of looking into the code to
see that it actually means a batch/list/queue of iopfs, and that
iopf_handle_group() has become a generic iopf handler.

Does it make sense to revise the names of iopf_handle_group(),
iopf_complete_group() and iopf_group in this patch set?

Thanks,
Ethan
> static void iopf_handle_group(struct work_struct *work)
> {
> struct iopf_group *group;
> + struct iommu_domain *domain;
> struct iopf_fault *iopf, *next;
> enum iommu_page_response_code status = IOMMU_PAGE_RESP_SUCCESS;
>
> group = container_of(work, struct iopf_group, work);
> + domain = iommu_get_domain_for_dev_pasid(group->dev,
> + group->last_fault.fault.prm.pasid);
> + if (!domain || !domain->iopf_handler)
> + status = IOMMU_PAGE_RESP_INVALID;
>
> list_for_each_entry_safe(iopf, next, &group->faults, list) {
> /*
> @@ -139,7 +88,8 @@ static void iopf_handle_group(struct work_struct *work)
> * faults in the group if there is an error.
> */
> if (status == IOMMU_PAGE_RESP_SUCCESS)
> - status = iopf_handle_single(iopf);
> + status = domain->iopf_handler(&iopf->fault,
> + domain->fault_data);
>
> if (!(iopf->fault.prm.flags &
> IOMMU_FAULT_PAGE_REQUEST_LAST_PAGE))
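
For reference, a minimal sketch of what a handler with this signature
can look like for an SVA domain: essentially the removed
iopf_handle_single() with the mm taken from the handler's data pointer
instead of iommu_sva_find(). The function name and the assumption that
fault_data carries the bound mm are illustrative only; the real
handler is added elsewhere in this series.

	static enum iommu_page_response_code
	example_sva_iopf_handler(struct iommu_fault *fault, void *data)
	{
		/* Assumption: the domain's fault_data is the bound mm. */
		struct mm_struct *mm = data;
		struct iommu_fault_page_request *prm = &fault->prm;
		enum iommu_page_response_code status = IOMMU_PAGE_RESP_INVALID;
		unsigned int access_flags = 0;
		unsigned int fault_flags = FAULT_FLAG_REMOTE;
		struct vm_area_struct *vma;
		vm_fault_t ret;

		if (!mmget_not_zero(mm))	/* the process may be exiting */
			return status;

		mmap_read_lock(mm);

		vma = find_extend_vma(mm, prm->addr);
		if (!vma)			/* unmapped area */
			goto out_put_mm;

		if (prm->perm & IOMMU_FAULT_PERM_READ)
			access_flags |= VM_READ;
		if (prm->perm & IOMMU_FAULT_PERM_WRITE) {
			access_flags |= VM_WRITE;
			fault_flags |= FAULT_FLAG_WRITE;
		}
		if (!(prm->perm & IOMMU_FAULT_PERM_PRIV))
			fault_flags |= FAULT_FLAG_USER;

		if (access_flags & ~vma->vm_flags)	/* access fault */
			goto out_put_mm;

		ret = handle_mm_fault(vma, prm->addr, fault_flags, NULL);
		status = ret & VM_FAULT_ERROR ? IOMMU_PAGE_RESP_INVALID :
						IOMMU_PAGE_RESP_SUCCESS;

	out_put_mm:
		mmap_read_unlock(mm);
		mmput(mm);

		return status;
	}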
--
"firm, enduring, strong, and long-lived"