Message-ID: <BN9PR11MB52762A33BC9F41AB424915688CE3A@BN9PR11MB5276.namprd11.prod.outlook.com>
Date: Fri, 25 Aug 2023 08:17:08 +0000
From: "Tian, Kevin" <kevin.tian@...el.com>
To: Lu Baolu <baolu.lu@...ux.intel.com>,
Joerg Roedel <joro@...tes.org>,
"Will Deacon" <will@...nel.org>,
Robin Murphy <robin.murphy@....com>,
"Jason Gunthorpe" <jgg@...pe.ca>,
Jean-Philippe Brucker <jean-philippe@...aro.org>,
Nicolin Chen <nicolinc@...dia.com>
CC: "Liu, Yi L" <yi.l.liu@...el.com>,
Jacob Pan <jacob.jun.pan@...ux.intel.com>,
"iommu@...ts.linux.dev" <iommu@...ts.linux.dev>,
"kvm@...r.kernel.org" <kvm@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: RE: [PATCH v4 09/10] iommu: Make iommu_queue_iopf() more generic
> From: Lu Baolu <baolu.lu@...ux.intel.com>
> Sent: Friday, August 25, 2023 10:30 AM
>
> + if (fault->prm.flags & IOMMU_FAULT_PAGE_REQUEST_PASID_VALID)
> + domain = iommu_get_domain_for_dev_pasid(dev, fault->prm.pasid, 0);
> + else
> + domain = iommu_get_domain_for_dev(dev);
> +
> + if (!domain || !domain->iopf_handler) {
> + dev_warn_ratelimited(dev,
> + "iopf from pasid %d received without handler
> installed\n",
"without domain attached or handler installed"
>
> +int iommu_sva_handle_iopf(struct iopf_group *group)
> +{
> + struct iommu_fault_param *fault_param = group->dev->iommu->fault_param;
> +
> + INIT_WORK(&group->work, iopf_handler);
> + if (!queue_work(fault_param->queue->wq, &group->work))
> + return -EBUSY;
> +
> + return 0;
> +}
This function is generic, so its name should not be tied to 'sva'.
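E.g. (sketch only; 'iopf_queue_work' is just an illustrative name, not something
this patch defines -- the body is the same as the quoted function):

/* illustrative rename only: nothing here is SVA-specific */
int iopf_queue_work(struct iopf_group *group)
{
        struct iommu_fault_param *fault_param = group->dev->iommu->fault_param;

        INIT_WORK(&group->work, iopf_handler);
        if (!queue_work(fault_param->queue->wq, &group->work))
                return -EBUSY;

        return 0;
}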
> +
> /**
> * iopf_queue_flush_dev - Ensure that all queued faults have been processed
> * @dev: the endpoint whose faults need to be flushed.
Presumably we also need a per-domain flush callback: now that use of the
workqueue is optional, flush_workqueue() alone might not be sufficient.
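For instance (sketch only; 'iopf_flush' is a hypothetical per-domain callback,
not an existing member of struct iommu_domain):

static void iopf_flush_dev_domain(struct device *dev, struct iommu_domain *domain)
{
        struct iommu_fault_param *fault_param = dev->iommu->fault_param;

        if (domain && domain->iopf_flush)       /* hypothetical callback */
                domain->iopf_flush(domain, dev);        /* handler drains its own queue */
        else
                flush_workqueue(fault_param->queue->wq);
}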
>
> +static void assert_no_pending_iopf(struct device *dev, ioasid_t pasid)
> +{
> + struct iommu_fault_param *iopf_param = dev->iommu->fault_param;
> + struct iopf_fault *iopf;
> +
> + if (!iopf_param)
> + return;
> +
> + mutex_lock(&iopf_param->lock);
> + list_for_each_entry(iopf, &iopf_param->partial, list) {
> + if (WARN_ON(iopf->fault.prm.pasid == pasid))
> + break;
> + }
The partial list is protected by the dev_iommu lock, not by iopf_param->lock
(see the locking sketch after the quoted function).
> +
> + list_for_each_entry(iopf, &iopf_param->faults, list) {
> + if (WARN_ON(iopf->fault.prm.pasid == pasid))
> + break;
> + }
> + mutex_unlock(&iopf_param->lock);
> +}
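Roughly (sketch only, assuming dev->iommu->lock is the dev_iommu lock meant
above, and keeping iopf_param->lock for the faults list as in the patch):

        /* partial list: walk under the dev_iommu lock */
        mutex_lock(&dev->iommu->lock);
        list_for_each_entry(iopf, &iopf_param->partial, list) {
                if (WARN_ON(iopf->fault.prm.pasid == pasid))
                        break;
        }
        mutex_unlock(&dev->iommu->lock);

        /* faults list: keeps using iopf_param->lock */
        mutex_lock(&iopf_param->lock);
        list_for_each_entry(iopf, &iopf_param->faults, list) {
                if (WARN_ON(iopf->fault.prm.pasid == pasid))
                        break;
        }
        mutex_unlock(&iopf_param->lock);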
> +
> void iommu_detach_device(struct iommu_domain *domain, struct device *dev)
> {
> struct iommu_group *group;
> @@ -1959,6 +1980,7 @@ void iommu_detach_device(struct iommu_domain *domain, struct device *dev)
> if (!group)
> return;
>
> + assert_no_pending_iopf(dev, IOMMU_NO_PASID);
> mutex_lock(&group->mutex);
> if (WARN_ON(domain != group->domain) ||
> WARN_ON(list_count_nodes(&group->devices) != 1))
> @@ -3269,6 +3291,7 @@ void iommu_detach_device_pasid(struct iommu_domain *domain, struct device *dev,
> {
> struct iommu_group *group = iommu_group_get(dev);
>
> + assert_no_pending_iopf(dev, pasid);
This doesn't look correct. A sane driver will stop triggering new page
requests before calling detach, but there may still be pending ones that are
not drained until iopf_queue_flush_dev() is called by
ops->remove_dev_pasid(). Placed here, the check will raise false warnings.
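If the assert is kept, it would only make sense after that drain, e.g.
(sketch only; the tail of the function is abbreviated from the current code,
and __iommu_remove_group_pasid() is where ops->remove_dev_pasid() and hence
iopf_queue_flush_dev() run):

        mutex_lock(&group->mutex);
        __iommu_remove_group_pasid(group, pasid);       /* drains pending faults */
        WARN_ON(xa_erase(&group->pasid_array, pasid) != domain);
        mutex_unlock(&group->mutex);

        /* only after the drain can "no pending iopf" hold */
        assert_no_pending_iopf(dev, pasid);

        iommu_group_put(group);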
> mutex_lock(&group->mutex);
> __iommu_remove_group_pasid(group, pasid);
> WARN_ON(xa_erase(&group->pasid_array, pasid) != domain);
> --
> 2.34.1