Message-ID: <20240221153437.GB13491@ziepe.ca>
Date: Wed, 21 Feb 2024 11:34:37 -0400
From: Jason Gunthorpe <jgg@...pe.ca>
To: Lu Baolu <baolu.lu@...ux.intel.com>
Cc: Joerg Roedel <joro@...tes.org>, Will Deacon <will@...nel.org>,
Robin Murphy <robin.murphy@....com>,
Kevin Tian <kevin.tian@...el.com>,
Huang Jiaqing <jiaqing.huang@...el.com>,
Ethan Zhao <haifeng.zhao@...ux.intel.com>, iommu@...ts.linux.dev,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 2/2] iommu/vt-d: Use device rbtree in iopf reporting path
On Tue, Feb 20, 2024 at 02:59:39PM +0800, Lu Baolu wrote:
> diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
> index acfe27bd3448..6743fe6c7a36 100644
> --- a/drivers/iommu/intel/iommu.c
> +++ b/drivers/iommu/intel/iommu.c
> @@ -4430,8 +4430,11 @@ static struct iommu_device *intel_iommu_probe_device(struct device *dev)
> static void intel_iommu_release_device(struct device *dev)
> {
> struct device_domain_info *info = dev_iommu_priv_get(dev);
> + struct intel_iommu *iommu = info->iommu;
>
> + mutex_lock(&iommu->iopf_lock);
> device_rbtree_remove(info);
> + mutex_unlock(&iommu->iopf_lock);
This seems like a pretty reasonable solution; maybe someday it can
become lockless. This is a fast path, right?
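
Just to illustrate what I mean by lockless (purely a sketch, not against
the real driver structures -- all the demo_* names below are made up): if
the per-IOMMU devices were published in an RCU-protected xarray keyed by
RID, the fault path could do the lookup without taking any sleeping lock,
and the release path would unpublish the entry and free it after a grace
period.

#include <linux/types.h>
#include <linux/xarray.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

struct demo_dev_info {
	u16 rid;
	struct rcu_head rcu;
};

struct demo_iommu {
	struct xarray dev_array;	/* RID -> struct demo_dev_info */
};

/* Fault path: look the device up without taking any sleeping lock. */
static bool demo_report_fault(struct demo_iommu *iommu, u16 rid)
{
	struct demo_dev_info *info;
	bool handled = false;

	rcu_read_lock();
	info = xa_load(&iommu->dev_array, rid);
	if (info) {
		/*
		 * info is only guaranteed to stay valid inside this RCU
		 * read section; a real driver would take a reference here
		 * before queueing the fault for handling.
		 */
		handled = true;
	}
	rcu_read_unlock();
	return handled;
}

/* Release path: unpublish the entry, then free it after a grace period. */
static void demo_release_device(struct demo_iommu *iommu,
				struct demo_dev_info *info)
{
	xa_erase(&iommu->dev_array, info->rid);
	kfree_rcu(info, rcu);
}

Not something this series needs, just where it could go eventually.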
> @@ -691,21 +691,22 @@ static irqreturn_t prq_event_thread(int irq, void *d)
> if (unlikely(req->lpig && !req->rd_req && !req->wr_req))
> goto prq_advance;
>
> - pdev = pci_get_domain_bus_and_slot(iommu->segment,
> - PCI_BUS_NUM(req->rid),
> - req->rid & 0xff);
> /*
> * If prq is to be handled outside iommu driver via receiver of
> * the fault notifiers, we skip the page response here.
> */
> - if (!pdev)
> + mutex_lock(&iommu->iopf_lock);
> + dev = device_rbtree_find(iommu, req->rid);
> + if (!dev) {
> + mutex_unlock(&iommu->iopf_lock);
> goto bad_req;
> + }
Though now we have a mutex and a spinlock covering the same data
structure... It could be optimized some more, but maybe we should
leave micro-optimization aside for now.
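
For example (again only a sketch with made-up ex_* names, not the
driver's real symbols), the fault-path lookup could reuse the spinlock
that already guards the rbtree and pin the found device with a
reference before dropping the lock, which would remove the need for the
separate mutex:

#include <linux/types.h>
#include <linux/spinlock.h>
#include <linux/rbtree.h>
#include <linux/refcount.h>

struct ex_dev {
	struct rb_node node;
	u16 rid;
	refcount_t ref;
};

struct ex_iommu {
	spinlock_t lock;		/* protects the rbtree and lookups */
	struct rb_root device_rbtree;	/* ordered by rid */
};

/*
 * Fault path: search under the same spinlock used for insert/remove and
 * take a reference so the device stays alive after the lock is dropped.
 */
static struct ex_dev *ex_find_get(struct ex_iommu *iommu, u16 rid)
{
	struct ex_dev *found = NULL;
	struct rb_node *n;
	unsigned long flags;

	spin_lock_irqsave(&iommu->lock, flags);
	n = iommu->device_rbtree.rb_node;
	while (n) {
		struct ex_dev *cur = rb_entry(n, struct ex_dev, node);

		if (rid < cur->rid) {
			n = n->rb_left;
		} else if (rid > cur->rid) {
			n = n->rb_right;
		} else {
			refcount_inc(&cur->ref);	/* pin past unlock */
			found = cur;
			break;
		}
	}
	spin_unlock_irqrestore(&iommu->lock, flags);
	return found;
}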
Reviewed-by: Jason Gunthorpe <jgg@...dia.com>
Jason