Message-Id: <20230724222538.3902553-4-jacob.jun.pan@linux.intel.com>
Date: Mon, 24 Jul 2023 15:25:33 -0700
From: Jacob Pan <jacob.jun.pan@...ux.intel.com>
To: LKML <linux-kernel@...r.kernel.org>, iommu@...ts.linux.dev,
"Lu Baolu" <baolu.lu@...ux.intel.com>,
Joerg Roedel <joro@...tes.org>,
Jean-Philippe Brucker <jean-philippe@...aro.com>,
"Robin Murphy" <robin.murphy@....com>
Cc: Jason Gunthorpe <jgg@...dia.com>, "Will Deacon" <will@...nel.org>,
"Tian, Kevin" <kevin.tian@...el.com>, Yi Liu <yi.l.liu@...el.com>,
"Yu, Fenghua" <fenghua.yu@...el.com>,
Tony Luck <tony.luck@...el.com>,
Jacob Pan <jacob.jun.pan@...ux.intel.com>
Subject: [PATCH v11 3/8] iommu/vt-d: Add domain_flush_pasid_iotlb()

From: Lu Baolu <baolu.lu@...ux.intel.com>

The VT-d spec requires using the PASID-based-IOTLB invalidation
descriptor to invalidate the IOTLB and the paging-structure caches for
a first-stage page table. Add a generic helper to do this.

RID2PASID is used if the domain has been attached to a physical device;
otherwise, the real PASIDs that the domain has been attached to will be
used. The real PASID attachment is handled in the subsequent change.

Signed-off-by: Lu Baolu <baolu.lu@...ux.intel.com>
Signed-off-by: Jacob Pan <jacob.jun.pan@...ux.intel.com>
Reviewed-by: Kevin Tian <kevin.tian@...el.com>
---
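For illustration only, here is a rough sketch of how the subsequent
change is expected to extend this helper once real PASID attachment is
wired up. struct dev_pasid_info and the domain->dev_pasids list belong
to that later patch and are shown here purely as an assumption about
its shape, not as part of this diff:

static void domain_flush_pasid_iotlb(struct intel_iommu *iommu,
				     struct dmar_domain *domain, u64 addr,
				     unsigned long npages, bool ih)
{
	u16 did = domain_id_iommu(domain, iommu);
	struct dev_pasid_info *dev_pasid;	/* introduced by the later patch */
	unsigned long flags;

	spin_lock_irqsave(&domain->lock, flags);
	/* Flush the IOTLB for each real PASID attached to this domain. */
	list_for_each_entry(dev_pasid, &domain->dev_pasids, link_domain)
		qi_flush_piotlb(iommu, did, dev_pasid->pasid, addr, npages, ih);

	/* RID2PASID (IOMMU_NO_PASID) covers attachments to physical devices. */
	if (!list_empty(&domain->devices))
		qi_flush_piotlb(iommu, did, IOMMU_NO_PASID, addr, npages, ih);
	spin_unlock_irqrestore(&domain->lock, flags);
}

Taking domain->lock already in this patch, even though the body below
issues only a single RID2PASID flush, keeps the locking in place for
the list walk sketched above.
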
 drivers/iommu/intel/iommu.c | 16 ++++++++++++++--
 1 file changed, 14 insertions(+), 2 deletions(-)

diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
index 89013a2913af..bb8316fec1aa 100644
--- a/drivers/iommu/intel/iommu.c
+++ b/drivers/iommu/intel/iommu.c
@@ -1467,6 +1467,18 @@ static void iommu_flush_dev_iotlb(struct dmar_domain *domain,
 	spin_unlock_irqrestore(&domain->lock, flags);
 }
 
+static void domain_flush_pasid_iotlb(struct intel_iommu *iommu,
+				     struct dmar_domain *domain, u64 addr,
+				     unsigned long npages, bool ih)
+{
+	u16 did = domain_id_iommu(domain, iommu);
+	unsigned long flags;
+
+	spin_lock_irqsave(&domain->lock, flags);
+	qi_flush_piotlb(iommu, did, IOMMU_NO_PASID, addr, npages, ih);
+	spin_unlock_irqrestore(&domain->lock, flags);
+}
+
 static void iommu_flush_iotlb_psi(struct intel_iommu *iommu,
 				  struct dmar_domain *domain,
 				  unsigned long pfn, unsigned int pages,
@@ -1484,7 +1496,7 @@ static void iommu_flush_iotlb_psi(struct intel_iommu *iommu,
 		ih = 1 << 6;
 
 	if (domain->use_first_level) {
-		qi_flush_piotlb(iommu, did, IOMMU_NO_PASID, addr, pages, ih);
+		domain_flush_pasid_iotlb(iommu, domain, addr, pages, ih);
 	} else {
 		unsigned long bitmask = aligned_pages - 1;
 
@@ -1554,7 +1566,7 @@ static void intel_flush_iotlb_all(struct iommu_domain *domain)
 	u16 did = domain_id_iommu(dmar_domain, iommu);
 
 	if (dmar_domain->use_first_level)
-		qi_flush_piotlb(iommu, did, IOMMU_NO_PASID, 0, -1, 0);
+		domain_flush_pasid_iotlb(iommu, dmar_domain, 0, -1, 0);
 	else
 		iommu->flush.flush_iotlb(iommu, did, 0, 0,
 					 DMA_TLB_DSI_FLUSH);
--
2.25.1