Message-ID: <20240819104219.8057-1-LeoLiu-oc@zhaoxin.com>
Date: Mon, 19 Aug 2024 18:42:19 +0800
From: LeoLiu-oc <LeoLiu-oc@...oxin.com>
To: <joro@...tes.org>, <will@...nel.org>, <robin.murphy@....com>,
<iommu@...ts.linux.dev>, <linux-kernel@...r.kernel.org>
CC: <CobeChen@...oxin.com>, <TimGuo@...oxin.com>, <TonyWWang-oc@...oxin.com>,
<leoliu-oc@...oxin.com>, <YeeLi@...oxin.com>, LeoLiuoc
<LeoLiu-oc@...oxin.com>
Subject: [PATCH] iommu/dma: Fix not fully traversing iova reservations bug
From: LeoLiuoc <LeoLiu-oc@...oxin.com>
When multiple devices share the same IOMMU group, a device sorted later
(by Bus:Dev.Func) may be the one that carries the RMRR. If a device
sorted earlier (without an RMRR) has already initialized the group's
iova domain, the later device takes the early "goto done_unlock" path in
iommu_dma_init_domain(). The later device (with the RMRR) then never
runs iova_reserve_iommu_regions() to reserve its RMRR in the group's
iova domain, so other devices in the same group are free to allocate
iovas inside the RMRR, and DMA iova addresses can conflict with it.
We need to make sure every device in the group performs its iova
reservations.
Split the reservation work into iova_reserve_pci_regions() (which
reserves the PCI windows) and iova_reserve_iommu_regions() (which
reserves resv-regions such as the RMRR and the MSI range). Then, when
iovad->start_pfn is already set, jump to the iova_reserve_iommu_regions()
call instead of skipping it, which avoids the problem.
Signed-off-by: LeoLiuoc <LeoLiu-oc@...oxin.com>
---
drivers/iommu/dma-iommu.c | 26 +++++++++++++++++++-------
1 file changed, 19 insertions(+), 7 deletions(-)
diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 7b1dfa0665df..9d40146a63e3 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -559,6 +559,19 @@ static int iova_reserve_pci_windows(struct pci_dev *dev,
return 0;
}
+static int iova_reserve_pci_regions(struct device *dev,
+ struct iommu_domain *domain)
+{
+ struct iommu_dma_cookie *cookie = domain->iova_cookie;
+ struct iova_domain *iovad = &cookie->iovad;
+ int ret = 0;
+
+ if (dev_is_pci(dev))
+ ret = iova_reserve_pci_windows(to_pci_dev(dev), iovad);
+
+ return ret;
+}
+
static int iova_reserve_iommu_regions(struct device *dev,
struct iommu_domain *domain)
{
@@ -568,12 +581,6 @@ static int iova_reserve_iommu_regions(struct device *dev,
LIST_HEAD(resv_regions);
int ret = 0;
- if (dev_is_pci(dev)) {
- ret = iova_reserve_pci_windows(to_pci_dev(dev), iovad);
- if (ret)
- return ret;
- }
-
iommu_get_resv_regions(dev, &resv_regions);
list_for_each_entry(region, &resv_regions, list) {
unsigned long lo, hi;
@@ -707,7 +714,7 @@ static int iommu_dma_init_domain(struct iommu_domain *domain, struct device *dev
}
ret = 0;
- goto done_unlock;
+ goto iova_reserve_iommu;
}
init_iova_domain(iovad, 1UL << order, base_pfn);
@@ -722,6 +729,11 @@ static int iommu_dma_init_domain(struct iommu_domain *domain, struct device *dev
(!device_iommu_capable(dev, IOMMU_CAP_DEFERRED_FLUSH) || iommu_dma_init_fq(domain)))
domain->type = IOMMU_DOMAIN_DMA;
+ ret = iova_reserve_pci_regions(dev, domain);
+ if (ret)
+ goto done_unlock;
+
+iova_reserve_iommu:
ret = iova_reserve_iommu_regions(dev, domain);
done_unlock:
--
2.34.1