Message-Id: <20191229172726.487422938@linuxfoundation.org>
Date: Sun, 29 Dec 2019 18:27:02 +0100
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org, Jerry Snitselaar <jsnitsel@...hat.com>,
Lu Baolu <baolu.lu@...ux.intel.com>,
Joerg Roedel <jroedel@...e.de>
Subject: [PATCH 5.4 369/434] iommu/vt-d: Fix dmar pte read access not set error
From: Lu Baolu <baolu.lu@...ux.intel.com>
commit 75d18385394f56db76845d91a192532aba421875 upstream.
If the default DMA domain of a group doesn't fit a device, the
device will still sit in the group but use a private identity
domain. When map/unmap/iova_to_phys requests come through the
iommu API, the driver should still serve them; otherwise, other
devices in the same group will be impacted. Since the identity
domain is already mapped with the whole available memory space
and the RMRRs, we don't need to worry about any impact on it.
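To illustrate the behaviour, here is a minimal, hypothetical sketch
(not part of this patch) of a consumer mapping one page through a
device's current domain via the core iommu API; the device pointer,
IOVA and physical address are assumptions chosen for the example.
With the DOMAIN_FLAG_LOSE_CHILDREN bail-outs removed below,
intel-iommu serves these calls even when a device in the same group
has been switched to a private identity domain.

#include <linux/errno.h>
#include <linux/iommu.h>

/*
 * Hypothetical consumer, for illustration only: map one page into a
 * device's current (default) domain, check the translation, unmap it.
 */
static int example_map_one_page(struct device *dev, unsigned long iova,
				phys_addr_t paddr)
{
	struct iommu_domain *domain = iommu_get_domain_for_dev(dev);
	int ret;

	if (!domain)
		return -ENODEV;

	/* Used to fail with -EINVAL on a DOMAIN_FLAG_LOSE_CHILDREN domain. */
	ret = iommu_map(domain, iova, paddr, PAGE_SIZE,
			IOMMU_READ | IOMMU_WRITE);
	if (ret)
		return ret;

	/* Used to return 0 (no translation) on such a domain. */
	if (iommu_iova_to_phys(domain, iova) != paddr)
		return -EIO;

	iommu_unmap(domain, iova, PAGE_SIZE);
	return 0;
}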
Link: https://www.spinics.net/lists/iommu/msg40416.html
Cc: Jerry Snitselaar <jsnitsel@...hat.com>
Reported-by: Jerry Snitselaar <jsnitsel@...hat.com>
Fixes: 942067f1b6b97 ("iommu/vt-d: Identify default domains replaced with private")
Cc: stable@...r.kernel.org # v5.3+
Signed-off-by: Lu Baolu <baolu.lu@...ux.intel.com>
Reviewed-by: Jerry Snitselaar <jsnitsel@...hat.com>
Tested-by: Jerry Snitselaar <jsnitsel@...hat.com>
Signed-off-by: Joerg Roedel <jroedel@...e.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
drivers/iommu/intel-iommu.c | 8 --------
1 file changed, 8 deletions(-)
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -5447,9 +5447,6 @@ static int intel_iommu_map(struct iommu_
 	int prot = 0;
 	int ret;
 
-	if (dmar_domain->flags & DOMAIN_FLAG_LOSE_CHILDREN)
-		return -EINVAL;
-
 	if (iommu_prot & IOMMU_READ)
 		prot |= DMA_PTE_READ;
 	if (iommu_prot & IOMMU_WRITE)
@@ -5492,8 +5489,6 @@ static size_t intel_iommu_unmap(struct i
 	/* Cope with horrid API which requires us to unmap more than the
 	   size argument if it happens to be a large-page mapping. */
 	BUG_ON(!pfn_to_dma_pte(dmar_domain, iova >> VTD_PAGE_SHIFT, &level));
-	if (dmar_domain->flags & DOMAIN_FLAG_LOSE_CHILDREN)
-		return 0;
 
 	if (size < VTD_PAGE_SIZE << level_to_offset_bits(level))
 		size = VTD_PAGE_SIZE << level_to_offset_bits(level);
@@ -5525,9 +5520,6 @@ static phys_addr_t intel_iommu_iova_to_p
 	int level = 0;
 	u64 phys = 0;
 
-	if (dmar_domain->flags & DOMAIN_FLAG_LOSE_CHILDREN)
-		return 0;
-
 	pte = pfn_to_dma_pte(dmar_domain, iova >> VTD_PAGE_SHIFT, &level);
 	if (pte)
 		phys = dma_pte_addr(pte);