Message-ID: <20191211163511.gjju2s3yy4sus44w@cantor>
Date: Wed, 11 Dec 2019 09:35:11 -0700
From: Jerry Snitselaar <jsnitsel@...hat.com>
To: Lu Baolu <baolu.lu@...ux.intel.com>
Cc: Joerg Roedel <joro@...tes.org>,
David Woodhouse <dwmw2@...radead.org>, ashok.raj@...el.com,
jacob.jun.pan@...el.com, kevin.tian@...el.com,
iommu@...ts.linux-foundation.org, linux-kernel@...r.kernel.org,
stable@...r.kernel.org
Subject: Re: [PATCH 1/1] iommu/vt-d: Fix dmar pte read access not set error
On Wed Dec 11 19, Lu Baolu wrote:
>If the default DMA domain of a group doesn't fit a device, the
>device will still sit in the group but use a private identity
>domain. When map/unmap/iova_to_phys requests come through the
>iommu API, the driver should still serve them; otherwise, other
>devices in the same group will be impacted. Since an identity
>domain is mapped with the whole available memory space and the
>RMRRs, we don't need to worry about the impact on it.
>
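For context, a much-simplified sketch of the dispatch path as I
understand it from drivers/iommu/iommu.c around v5.3 (the real
iommu_map() validates its arguments and splits the range by
supported page sizes before calling the vendor op):

/*
 * Simplified sketch, not the actual kernel code: the generic iommu
 * core forwards a map request on a group's domain straight to the
 * vendor op, so an early -EINVAL in intel_iommu_map() fails the
 * call for every device attached to that domain, not just the
 * misfit one.
 */
int iommu_map(struct iommu_domain *domain, unsigned long iova,
	      phys_addr_t paddr, size_t size, int prot)
{
	/* ... argument checks and pgsize splitting elided ... */
	return domain->ops->map(domain, iova, paddr, size, prot);
}
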
Does this pose any potential issues in the reverse case, where the
group has a default identity domain that the first device fits, but
a later device in the group needs DMA and gets a private DMA
domain? A toy sketch of what I mean is below.
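
To make that reverse case concrete, here is a toy model (plain C,
all names hypothetical, not kernel code) of the relationship I am
asking about:

/*
 * Toy model, not kernel code: the group's default domain is
 * identity, but devB translates through its own private DMA
 * domain, so iommu API calls made against the group's default
 * domain never touch devB's page tables.
 */
#include <stdio.h>

enum domain_type { IDENTITY, DMA_REMAP };

struct domain { enum domain_type type; };

struct device {
	const char *name;
	struct domain *domain;	/* what the device actually uses */
};

int main(void)
{
	struct domain def  = { IDENTITY };	/* group default */
	struct domain priv = { DMA_REMAP };	/* private for devB */
	struct device a = { "devA", &def };	/* fits identity */
	struct device b = { "devB", &priv };	/* needs DMA */

	printf("%s uses the group default domain\n", a.name);
	printf("%s uses a %s domain\n", b.name,
	       b.domain == &def ? "default" : "private DMA");
	return 0;
}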
>Link: https://www.spinics.net/lists/iommu/msg40416.html
>Cc: Jerry Snitselaar <jsnitsel@...hat.com>
>Reported-by: Jerry Snitselaar <jsnitsel@...hat.com>
>Fixes: 942067f1b6b97 ("iommu/vt-d: Identify default domains replaced with private")
>Cc: stable@...r.kernel.org # v5.3+
>Signed-off-by: Lu Baolu <baolu.lu@...ux.intel.com>
>---
> drivers/iommu/intel-iommu.c | 8 --------
> 1 file changed, 8 deletions(-)
>
>diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
>index 0c8d81f56a30..b73bebea9148 100644
>--- a/drivers/iommu/intel-iommu.c
>+++ b/drivers/iommu/intel-iommu.c
>@@ -5478,9 +5478,6 @@ static int intel_iommu_map(struct iommu_domain *domain,
> int prot = 0;
> int ret;
>
>- if (dmar_domain->flags & DOMAIN_FLAG_LOSE_CHILDREN)
>- return -EINVAL;
>-
> if (iommu_prot & IOMMU_READ)
> prot |= DMA_PTE_READ;
> if (iommu_prot & IOMMU_WRITE)
>@@ -5523,8 +5520,6 @@ static size_t intel_iommu_unmap(struct iommu_domain *domain,
> /* Cope with horrid API which requires us to unmap more than the
> size argument if it happens to be a large-page mapping. */
> BUG_ON(!pfn_to_dma_pte(dmar_domain, iova >> VTD_PAGE_SHIFT, &level));
>- if (dmar_domain->flags & DOMAIN_FLAG_LOSE_CHILDREN)
>- return 0;
>
> if (size < VTD_PAGE_SIZE << level_to_offset_bits(level))
> size = VTD_PAGE_SIZE << level_to_offset_bits(level);
>@@ -5556,9 +5551,6 @@ static phys_addr_t intel_iommu_iova_to_phys(struct iommu_domain *domain,
> int level = 0;
> u64 phys = 0;
>
>- if (dmar_domain->flags & DOMAIN_FLAG_LOSE_CHILDREN)
>- return 0;
>-
> pte = pfn_to_dma_pte(dmar_domain, iova >> VTD_PAGE_SHIFT, &level);
> if (pte)
> phys = dma_pte_addr(pte);
>--
>2.17.1
>