Message-ID: <6f3bcad9-b9b3-b349-fdad-ce53a79a665b@linux.intel.com>
Date: Thu, 12 Dec 2019 10:12:53 +0800
From: Lu Baolu <baolu.lu@...ux.intel.com>
To: Joerg Roedel <joro@...tes.org>,
David Woodhouse <dwmw2@...radead.org>, ashok.raj@...el.com,
jacob.jun.pan@...el.com, kevin.tian@...el.com,
iommu@...ts.linux-foundation.org, linux-kernel@...r.kernel.org,
stable@...r.kernel.org
Cc: baolu.lu@...ux.intel.com
Subject: Re: [PATCH 1/1] iommu/vt-d: Fix dmar pte read access not set error
Hi,
On 12/12/19 9:49 AM, Jerry Snitselaar wrote:
> On Wed Dec 11 19, Lu Baolu wrote:
>> If the default DMA domain of a group doesn't fit a device, the
>> device will still sit in the group but use a private identity
>> domain. When map/unmap/iova_to_phys requests come through the
>> iommu API, the driver should still serve them; otherwise, other
>> devices in the same group will be impacted. Since the identity
>> domain has already been mapped with the whole available memory
>> space and the RMRRs, we don't need to worry about the impact on
>> it.
>>
>> Link: https://www.spinics.net/lists/iommu/msg40416.html
>> Cc: Jerry Snitselaar <jsnitsel@...hat.com>
>> Reported-by: Jerry Snitselaar <jsnitsel@...hat.com>
>> Fixes: 942067f1b6b97 ("iommu/vt-d: Identify default domains replaced with private")
>> Cc: stable@...r.kernel.org # v5.3+
>> Signed-off-by: Lu Baolu <baolu.lu@...ux.intel.com>
>
> Reviewed-by: Jerry Snitselaar <jsnitsel@...hat.com>
Can you please try this fix and check whether it resolves your problem?
If it does, do you mind adding a Tested-by? A rough sketch of the iommu
API call path this patch affects is appended after the quoted patch below.
Best regards,
baolu
>
>> ---
>> drivers/iommu/intel-iommu.c | 8 --------
>> 1 file changed, 8 deletions(-)
>>
>> diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
>> index 0c8d81f56a30..b73bebea9148 100644
>> --- a/drivers/iommu/intel-iommu.c
>> +++ b/drivers/iommu/intel-iommu.c
>> @@ -5478,9 +5478,6 @@ static int intel_iommu_map(struct iommu_domain *domain,
>> int prot = 0;
>> int ret;
>>
>> - if (dmar_domain->flags & DOMAIN_FLAG_LOSE_CHILDREN)
>> - return -EINVAL;
>> -
>> if (iommu_prot & IOMMU_READ)
>> prot |= DMA_PTE_READ;
>> if (iommu_prot & IOMMU_WRITE)
>> @@ -5523,8 +5520,6 @@ static size_t intel_iommu_unmap(struct iommu_domain *domain,
>> /* Cope with horrid API which requires us to unmap more than the
>> size argument if it happens to be a large-page mapping. */
>> BUG_ON(!pfn_to_dma_pte(dmar_domain, iova >> VTD_PAGE_SHIFT, &level));
>> - if (dmar_domain->flags & DOMAIN_FLAG_LOSE_CHILDREN)
>> - return 0;
>>
>> if (size < VTD_PAGE_SIZE << level_to_offset_bits(level))
>> size = VTD_PAGE_SIZE << level_to_offset_bits(level);
>> @@ -5556,9 +5551,6 @@ static phys_addr_t intel_iommu_iova_to_phys(struct iommu_domain *domain,
>> int level = 0;
>> u64 phys = 0;
>>
>> - if (dmar_domain->flags & DOMAIN_FLAG_LOSE_CHILDREN)
>> - return 0;
>> -
>> pte = pfn_to_dma_pte(dmar_domain, iova >> VTD_PAGE_SHIFT, &level);
>> if (pte)
>> phys = dma_pte_addr(pte);
>> --
>> 2.17.1
>>
>
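
For anyone following along, here is a minimal sketch (not part of the
patch) of how a request issued through the generic iommu API lands in
the three VT-d callbacks touched above. The helper
example_check_mapping() and its parameters are made up for
illustration; only iommu_get_domain_for_dev(), iommu_map(),
iommu_iova_to_phys() and iommu_unmap() are real kernel APIs.

#include <linux/device.h>
#include <linux/errno.h>
#include <linux/iommu.h>

/*
 * Illustrative only: map a region through the generic iommu API, read
 * the translation back, then tear it down.  These calls are routed to
 * the driver's ->map, ->iova_to_phys and ->unmap ops, i.e.
 * intel_iommu_map(), intel_iommu_iova_to_phys() and intel_iommu_unmap()
 * in this driver.  Before this fix, a domain carrying
 * DOMAIN_FLAG_LOSE_CHILDREN bailed out early in those ops, so every
 * other device attached to the same group's domain lost the service too.
 */
static int example_check_mapping(struct device *dev, unsigned long iova,
				 phys_addr_t paddr, size_t size)
{
	struct iommu_domain *domain = iommu_get_domain_for_dev(dev);
	int ret;

	if (!domain)
		return -ENODEV;

	/* Reaches intel_iommu_map(). */
	ret = iommu_map(domain, iova, paddr, size, IOMMU_READ | IOMMU_WRITE);
	if (ret)
		return ret;

	/* Reaches intel_iommu_iova_to_phys(); reports the real translation. */
	if (iommu_iova_to_phys(domain, iova) != paddr) {
		iommu_unmap(domain, iova, size);
		return -EIO;
	}

	/* Reaches intel_iommu_unmap(). */
	iommu_unmap(domain, iova, size);
	return 0;
}

With the DOMAIN_FLAG_LOSE_CHILDREN early returns removed, all three
calls above keep working on the group's domain even when one device in
the group has fallen back to a private identity domain.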