Date:   Thu, 15 Apr 2021 13:49:33 +0800
From:   Lu Baolu <baolu.lu@...ux.intel.com>
To:     "Longpeng(Mike)" <longpeng2@...wei.com>,
        iommu@...ts.linux-foundation.org, linux-kernel@...r.kernel.org
Cc:     baolu.lu@...ux.intel.com, David Woodhouse <dwmw2@...radead.org>,
        Nadav Amit <nadav.amit@...il.com>,
        Alex Williamson <alex.williamson@...hat.com>,
        Joerg Roedel <joro@...tes.org>,
        Kevin Tian <kevin.tian@...el.com>,
        Gonglei <arei.gonglei@...wei.com>, stable@...r.kernel.org
Subject: Re: [PATCH v2] iommu/vt-d: Force to flush iotlb before creating
 superpage

Hi Longpeng,

On 4/15/21 8:46 AM, Longpeng(Mike) wrote:
> The translation caches may preserve obsolete data when the
> mapping size is changed. The following sequence reveals the
> problem with high probability:
> 
> 1.mmap(4GB,MAP_HUGETLB)
> 2.
>    while (1) {
>     (a)    DMA MAP   0,0xa0000
>     (b)    DMA UNMAP 0,0xa0000
>     (c)    DMA MAP   0,0xc0000000
>               * A DMA read of IOVA 0 may fail here (not present)
>               * if the problem occurs.
>     (d)    DMA UNMAP 0,0xc0000000
>    }
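
For reference, the map/unmap pattern above corresponds to roughly the
following VFIO type1 calls. This is only a sketch: the container/group
setup and all error handling are omitted, "container" is a placeholder
fd, and hugepages must already be reserved for the MAP_HUGETLB
allocation.

#define _GNU_SOURCE
#include <stdint.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/vfio.h>

static void dma_map(int container, void *vaddr, uint64_t iova, uint64_t size)
{
	struct vfio_iommu_type1_dma_map map = {
		.argsz = sizeof(map),
		.flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
		.vaddr = (uintptr_t)vaddr,
		.iova  = iova,
		.size  = size,
	};

	ioctl(container, VFIO_IOMMU_MAP_DMA, &map);
}

static void dma_unmap(int container, uint64_t iova, uint64_t size)
{
	struct vfio_iommu_type1_dma_unmap unmap = {
		.argsz = sizeof(unmap),
		.iova  = iova,
		.size  = size,
	};

	ioctl(container, VFIO_IOMMU_UNMAP_DMA, &unmap);
}

static void repro_loop(int container)
{
	/* 4GB hugetlb-backed buffer, as in step 1 above */
	void *buf = mmap(NULL, 4UL << 30, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);

	for (;;) {
		dma_map(container, buf, 0, 0xa0000);     /* (a) */
		dma_unmap(container, 0, 0xa0000);        /* (b) */
		dma_map(container, buf, 0, 0xc0000000);  /* (c) device reads of
							    IOVA 0 may fault here */
		dma_unmap(container, 0, 0xc0000000);     /* (d) */
	}
}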
> 
> The page table (only the IOVA 0 walk is shown) after (a) is:
>   PML4: 0x19db5c1003   entry:0xffff899bdcd2f000
>    PDPE: 0x1a1cacb003  entry:0xffff89b35b5c1000
>     PDE: 0x1a30a72003  entry:0xffff89b39cacb000
>      PTE: 0x21d200803  entry:0xffff89b3b0a72000
> 
> The page table after (b) is:
>   PML4: 0x19db5c1003   entry:0xffff899bdcd2f000
>    PDPE: 0x1a1cacb003  entry:0xffff89b35b5c1000
>     PDE: 0x1a30a72003  entry:0xffff89b39cacb000
>      PTE: 0x0          entry:0xffff89b3b0a72000
> 
> The page table after (c) is:
>   PML4: 0x19db5c1003   entry:0xffff899bdcd2f000
>    PDPE: 0x1a1cacb003  entry:0xffff89b35b5c1000
>     PDE: 0x21d200883   entry:0xffff89b39cacb000 (*)
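
As a side note, decoding those raw values with the second-level PTE
bits as I read them in include/linux/intel-iommu.h (READ = bit 0,
WRITE = bit 1, large page/PS = bit 7, SNP = bit 11) shows exactly what
changed. A small standalone decoder, purely for illustration:

#include <stdint.h>
#include <stdio.h>

#define DMA_PTE_READ       (1ULL << 0)
#define DMA_PTE_WRITE      (1ULL << 1)
#define DMA_PTE_LARGE_PAGE (1ULL << 7)   /* PS: entry maps a superpage */
#define DMA_PTE_SNP        (1ULL << 11)  /* snoop behaviour            */

static void decode(const char *name, uint64_t val)
{
	printf("%-14s addr=%#llx flags:%s%s%s%s\n", name,
	       (unsigned long long)(val & ~0xfffULL),
	       val & DMA_PTE_READ       ? " R"   : "",
	       val & DMA_PTE_WRITE      ? " W"   : "",
	       val & DMA_PTE_SNP        ? " SNP" : "",
	       val & DMA_PTE_LARGE_PAGE ? " PS"  : "");
}

int main(void)
{
	decode("PDE after (a)", 0x1a30a72003ULL); /* no PS: points to the 4K table */
	decode("PTE after (a)", 0x021d200803ULL); /* 4K mapping of IOVA 0          */
	decode("PDE after (c)", 0x021d200883ULL); /* PS set: direct 2M superpage   */
	return 0;
}

The cached copy of the first value still says "non-leaf, walk the table
at 0x1a30a72000", which is why a stale entry leads straight to the
not-present fault.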
> 
> Because the PDE entry is still present after (b), it is not
> flushed even though the iommu driver flushes the caches on
> unmap, so obsolete data may remain in the caches and later
> cause a wrong translation.
> 
> However, the PDE entry is eventually switched to a 2M-superpage
> mapping, and it does not become 0x21d200883 in a single step:
> 
> 1. PDE: 0x1a30a72003
> 2. __domain_mapping
>       dma_pte_free_pagetable
>         Set the PDE entry to ZERO
>       Set the PDE entry to 0x21d200883
> 
> So we must flush the caches after the entry is set to ZERO to
> avoid preserving the obsolete information.
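
To make the hazard concrete, here is a tiny toy model (plain C, nothing
VT-d specific; the "cache" just stands in for the IOMMU's cached copy
of the PDE for IOVA 0):

#include <stdint.h>
#include <stdio.h>

static uint64_t pde;          /* the PDE as it sits in memory          */
static uint64_t cached_pde;   /* the IOMMU's cached copy for IOVA 0    */
static int      cache_valid;

/* A page walk that consults the cache first, like the hardware does. */
static uint64_t walk(void)
{
	if (!cache_valid) {
		cached_pde = pde;
		cache_valid = 1;
	}
	return cached_pde;
}

int main(void)
{
	pde = 0x1a30a72003ULL;   /* after (a): PDE points to the 4K table */
	walk();                  /* DMA at (a)/(b) populates the cache    */

	pde = 0;                 /* (c): dma_pte_free_pagetable() clears it */
	/* no flush here, so cache_valid stays set                        */
	pde = 0x21d200883ULL;    /* (c): the 2M superpage entry is written */

	printf("memory has %#llx, hardware still uses %#llx\n",
	       (unsigned long long)pde, (unsigned long long)walk());
	return 0;
}

With the fix, the invalidation happens between the two PDE writes, so
the next walk re-reads the superpage entry from memory.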
> 
> Cc: David Woodhouse <dwmw2@...radead.org>
> Cc: Lu Baolu <baolu.lu@...ux.intel.com>
> Cc: Nadav Amit <nadav.amit@...il.com>
> Cc: Alex Williamson <alex.williamson@...hat.com>
> Cc: Joerg Roedel <joro@...tes.org>
> Cc: Kevin Tian <kevin.tian@...el.com>
> Cc: Gonglei (Arei) <arei.gonglei@...wei.com>
> 
> Fixes: 6491d4d02893 ("intel-iommu: Free old page tables before creating superpage")
> Cc: <stable@...r.kernel.org> # v3.0+
> Link: https://lore.kernel.org/linux-iommu/670baaf8-4ff8-4e84-4be3-030b95ab5a5e@huawei.com/
> Suggested-by: Lu Baolu <baolu.lu@...ux.intel.com>
> Signed-off-by: Longpeng(Mike) <longpeng2@...wei.com>
> ---
> v1 -> v2:
>    - add Joerg
>    - reconstruct the solution based on Baolu's suggestion
> ---
>   drivers/iommu/intel/iommu.c | 52 +++++++++++++++++++++++++++++++++------------
>   1 file changed, 38 insertions(+), 14 deletions(-)
> 
> diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
> index ee09323..881c9f2 100644
> --- a/drivers/iommu/intel/iommu.c
> +++ b/drivers/iommu/intel/iommu.c
> @@ -2289,6 +2289,41 @@ static inline int hardware_largepage_caps(struct dmar_domain *domain,
>   	return level;
>   }
>   
> +/*
> + * Ensure that old small page tables are removed to make room for superpage(s).
> + * We're going to add new large pages, so make sure we don't remove their parent
> + * tables. The IOTLB/devTLBs should be flushed if any PDE/PTEs are cleared.
> + */
> +static void switch_to_super_page(struct dmar_domain *domain,
> +				 unsigned long start_pfn,
> +				 unsigned long end_pfn, int level)
> +{
> +	unsigned long lvl_pages = lvl_to_nr_pages(level);
> +	struct dma_pte *pte = NULL;
> +	int i;
> +
> +	while (start_pfn <= end_pfn) {
> +		if (!pte)
> +			pte = pfn_to_dma_pte(domain, start_pfn, &level);
> +
> +		if (dma_pte_present(pte)) {
> +			dma_pte_free_pagetable(domain, start_pfn,
> +					       start_pfn + lvl_pages - 1,
> +					       level + 1);
> +
> +			for_each_domain_iommu(i, domain)
> +				iommu_flush_iotlb_psi(g_iommus[i], domain,
> +						      start_pfn, lvl_pages,
> +						      0, 0);
> +		}
> +
> +		pte++;
> +		start_pfn += lvl_pages;
> +		if (first_pte_in_page(pte))
> +			pte = NULL;
> +	}
> +}
> +
>   static int
>   __domain_mapping(struct dmar_domain *domain, unsigned long iov_pfn,
>   		 unsigned long phys_pfn, unsigned long nr_pages, int prot)
> @@ -2329,22 +2364,11 @@ static inline int hardware_largepage_caps(struct dmar_domain *domain,
>   				return -ENOMEM;
>   			/* It is large page*/
>   			if (largepage_lvl > 1) {
> -				unsigned long nr_superpages, end_pfn;
> +				unsigned long end_pfn;
>   
>   				pteval |= DMA_PTE_LARGE_PAGE;
> -				lvl_pages = lvl_to_nr_pages(largepage_lvl);
> -
> -				nr_superpages = nr_pages / lvl_pages;
> -				end_pfn = iov_pfn + nr_superpages * lvl_pages - 1;
> -
> -				/*
> -				 * Ensure that old small page tables are
> -				 * removed to make room for superpage(s).
> -				 * We're adding new large pages, so make sure
> -				 * we don't remove their parent tables.
> -				 */
> -				dma_pte_free_pagetable(domain, iov_pfn, end_pfn,
> -						       largepage_lvl + 1);
> +				end_pfn = ((iov_pfn + nr_pages) & level_mask(largepage_lvl)) - 1;
> +				switch_to_super_page(domain, iov_pfn, end_pfn, largepage_lvl);
>   			} else {
>   				pteval &= ~(uint64_t)DMA_PTE_LARGE_PAGE;
>   			}
> 
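
For completeness, here is the new end_pfn computation worked through
with the numbers from step (c) of the reproducer. It assumes
LEVEL_STRIDE == 9 and level_mask()/lvl_to_nr_pages() as I read them in
drivers/iommu/intel/iommu.c; the little program below only mirrors the
arithmetic, it is not the driver code.

#include <stdio.h>

#define LEVEL_STRIDE 9   /* 512 entries per table level */

static unsigned long level_mask(int level)
{
	return -1UL << ((level - 1) * LEVEL_STRIDE);
}

static unsigned long lvl_to_nr_pages(int level)
{
	return 1UL << ((level - 1) * LEVEL_STRIDE);
}

int main(void)
{
	unsigned long iov_pfn  = 0;                    /* IOVA 0             */
	unsigned long nr_pages = 0xc0000000UL >> 12;   /* 3GB in 4K pages    */
	int largepage_lvl = 2;                         /* 2M superpage level */

	unsigned long end_pfn =
		((iov_pfn + nr_pages) & level_mask(largepage_lvl)) - 1;

	/* Prints end_pfn=0xbffff step=0x200: switch_to_super_page() walks
	 * pfn 0 .. 0xbffff in 512-page steps and flushes the IOTLB for
	 * every PDE it has to clear before the 2M entries are written.  */
	printf("end_pfn=%#lx step=%#lx\n", end_pfn,
	       lvl_to_nr_pages(largepage_lvl));
	return 0;
}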

Thank you!

Acked-by: Lu Baolu <baolu.lu@...ux.intel.com>

Best regards,
baolu
