Message-Id: <20111216194209.864233769@clark.kroah.org>
Date:	Fri, 16 Dec 2011 11:40:26 -0800
From:	Greg KH <gregkh@...e.de>
To:	linux-kernel@...r.kernel.org, stable@...r.kernel.org
Cc:	torvalds@...ux-foundation.org, akpm@...ux-foundation.org,
	alan@...rguk.ukuu.org.uk, Allen Kay <allen.m.kay@...el.com>,
	David Woodhouse <David.Woodhouse@...el.com>,
	Youquan Song <youquan.song@...el.com>
Subject: [19/45] intel-iommu: fix superpage support in pfn_to_dma_pte()

3.0-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Allen Kay <allen.m.kay@...el.com>

commit 4399c8bf2b9093696fa8160d79712e7346989c46 upstream.

If target_level == 0, the current code breaks out of the while-loop
only if the SUPERPAGE bit is set. We should also break out if the PTE
is not present; otherwise, KVM calls to iommu_iova_to_phys() will cause
pfn_to_dma_pte() to create mappings for 4KiB pages.
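
For illustration, here is a stand-alone user-space sketch of the old and
new break conditions. The bit layout and the two helpers mirror the ones
in the patch below; the struct definition, main() and the printout are
invented for this example only. With the old test a non-present PTE does
not stop a lookup walk, so the code falls through into the path that
allocates 4KiB page tables; with the new test the walk stops immediately:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct dma_pte {
	uint64_t val;
};

static bool dma_pte_present(struct dma_pte *pte)
{
	return (pte->val & 3) != 0;		/* read and/or write bit set */
}

static bool dma_pte_superpage(struct dma_pte *pte)
{
	return (pte->val & (1 << 7)) != 0;	/* bit 7 = superpage */
}

int main(void)
{
	struct dma_pte not_present = { .val = 0 };	/* nothing mapped here */
	int target_level = 0;		/* lookup only, as in iommu_iova_to_phys() */

	/* Old condition: only a superpage PTE ends the walk. */
	bool old_stops = !target_level && dma_pte_superpage(&not_present);

	/* New condition: a non-present PTE ends the walk as well. */
	bool new_stops = !target_level &&
			 (dma_pte_superpage(&not_present) ||
			  !dma_pte_present(&not_present));

	/* old: 0 -> the walk keeps descending and allocates page tables;
	 * new: 1 -> the walk stops and the lookup simply returns.       */
	printf("old condition stops the walk: %d\n", old_stops);
	printf("new condition stops the walk: %d\n", new_stops);
	return 0;
}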

Signed-off-by: Allen Kay <allen.m.kay@...el.com>
Signed-off-by: David Woodhouse <David.Woodhouse@...el.com>
Signed-off-by: Youquan Song <youquan.song@...el.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@...e.de>

---
 drivers/pci/intel-iommu.c |   17 ++++++++---------
 1 file changed, 8 insertions(+), 9 deletions(-)

--- a/drivers/pci/intel-iommu.c
+++ b/drivers/pci/intel-iommu.c
@@ -307,6 +307,11 @@ static inline bool dma_pte_present(struc
 	return (pte->val & 3) != 0;
 }
 
+static inline bool dma_pte_superpage(struct dma_pte *pte)
+{
+	return (pte->val & (1 << 7));
+}
+
 static inline int first_pte_in_page(struct dma_pte *pte)
 {
 	return !((unsigned long)pte & ~VTD_PAGE_MASK);
@@ -732,29 +737,23 @@ out:
 }
 
 static struct dma_pte *pfn_to_dma_pte(struct dmar_domain *domain,
-				      unsigned long pfn, int large_level)
+				      unsigned long pfn, int target_level)
 {
 	int addr_width = agaw_to_width(domain->agaw) - VTD_PAGE_SHIFT;
 	struct dma_pte *parent, *pte = NULL;
 	int level = agaw_to_level(domain->agaw);
-	int offset, target_level;
+	int offset;
 
 	BUG_ON(!domain->pgd);
 	BUG_ON(addr_width < BITS_PER_LONG && pfn >> addr_width);
 	parent = domain->pgd;
 
-	/* Search pte */
-	if (!large_level)
-		target_level = 1;
-	else
-		target_level = large_level;
-
 	while (level > 0) {
 		void *tmp_page;
 
 		offset = pfn_level_offset(pfn, level);
 		pte = &parent[offset];
-		if (!large_level && (pte->val & DMA_PTE_LARGE_PAGE))
+		if (!target_level && (dma_pte_superpage(pte) || !dma_pte_present(pte)))
 			break;
 		if (level == target_level)
 			break;


--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
