Date:   Tue, 21 Aug 2018 01:16:23 +0000
From:   Bin Yang <bin.yang@...el.com>
To:     tglx@...utronix.de, mingo@...nel.org, hpa@...or.com,
        x86@...nel.org, linux-kernel@...r.kernel.org, peterz@...radead.org,
        dave.hansen@...el.com, mark.gross@...el.com, bin.yang@...el.com
Subject: [PATCH v3 2/5] x86/mm: avoid static_protections() checking if not whole large page attr change

The check whether the requested range is aligned to the start of the
large page and covers the full large page (1G or 2M) should obviously
be done _before_ the static_protections() check, because if the
requested range does not fit, the code decides to split after that
check anyway, regardless of whether static_protections() requires a
different pgprot_val(), so all of the per-page check work is done for
nothing.
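
As a rough illustration (a plain user-space sketch, not the kernel
code itself; the helper name and the PAGE_SHIFT value are made up for
this example), the test that is now done up front is simply:

  #include <stdbool.h>

  #define PAGE_SHIFT 12   /* 4K base pages, as on x86 */

  /*
   * Hypothetical helper: returns true when the request is not aligned
   * to the large page or does not cover all of its base pages, i.e.
   * when a split is unavoidable and the per-page static_protections()
   * walk can be skipped entirely.
   */
  static bool must_split(unsigned long address, unsigned long numpages,
                         unsigned long pmask, unsigned long psize)
  {
          if (address != (address & pmask))
                  return true;    /* not aligned to the large page */
          if (numpages != (psize >> PAGE_SHIFT))
                  return true;    /* does not cover the whole large page */
          return false;
  }

With this ordering the expensive loop over (psize >> PAGE_SHIFT) pages
only runs when preserving the large mapping is still possible.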

The approach and some of the comments came from Thomas Gleixner's
email example of how to do this.

Suggested-by: Thomas Gleixner <tglx@...utronix.de>
Signed-off-by: Bin Yang <bin.yang@...el.com>
---
 arch/x86/mm/pageattr.c | 35 ++++++++++++++++-------------------
 1 file changed, 16 insertions(+), 19 deletions(-)

diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
index 68613fd..091f1d3 100644
--- a/arch/x86/mm/pageattr.c
+++ b/arch/x86/mm/pageattr.c
@@ -645,11 +645,21 @@ try_preserve_large_page(pte_t *kpte, unsigned long address,
 	}
 
 	/*
+	 * If the requested address range is not aligned to the start of
+	 * the large page or does not cover the full range, split it up.
+	 * No matter what the static_protections() check below does, it
+	 * would anyway result in a split after doing all the check work
+	 * for nothing.
+	 */
+	addr = address & pmask;
+	if (address != addr || cpa->numpages != numpages)
+		goto out_unlock;
+
+	/*
 	 * We need to check the full range, whether
 	 * static_protection() requires a different pgprot for one of
 	 * the pages in the range we try to preserve:
 	 */
-	addr = address & pmask;
 	pfn = old_pfn;
 	for (i = 0; i < (psize >> PAGE_SHIFT); i++, addr += PAGE_SIZE, pfn++) {
 		pgprot_t chk_prot = static_protections(req_prot, addr, pfn);
@@ -659,24 +669,11 @@ try_preserve_large_page(pte_t *kpte, unsigned long address,
 	}
 
 
-	/*
-	 * We need to change the attributes. Check, whether we can
-	 * change the large page in one go. We request a split, when
-	 * the address is not aligned and the number of pages is
-	 * smaller than the number of pages in the large page. Note
-	 * that we limited the number of possible pages already to
-	 * the number of pages in the large page.
-	 */
-	if (address == (address & pmask) && cpa->numpages == (psize >> PAGE_SHIFT)) {
-		/*
-		 * The address is aligned and the number of pages
-		 * covers the full page.
-		 */
-		new_pte = pfn_pte(old_pfn, new_prot);
-		__set_pmd_pte(kpte, address, new_pte);
-		cpa->flags |= CPA_FLUSHTLB;
-		do_split = 0;
-	}
+	/* All checks passed. Just change the large mapping entry */
+	new_pte = pfn_pte(old_pfn, new_prot);
+	__set_pmd_pte(kpte, address, new_pte);
+	cpa->flags |= CPA_FLUSHTLB;
+	do_split = 0;
 
 out_unlock:
 	spin_unlock(&pgd_lock);
-- 
2.7.4
