Message-Id: <1534814186-37067-4-git-send-email-bin.yang@intel.com>
Date:   Tue, 21 Aug 2018 01:16:24 +0000
From:   Bin Yang <bin.yang@...el.com>
To:     tglx@...utronix.de, mingo@...nel.org, hpa@...or.com,
        x86@...nel.org, linux-kernel@...r.kernel.org, peterz@...radead.org,
        dave.hansen@...el.com, mark.gross@...el.com, bin.yang@...el.com
Subject: [PATCH v3 3/5] x86/mm: add helper function to check specific protection flags in range

Introduce the needs_static_protections() helper to check specific
protection flags in a range. It calls static_protections() to check
whether any part of the address/len range is forced to change from 'prot'.
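
For illustration, a minimal stand-alone sketch of the per-page check the
helper performs is shown below. The static_protections() stub, the
protection-bit value and the pfn/address constants in it are placeholders
for this example only, not the real kernel definitions:

/*
 * Stand-alone sketch of the per-page walk in needs_static_protections().
 * Compiles as plain user-space C; kernel macros are re-defined locally.
 */
#include <stdbool.h>
#include <stdio.h>

#define PAGE_SHIFT	12
#define PAGE_SIZE	(1UL << PAGE_SHIFT)
#define PAGE_MASK	(~(PAGE_SIZE - 1))
#define PFN_ALIGN(x)	(((x) + PAGE_SIZE - 1) & PAGE_MASK)

typedef unsigned long pgprot_t;
#define pgprot_val(p)	(p)

/*
 * Placeholder for the real static_protections(): pretend one pfn in the
 * range must drop a protection bit (0x2 standing in for a writable bit).
 */
static pgprot_t static_protections(pgprot_t prot, unsigned long address,
				   unsigned long pfn)
{
	(void)address;			/* unused in this sketch */

	if (pfn == 0x101)
		return prot & ~0x2UL;
	return prot;
}

static bool needs_static_protections(pgprot_t prot, unsigned long address,
				     unsigned long len, unsigned long pfn)
{
	unsigned long i;

	address &= PAGE_MASK;
	len = PFN_ALIGN(len);
	for (i = 0; i < (len >> PAGE_SHIFT); i++, address += PAGE_SIZE, pfn++) {
		pgprot_t chk_prot = static_protections(prot, address, pfn);

		if (pgprot_val(chk_prot) != pgprot_val(prot))
			return true;
	}
	return false;
}

int main(void)
{
	/* Check a 2MB range starting at pfn 0x100: the stub flags pfn 0x101. */
	bool ret = needs_static_protections(0x3, 0x100000UL, 2UL << 20, 0x100);

	printf("needs change: %s\n", ret ? "yes" : "no");
	return 0;
}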

Suggested-by: Dave Hansen <dave.hansen@...el.com>
Signed-off-by: Bin Yang <bin.yang@...el.com>
---
 arch/x86/mm/pageattr.c | 36 +++++++++++++++++++++++++++---------
 1 file changed, 27 insertions(+), 9 deletions(-)

diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
index 091f1d3..f630eb4 100644
--- a/arch/x86/mm/pageattr.c
+++ b/arch/x86/mm/pageattr.c
@@ -367,6 +367,30 @@ static inline pgprot_t static_protections(pgprot_t prot, unsigned long address,
 }
 
 /*
+ * static_protections() "forces" page protections for some address
+ * ranges.  Return true if any part of the address/len range is forced
+ * to change from 'prot'.
+ */
+static inline bool
+needs_static_protections(pgprot_t prot, unsigned long address,
+		unsigned long len, unsigned long pfn)
+{
+	int i;
+
+	address &= PAGE_MASK;
+	len = PFN_ALIGN(len);
+	for (i = 0; i < (len >> PAGE_SHIFT); i++, address += PAGE_SIZE, pfn++) {
+		pgprot_t chk_prot = static_protections(prot, address, pfn);
+
+		if (pgprot_val(chk_prot) != pgprot_val(prot))
+			return true;
+	}
+
+	/* static_protections() did not demand any change */
+	return false;
+}
+
+/*
  * Lookup the page table entry for a virtual address in a specific pgd.
  * Return a pointer to the entry and the level of the mapping.
  */
@@ -556,7 +580,7 @@ try_preserve_large_page(pte_t *kpte, unsigned long address,
 	unsigned long nextpage_addr, numpages, pmask, psize, addr, pfn, old_pfn;
 	pte_t new_pte, old_pte, *tmp;
 	pgprot_t old_prot, new_prot, req_prot;
-	int i, do_split = 1;
+	int do_split = 1;
 	enum pg_level level;
 
 	if (cpa->force_split)
@@ -660,14 +684,8 @@ try_preserve_large_page(pte_t *kpte, unsigned long address,
 	 * static_protection() requires a different pgprot for one of
 	 * the pages in the range we try to preserve:
 	 */
-	pfn = old_pfn;
-	for (i = 0; i < (psize >> PAGE_SHIFT); i++, addr += PAGE_SIZE, pfn++) {
-		pgprot_t chk_prot = static_protections(req_prot, addr, pfn);
-
-		if (pgprot_val(chk_prot) != pgprot_val(new_prot))
-			goto out_unlock;
-	}
-
+	if (needs_static_protections(new_prot, addr, psize, old_pfn))
+		goto out_unlock;
 
 	/* All checks passed. Just change the large mapping entry */
 	new_pte = pfn_pte(old_pfn, new_prot);
-- 
2.7.4
