Date:	Tue, 21 Jun 2016 17:47:00 -0700
From:	Kees Cook <keescook@...omium.org>
To:	Ingo Molnar <mingo@...nel.org>
Cc:	Kees Cook <keescook@...omium.org>,
	Thomas Garnier <thgarnie@...gle.com>,
	Andy Lutomirski <luto@...nel.org>, x86@...nel.org,
	Borislav Petkov <bp@...e.de>, Baoquan He <bhe@...hat.com>,
	Yinghai Lu <yinghai@...nel.org>,
	Juergen Gross <jgross@...e.com>,
	Matt Fleming <matt@...eblueprint.co.uk>,
	Toshi Kani <toshi.kani@....com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Dan Williams <dan.j.williams@...el.com>,
	"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
	Dave Hansen <dave.hansen@...ux.intel.com>,
	Xiao Guangrong <guangrong.xiao@...ux.intel.com>,
	Martin Schwidefsky <schwidefsky@...ibm.com>,
	"Aneesh Kumar K.V" <aneesh.kumar@...ux.vnet.ibm.com>,
	Alexander Kuleshov <kuleshovmail@...il.com>,
	Alexander Popov <alpopov@...ecurity.com>,
	Dave Young <dyoung@...hat.com>, Joerg Roedel <jroedel@...e.de>,
	Lv Zheng <lv.zheng@...el.com>,
	Mark Salter <msalter@...hat.com>,
	Dmitry Vyukov <dvyukov@...gle.com>,
	Stephen Smalley <sds@...ho.nsa.gov>,
	Boris Ostrovsky <boris.ostrovsky@...cle.com>,
	Christian Borntraeger <borntraeger@...ibm.com>,
	Jan Beulich <JBeulich@...e.com>, linux-kernel@...r.kernel.org,
	Jonathan Corbet <corbet@....net>, linux-doc@...r.kernel.org,
	kernel-hardening@...ts.openwall.com
Subject: [PATCH v7 3/9] x86/mm: PUD VA support for physical mapping (x86_64)

From: Thomas Garnier <thgarnie@...gle.com>

Minor change that allows the early boot physical mapping code to handle
PUD-level virtual addresses that are not PUD-aligned. The current
implementation expects the virtual address to be PUD-aligned. For KASLR
memory randomization, we need to be able to randomize the offset used
within the PUD table.

It has no impact on current usage.
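
As a rough illustration (not part of the patch), the user-space sketch
below models pud_index() and the direct-mapping base with simplified
stand-ins; the constants mirror x86_64 defaults and the shifted base is a
hypothetical KASLR placement. It shows that a PUD slot computed from the
physical address alone only matches the real slot while the base stays
PGD-aligned, which is why the index has to come from __va(paddr) once the
base can move at PUD granularity:

	#include <stdio.h>

	#define PUD_SHIFT    30                      /* each PUD entry maps 1 GB */
	#define PUD_SIZE     (1ULL << PUD_SHIFT)
	#define PTRS_PER_PUD 512

	/* Simplified stand-in for the kernel's pud_index() */
	static unsigned long long pud_index(unsigned long long vaddr)
	{
		return (vaddr >> PUD_SHIFT) & (PTRS_PER_PUD - 1);
	}

	int main(void)
	{
		unsigned long long paddr   = 0x1000000ULL;          /* 16 MB physical address */
		unsigned long long base    = 0xffff880000000000ULL; /* default __PAGE_OFFSET (PGD-aligned) */
		unsigned long long shifted = base + 3 * PUD_SIZE;   /* hypothetical randomized base */

		/* Old code: index derived from the physical address alone */
		printf("pud_index(paddr)           = %llu\n", pud_index(paddr));           /* 0 */
		/* Matches as long as the direct-map base is PGD-aligned */
		printf("pud_index(base + paddr)    = %llu\n", pud_index(base + paddr));    /* 0 */
		/* Diverges once the base is shifted by whole PUD entries */
		printf("pud_index(shifted + paddr) = %llu\n", pud_index(shifted + paddr)); /* 3 */
		return 0;
	}

This is also why existing callers are unaffected: with the default,
PGD-aligned base, the index computed from __va(paddr) is identical to the
one previously computed from paddr.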

Signed-off-by: Thomas Garnier <thgarnie@...gle.com>
Signed-off-by: Kees Cook <keescook@...omium.org>
---
 arch/x86/mm/init_64.c | 13 +++++++++----
 1 file changed, 9 insertions(+), 4 deletions(-)

diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 6714712bd5da..7bf1ddb54537 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -465,7 +465,8 @@ phys_pmd_init(pmd_t *pmd_page, unsigned long paddr, unsigned long paddr_end,
 
 /*
  * Create PUD level page table mapping for physical addresses. The virtual
- * and physical address have to be aligned at this level.
+ * and physical address do not have to be aligned at this level. KASLR can
+ * randomize virtual addresses up to this level.
  * It returns the last physical address mapped.
  */
 static unsigned long __meminit
@@ -474,14 +475,18 @@ phys_pud_init(pud_t *pud_page, unsigned long paddr, unsigned long paddr_end,
 {
 	unsigned long pages = 0, paddr_next;
 	unsigned long paddr_last = paddr_end;
-	int i = pud_index(paddr);
+	unsigned long vaddr = (unsigned long)__va(paddr);
+	int i = pud_index(vaddr);
 
 	for (; i < PTRS_PER_PUD; i++, paddr = paddr_next) {
-		pud_t *pud = pud_page + pud_index(paddr);
+		pud_t *pud;
 		pmd_t *pmd;
 		pgprot_t prot = PAGE_KERNEL;
 
+		vaddr = (unsigned long)__va(paddr);
+		pud = pud_page + pud_index(vaddr);
 		paddr_next = (paddr & PUD_MASK) + PUD_SIZE;
+
 		if (paddr >= paddr_end) {
 			if (!after_bootmem &&
 			    !e820_any_mapped(paddr & PUD_MASK, paddr_next,
@@ -551,7 +556,7 @@ phys_pud_init(pud_t *pud_page, unsigned long paddr, unsigned long paddr_end,
 
 /*
  * Create page table mapping for the physical memory for specific physical
- * addresses. The virtual and physical addresses have to be aligned on PUD level
+ * addresses. The virtual and physical addresses have to be aligned on PMD level
  * down. It returns the last physical address mapped.
  */
 unsigned long __meminit
-- 
2.7.4
