Message-ID: <tip-fb754f958f8e46202c1efd7f66d5b3db1208117d@git.kernel.org>
Date:	Wed, 10 Aug 2016 11:09:09 -0700
From:	tip-bot for Thomas Garnier <tipbot@...or.com>
To:	linux-tip-commits@...r.kernel.org
Cc:	bp@...e.de, dan.j.williams@...el.com, bhe@...hat.com,
	brgerst@...il.com, akpm@...ux-foundation.org, fabf@...net.be,
	jroedel@...e.de, toshi.kani@...com, mingo@...nel.org,
	peterz@...radead.org, rafael.j.wysocki@...el.com,
	borntraeger@...ibm.com, linux-kernel@...r.kernel.org,
	thgarnie@...gle.com, luto@...nel.org, tglx@...utronix.de,
	dvlasenk@...hat.com, msalter@...hat.com, dyoung@...hat.com,
	dave.hansen@...ux.intel.com, bp@...en8.de,
	aleksey.makarov@...aro.org, keescook@...omium.org,
	lv.zheng@...el.com, hpa@...or.com, torvalds@...ux-foundation.org,
	jpoimboe@...hat.com
Subject: [tip:x86/mm] x86/mm/KASLR: Increase BRK pages for KASLR memory
 randomization

Commit-ID:  fb754f958f8e46202c1efd7f66d5b3db1208117d
Gitweb:     http://git.kernel.org/tip/fb754f958f8e46202c1efd7f66d5b3db1208117d
Author:     Thomas Garnier <thgarnie@...gle.com>
AuthorDate: Tue, 9 Aug 2016 10:11:05 -0700
Committer:  Ingo Molnar <mingo@...nel.org>
CommitDate: Wed, 10 Aug 2016 14:45:19 +0200

x86/mm/KASLR: Increase BRK pages for KASLR memory randomization

The default implementation expects that at most 6 pages are needed for low
page allocations. If KASLR memory randomization is enabled, the worst-case
e820 layout can require 12 pages (with no large pages), due to the PUD-level
randomization and the variable e820 memory layout.

This bug was found while doing extensive testing of KASLR memory
randomization on different types of hardware.

Signed-off-by: Thomas Garnier <thgarnie@...gle.com>
Cc: Aleksey Makarov <aleksey.makarov@...aro.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>
Cc: Andy Lutomirski <luto@...nel.org>
Cc: Baoquan He <bhe@...hat.com>
Cc: Borislav Petkov <bp@...en8.de>
Cc: Borislav Petkov <bp@...e.de>
Cc: Brian Gerst <brgerst@...il.com>
Cc: Christian Borntraeger <borntraeger@...ibm.com>
Cc: Dan Williams <dan.j.williams@...el.com>
Cc: Dave Hansen <dave.hansen@...ux.intel.com>
Cc: Dave Young <dyoung@...hat.com>
Cc: Denys Vlasenko <dvlasenk@...hat.com>
Cc: Fabian Frederick <fabf@...net.be>
Cc: H. Peter Anvin <hpa@...or.com>
Cc: Joerg Roedel <jroedel@...e.de>
Cc: Josh Poimboeuf <jpoimboe@...hat.com>
Cc: Kees Cook <keescook@...omium.org>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Lv Zheng <lv.zheng@...el.com>
Cc: Mark Salter <msalter@...hat.com>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Rafael J. Wysocki <rafael.j.wysocki@...el.com>
Cc: Thomas Gleixner <tglx@...utronix.de>
Cc: Toshi Kani <toshi.kani@...com>
Cc: kernel-hardening@...ts.openwall.com
Fixes: 021182e52fe0 ("Enable KASLR for physical mapping memory regions")
Link: http://lkml.kernel.org/r/1470762665-88032-2-git-send-email-thgarnie@google.com
Signed-off-by: Ingo Molnar <mingo@...nel.org>
---
 arch/x86/mm/init.c | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index 6209289..d28a2d7 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -122,8 +122,18 @@ __ref void *alloc_low_pages(unsigned int num)
 	return __va(pfn << PAGE_SHIFT);
 }
 
-/* need 3 4k for initial PMD_SIZE,  3 4k for 0-ISA_END_ADDRESS */
-#define INIT_PGT_BUF_SIZE	(6 * PAGE_SIZE)
+/*
+ * By default we need 3 4k pages for the initial PMD_SIZE mapping and
+ * 3 4k pages for 0-ISA_END_ADDRESS. With KASLR memory randomization
+ * enabled, we may need twice as many pages, depending on the machine's
+ * e820 memory map and the PUD alignment.
+ */
+#ifndef CONFIG_RANDOMIZE_MEMORY
+#define INIT_PGD_PAGE_COUNT      6
+#else
+#define INIT_PGD_PAGE_COUNT      12
+#endif
+#define INIT_PGT_BUF_SIZE	(INIT_PGD_PAGE_COUNT * PAGE_SIZE)
 RESERVE_BRK(early_pgt_alloc, INIT_PGT_BUF_SIZE);
 void  __init early_alloc_pgt_buf(void)
 {

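As a quick sanity check of the sizing arithmetic described in the changelog
and in the new comment above, here is an editorial stand-alone user-space
sketch (not part of the patch). It assumes the usual x86 4 KiB PAGE_SIZE and
mirrors the two INIT_PGD_PAGE_COUNT values from the hunk; the SKETCH_* names
are made up for illustration only.

/*
 * Editorial sketch: spells out the BRK buffer sizing with and without
 * CONFIG_RANDOMIZE_MEMORY. Assumes a 4 KiB x86 page; page counts taken
 * from the INIT_PGD_PAGE_COUNT definitions in the patch above.
 */
#include <stdio.h>

#define SKETCH_PAGE_SIZE        4096  /* assumed x86 4 KiB page */
#define SKETCH_PAGES_DEFAULT       6  /* 3 for initial PMD_SIZE + 3 for 0-ISA_END_ADDRESS */
#define SKETCH_PAGES_RANDOMIZED   12  /* worst-case e820/PUD split roughly doubles the need */

int main(void)
{
	printf("INIT_PGT_BUF_SIZE without CONFIG_RANDOMIZE_MEMORY: %d bytes\n",
	       SKETCH_PAGES_DEFAULT * SKETCH_PAGE_SIZE);
	printf("INIT_PGT_BUF_SIZE with CONFIG_RANDOMIZE_MEMORY:    %d bytes\n",
	       SKETCH_PAGES_RANDOMIZED * SKETCH_PAGE_SIZE);
	return 0;
}

Compiled and run, this prints 24576 and 49152 bytes, i.e. the BRK reservation
grows from 6 to 12 early page-table pages when memory randomization is on.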