Date:   Wed,  8 May 2019 16:04:17 +0800
From:   Baoquan He <bhe@...hat.com>
To:     linux-kernel@...r.kernel.org
Cc:     x86@...nel.org, tglx@...utronix.de, mingo@...nel.org, bp@...en8.de,
        hpa@...or.com, kirill.shutemov@...ux.intel.com,
        keescook@...omium.org, Baoquan He <bhe@...hat.com>
Subject: [PATCH v4] x86/mm/KASLR: Fix the size of vmemmap section

kernel_randomize_memory() hardcodes the size of the vmemmap section to
1 TB, enough to cover the maximum amount of system RAM supported in
4-level paging mode, 64 TB.

However, 1 TB is not enough for vmemmap in 5-level paging mode. Assuming
struct page is 64 bytes, supporting 4 PB of system RAM in 5-level paging
mode requires 64 TB of vmemmap area. The incorrect hardcoding may cause
vmemmap to overlap the following cpu_entry_area section if KASLR places
vmemmap very close to cpu_entry_area and the actual vmemmap area is much
bigger than 1 TB.
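
For illustration, the arithmetic behind these numbers (a
back-of-the-envelope sketch assuming 4 KiB pages and
sizeof(struct page) == 64):

  vmemmap size = (RAM / PAGE_SIZE) * sizeof(struct page)

  4-level: (64 TB / 4 KiB) * 64 B = 2^34 pages * 64 B =  1 TB
  5-level: ( 4 PB / 4 KiB) * 64 B = 2^40 pages * 64 B = 64 TB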

So, calculate the actual size the vmemmap region needs, then align it
up to a 1 TB boundary. In 4-level paging mode it is always 1 TB; in
5-level paging mode it is adjusted on demand. The current code reserves
0.5 PB for vmemmap in 5-level paging mode, so with this change the
leftover space can be fed back into the randomization pool to increase
the entropy.
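
To make the new sizing concrete, here is a standalone sketch of the same
calculation (plain C; PAGE_SHIFT, TB_SHIFT and DIV_ROUND_UP mirror the
kernel definitions, and the example values, 64 TB of RAM and a 64-byte
struct page, are assumptions for illustration only):

#include <stdio.h>

#define PAGE_SHIFT 12                      /* 4 KiB pages */
#define TB_SHIFT   40                      /* 1 TB == 1UL << 40 */
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

int main(void)
{
	unsigned long size_tb = 64;        /* direct-mapped RAM, in TB */
	unsigned long page_sz = 64;        /* assumed sizeof(struct page) */
	unsigned long vmemmap_size, vmemmap_tb;

	/* pages covered, times bytes of struct page per page */
	vmemmap_size = (size_tb << (TB_SHIFT - PAGE_SHIFT)) * page_sz;
	/* round up to a whole number of 1 TB slots */
	vmemmap_tb = DIV_ROUND_UP(vmemmap_size, 1UL << TB_SHIFT);

	printf("%lu TB of RAM -> %lu TB of vmemmap\n", size_tb, vmemmap_tb);
	return 0;
}

With size_tb = 64 this prints 1 TB, matching the 4-level case; with
size_tb = 4096 (4 PB) it prints 64 TB, matching the 5-level worst case
above.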

Signed-off-by: Baoquan He <bhe@...hat.com>
Acked-by: Kirill A. Shutemov <kirill@...ux.intel.com>
Reviewed-by: Kees Cook <keescook@...omium.org>
---
v3->v4:
  Fix the incorrect style of the code comment;
  Add ack tags from Kirill and Kees.
v3 discussion is here:
  http://lkml.kernel.org/r/20190422091045.GB3584@localhost.localdomain

 arch/x86/mm/kaslr.c | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/arch/x86/mm/kaslr.c b/arch/x86/mm/kaslr.c
index dc3f058bdf9b..c0eedb85a92f 100644
--- a/arch/x86/mm/kaslr.c
+++ b/arch/x86/mm/kaslr.c
@@ -52,7 +52,7 @@ static __initdata struct kaslr_memory_region {
 } kaslr_regions[] = {
 	{ &page_offset_base, 0 },
 	{ &vmalloc_base, 0 },
-	{ &vmemmap_base, 1 },
+	{ &vmemmap_base, 0 },
 };
 
 /* Get size in bytes used by the memory region */
@@ -78,6 +78,7 @@ void __init kernel_randomize_memory(void)
 	unsigned long rand, memory_tb;
 	struct rnd_state rand_state;
 	unsigned long remain_entropy;
+	unsigned long vmemmap_size;
 
 	vaddr_start = pgtable_l5_enabled() ? __PAGE_OFFSET_BASE_L5 : __PAGE_OFFSET_BASE_L4;
 	vaddr = vaddr_start;
@@ -109,6 +110,14 @@ void __init kernel_randomize_memory(void)
 	if (memory_tb < kaslr_regions[0].size_tb)
 		kaslr_regions[0].size_tb = memory_tb;
 
+	/*
+	 * Calculate the size of the vmemmap region needed, aligned up
+	 * to a 1 TB boundary.
+	 */
+	vmemmap_size = (kaslr_regions[0].size_tb << (TB_SHIFT - PAGE_SHIFT)) *
+		sizeof(struct page);
+	kaslr_regions[2].size_tb = DIV_ROUND_UP(vmemmap_size, 1UL << TB_SHIFT);
+
 	/* Calculate entropy available between regions */
 	remain_entropy = vaddr_end - vaddr_start;
 	for (i = 0; i < ARRAY_SIZE(kaslr_regions); i++)
-- 
2.17.2
