Message-ID: <m1k4fmxtyk.fsf@fess.ebiederm.org>
Date:	Fri, 25 Mar 2011 23:43:31 -0700
From:	ebiederm@...ssion.com (Eric W. Biederman)
To:	"H. Peter Anvin" <hpa@...or.com>, Ingo Molnar <mingo@...e.hu>,
	Thomas Gleixner <tglx@...utronix.de>
Cc:	<linux-kernel@...r.kernel.org>, Yinghai Lu <yhlu.kernel@...il.com>
Subject: [PATCH] x86_64: Fix page table building regression


Recently I had cause to enable DEBUG_PAGEALLOC and I discovered my
kdump kernel would not boot.  After some investigation it turned out
that commit 80989ce064 "x86: clean up and and print out initial
max_pfn_mapped" unnecessarily applied a limitation of the 32bit page
table setup to the 64bit code.  The initial 64bit page table setup code
is careful to map its initial page table pages and unmap them when
done, so they can live anywhere in memory; we don't need to limit
ourselves to using pages that are already mapped.
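
For context, the way the 64bit code avoids that restriction is roughly
the following (a simplified, from-memory sketch of the era's
arch/x86/mm/init_64.c helpers; the exact names and bookkeeping
variables such as pgt_buf_end/pgt_buf_top may differ in detail):

/*
 * Sketch: allocate one early page-table page from the range reserved
 * by find_early_table_space().  The page is temporarily mapped with
 * early_memremap(), filled in by the caller, and unmapped again when
 * done, so its physical address does not need to lie inside the
 * currently mapped region.
 */
static __ref void *alloc_low_page(unsigned long *phys)
{
	unsigned long pfn = pgt_buf_end++;	/* next free page in the reserved range */
	void *adr;

	if (pfn >= pgt_buf_top)
		panic("alloc_low_page: ran out of memory");

	adr = early_memremap(pfn * PAGE_SIZE, PAGE_SIZE);	/* temporary mapping */
	clear_page(adr);
	*phys = pfn * PAGE_SIZE;
	return adr;
}

static __ref void unmap_low_page(void *adr)
{
	early_iounmap(adr, PAGE_SIZE);		/* drop the temporary mapping */
}

The 32bit kernel_physical_mapping_init, by contrast, writes its
page-table pages through the existing direct mapping, which is why the
max_pfn_mapped limit is still needed there.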

In my case I hit this because the first 512M was not usable by the
kdump kernel.

Allocating the page tables higher should improve the reliability of
kdump kernels.  As it stands today, with the recommended 128M reserved
for a kdump kernel, the reserved area will frequently sit above 512M,
leaving the kdump kernel able to allocate its page tables only from the
low 1M of RAM.  Strictly speaking that memory is available, but it is
the one piece of memory for which we have no 100% guarantee that there
was no on-going DMA to it before the kdump kernel started.

Not restricting the page tables to the low 512M will also allow kernels
built with DEBUG_PAGEALLOC to boot on systems with 256G of RAM.
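
For the 256G case the numbers are easy to check with a
back-of-the-envelope calculation that mirrors the arithmetic in
find_early_table_space() (a stand-alone user-space sketch, assuming
4-level paging with 4K pages and PSE/gbpages disabled, which is what
DEBUG_PAGEALLOC forces for the direct mapping):

#include <stdio.h>
#include <stdint.h>

#define PAGE_SHIFT	12
#define PAGE_SIZE	(1ULL << PAGE_SHIFT)
#define PMD_SHIFT	21
#define PMD_SIZE	(1ULL << PMD_SHIFT)
#define PUD_SHIFT	30
#define PUD_SIZE	(1ULL << PUD_SHIFT)
#define ENTRY_SIZE	8	/* sizeof(pte_t) on x86_64 */

static uint64_t roundup_page(uint64_t x)
{
	return (x + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1);
}

int main(void)
{
	uint64_t end = 256ULL << 30;		/* map the first 256G */
	uint64_t puds, pmds, ptes, tables;

	puds = (end + PUD_SIZE - 1) >> PUD_SHIFT;
	pmds = (end + PMD_SIZE - 1) >> PMD_SHIFT;	/* no gbpages */
	ptes = (end + PAGE_SIZE - 1) >> PAGE_SHIFT;	/* no PSE */

	tables  = roundup_page(puds * ENTRY_SIZE);
	tables += roundup_page(pmds * ENTRY_SIZE);
	tables += roundup_page(ptes * ENTRY_SIZE);

	printf("early page tables: ~%llu MiB\n",
	       (unsigned long long)(tables >> 20));
	return 0;
}

That comes out to roughly 513 MiB of page tables, which clearly cannot
be carved out of the first 512M.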

Cc: stable@...nel.org
Signed-off-by: Eric W. Biederman <ebiederm@...stanetworks.com>
---
 arch/x86/mm/init.c |    8 +++++---
 1 files changed, 5 insertions(+), 3 deletions(-)

diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index 947f42a..52460a1 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -33,7 +33,7 @@ int direct_gbpages
 static void __init find_early_table_space(unsigned long end, int use_pse,
 					  int use_gbpages)
 {
-	unsigned long puds, pmds, ptes, tables, start;
+	unsigned long puds, pmds, ptes, tables, start, stop;
 	phys_addr_t base;
 
 	puds = (end + PUD_SIZE - 1) >> PUD_SHIFT;
@@ -74,11 +74,13 @@ static void __init find_early_table_space(unsigned long end, int use_pse,
 	 */
 #ifdef CONFIG_X86_32
 	start = 0x7000;
+	/* The 32bit kernel_physical_mapping_init is limited */
+	stop = max_pfn_mapped<<PAGE_SHIFT;
 #else
 	start = 0x8000;
+	stop = end;
 #endif
-	base = memblock_find_in_range(start, max_pfn_mapped<<PAGE_SHIFT,
-					tables, PAGE_SIZE);
+	base = memblock_find_in_range(start, stop, tables, PAGE_SIZE);
 	if (base == MEMBLOCK_ERROR)
 		panic("Cannot find space for the kernel page tables");
 
-- 
1.7.4
