Message-ID: <1362768434-30525-1-git-send-email-david.vrabel@citrix.com>
Date:	Fri, 8 Mar 2013 18:47:14 +0000
From:	David Vrabel <david.vrabel@...rix.com>
To:	<linux-kernel@...r.kernel.org>
CC:	<x86@...nel.org>, "H. Peter Anvin" <hpa@...or.com>,
	Yinghai Lu <yinghai@...nel.org>,
	Ingo Molnar <mingo@...hat.com>,
	Thomas Gleixner <tglx@...utronix.de>,
	David Vrabel <david.vrabel@...rix.com>
Subject: [PATCH] x86,mm: fix init_mem_mapping() when the first memory chunk is small

In init_mem_mapping(), if the first chunk of memory that is mapped is
small, there will not be enough mapped pages to allocate page table
pages for the next (larger) chunk.

Estimate how many pages are used for the mappings so far and how many
are needed for a larger chunk, and only increase step_size if there
are enough free pages.

This fixes a boot failure on a system where the first chunk of mapped
memory contained only 3 pages.

init_memory_mapping: [mem 0x00000000-0x000fffff]
init_memory_mapping: [mem 0x20d000000-0x20d002fff]
init_memory_mapping: [mem 0x20c000000-0x20cffffff]
Kernel panic - not syncing: alloc_low_page: can not alloc memory

Signed-off-by: David Vrabel <david.vrabel@...rix.com>
---
 arch/x86/mm/init.c |   21 +++++++++++++++------
 1 files changed, 15 insertions(+), 6 deletions(-)

diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index 4903a03..0cc7afb 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -389,6 +389,12 @@ static unsigned long __init init_range_memory_mapping(
 	return mapped_ram_size;
 }
 
+/* Estimate of the number of pages needed to map 'size' bytes. */
+static unsigned long __init nr_pages_to_map(unsigned long size)
+{
+	return DIV_ROUND_UP(size, PMD_SIZE) + DIV_ROUND_UP(size, PUD_SIZE);
+}
+
 /* (PUD_SHIFT-PMD_SHIFT)/2 */
 #define STEP_SIZE_SHIFT 5
 void __init init_mem_mapping(void)
@@ -397,7 +403,7 @@ void __init init_mem_mapping(void)
 	unsigned long step_size;
 	unsigned long addr;
 	unsigned long mapped_ram_size = 0;
-	unsigned long new_mapped_ram_size;
+	unsigned long mapped_pages;
 
 	probe_page_size_mask();
 
@@ -427,14 +433,17 @@ void __init init_mem_mapping(void)
 				start = ISA_END_ADDRESS;
 		} else
 			start = ISA_END_ADDRESS;
-		new_mapped_ram_size = init_range_memory_mapping(start,
-							last_start);
+		mapped_ram_size += init_range_memory_mapping(start,
+							     last_start);
+		mapped_pages = mapped_ram_size >> PAGE_SHIFT;
 		last_start = start;
 		min_pfn_mapped = last_start >> PAGE_SHIFT;
-		/* only increase step_size after big range get mapped */
-		if (new_mapped_ram_size > mapped_ram_size)
+
+		/* Only increase step_size if there is enough mapped
+		   ram to map the larger block. */
+		if (nr_pages_to_map(step_size << STEP_SIZE_SHIFT)
+		    < mapped_pages - nr_pages_to_map(mapped_ram_size))
 			step_size <<= STEP_SIZE_SHIFT;
-		mapped_ram_size += new_mapped_ram_size;
 	}
 
 	if (real_end < end)
-- 
1.7.2.5
