Date:	Fri, 8 Jun 2012 22:58:50 +0900
From:	"Kim, Jong-Sung" <neidhard.kim@....com>
To:	"'Minchan Kim'" <minchan@...nel.org>,
	"'Russell King'" <linux@....linux.org.uk>
Cc:	"'Nicolas Pitre'" <nico@...aro.org>,
	"'Catalin Marinas'" <catalin.marinas@....com>,
	<linux-arm-kernel@...ts.infradead.org>,
	<linux-kernel@...r.kernel.org>,
	"'Chanho Min'" <chanho.min@....com>, <linux-mm@...ck.org>
Subject: RE: [PATCH] [RESEND] arm: limit memblock base address for early_pte_alloc

> From: Minchan Kim [mailto:minchan@...nel.org]
> Sent: Tuesday, June 05, 2012 4:12 PM
> 
> If we do arm_memblock_steal with a page which is not aligned to the section
> size, a panic can happen during boot due to a page fault in map_lowmem.
> 
> Detail:
> 
> 1) mdesc->reserve can steal a page which is allocated at 0x1ffff000 by
>    memblock, which prefers the tail pages of regions.
> 2) map_lowmem maps 0x00000000 - 0x1fe00000
> 3) map_lowmem then tries to map from 0x1fe00000, but the remainder is not
>    section-aligned due to 1)
> 4) alloc_init_pte allocates a new page for the new pte via memblock_alloc
> 5) the memory allocated for the pte is 0x1fffe000 -> it is not mapped yet
> 6) memset(ptr, 0, sz) in early_alloc_aligned panics!
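
For reference, here is a tiny user-space sketch of the address arithmetic above
(the PMD constants are illustrative stand-ins for the ARM headers, assuming the
usual 2MB PMD coverage, and the addresses are the ones from the report):

#include <stdio.h>
#include <stdint.h>

/* Illustrative stand-ins only: on ARM a PMD entry covers two 1MB
 * sections, i.e. 2MB; these are not the kernel's real headers. */
#define PMD_SIZE	(2UL * 1024 * 1024)
#define PMD_MASK	(~(PMD_SIZE - 1))

int main(void)
{
	uint32_t region_end  = 0x1ffff000;		/* end of memory after the stolen page */
	uint32_t section_end = region_end & PMD_MASK;	/* 0x1fe00000 */
	uint32_t pte_page    = 0x1fffe000;		/* page handed back by memblock_alloc */

	printf("section-mapped up to  %#x\n", section_end);
	printf("pte page allocated at %#x -> %s\n", pte_page,
	       pte_page >= section_end ? "not mapped yet" : "already mapped");
	return 0;
}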

May I suggest another simple approach? The first contiguous run of section
pairs is always safely section-mapped inside the alloc_init_section function.
So, by limiting memblock_alloc to the end of that section-mapped run at the
start of map_lowmem, map_lowmem can safely memblock_alloc and memset even if we
have one or more section-unaligned memory regions. The limit can be extended
back to arm_lowmem_limit after map_lowmem is done.

diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index e5dad60..edf1e2d 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -1094,6 +1094,11 @@ static void __init kmap_init(void)
 static void __init map_lowmem(void)
 {
 	struct memblock_region *reg;
+	phys_addr_t pmd_map_end;
+
+	pmd_map_end = (memblock.memory.regions[0].base +
+	               memblock.memory.regions[0].size) & PMD_MASK;
+	memblock_set_current_limit(pmd_map_end);
 
 	/* Map all the lowmem memory banks. */
 	for_each_memblock(memory, reg) {
@@ -1113,6 +1118,8 @@ static void __init map_lowmem(void)
 
 		create_mapping(&map);
 	}
+
+	memblock_set_current_limit(arm_lowmem_limit);
 }
 
 /*
@@ -1123,8 +1130,6 @@ void __init paging_init(struct machine_desc *mdesc)
 {
 	void *zero_page;
 
-	memblock_set_current_limit(arm_lowmem_limit);
-
 	build_mem_type_table();
 	prepare_page_table();
 	map_lowmem();
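
To see why this avoids the fault in the scenario above, here is a minimal
user-space sketch; alloc_top_down() below is a hypothetical stand-in for
memblock's top-down allocation, and the addresses are again the ones from the
report:

#include <stdio.h>
#include <stdint.h>

#define PAGE_SZ		0x1000UL

/* Hypothetical stand-in for memblock's top-down allocation: return the
 * highest whole page that still fits below 'limit'. */
static uint32_t alloc_top_down(uint32_t limit)
{
	return (limit - PAGE_SZ) & ~(PAGE_SZ - 1);
}

int main(void)
{
	uint32_t arm_lowmem_limit = 0x1ffff000;	/* old limit: end of usable lowmem */
	uint32_t pmd_map_end      = 0x1fe00000;	/* new limit: end of the section-mapped run */

	/* Old behaviour: the pte page lands at 0x1fffe000, above the
	 * section-mapped range, and the memset faults. */
	printf("limit %#x -> pte page at %#x\n", arm_lowmem_limit,
	       alloc_top_down(arm_lowmem_limit));

	/* With the limit clamped to pmd_map_end, the allocation is forced
	 * below 0x1fe00000, which map_lowmem has already section-mapped. */
	printf("limit %#x -> pte page at %#x\n", pmd_map_end,
	       alloc_top_down(pmd_map_end));
	return 0;
}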

