Message-Id: <1390590670-25901-3-git-send-email-yinghai@kernel.org>
Date: Fri, 24 Jan 2014 11:11:09 -0800
From: Yinghai Lu <yinghai@...nel.org>
To: Andrew Morton <akpm@...ux-foundation.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Ingo Molnar <mingo@...e.hu>, "H. Peter Anvin" <hpa@...or.com>
Cc: Dave Hansen <dave.hansen@...el.com>,
Santosh Shilimkar <santosh.shilimkar@...com>,
linux-kernel@...r.kernel.org, Yinghai Lu <yinghai@...nel.org>
Subject: [PATCH 3/3] memblock: Don't silently align size in memblock_virt_alloc()
In the original __alloc_memory_core_early() bootmem wrapper we did not
silently align the size.

We should not do that here either: if the caller later frees with its
original size, the rounded-up tail of the range is left reserved.

The code was obviously copied from memblock_alloc_base_nid(), which is
wrong for the same reason.

Also remove the silent alignment from memblock_alloc_base_nid().
Signed-off-by: Yinghai Lu <yinghai@...nel.org>
---
mm/memblock.c | 6 ------
1 file changed, 6 deletions(-)
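
For reference, here is a minimal userspace sketch (not kernel code; all
names below are made up for illustration) of why the silent round_up()
is harmful: the allocator reserves the aligned size, but the caller
later frees with the size it originally asked for, so the rounded-up
tail is never released.

#include <stdio.h>

/* round x up to a power-of-two alignment, as with SMP_CACHE_BYTES */
#define ALIGN_UP(x, a)	(((x) + (a) - 1) & ~((unsigned long)((a) - 1)))

static unsigned long reserved_bytes;	/* toy stand-in for the reserved array */

/* old behaviour: silently align @size before reserving */
static unsigned long toy_alloc_old(unsigned long size, unsigned long align)
{
	size = ALIGN_UP(size, align);
	reserved_bytes += size;
	return size;			/* bytes actually reserved */
}

/* the caller frees with the size it asked for, not the rounded-up one */
static void toy_free(unsigned long size)
{
	reserved_bytes -= size;
}

int main(void)
{
	unsigned long asked = 100;			/* caller wants 100 bytes */
	unsigned long got = toy_alloc_old(asked, 64);	/* silently reserves 128 */

	toy_free(asked);				/* frees only 100 */

	printf("reserved %lu, freed %lu, still reserved %lu bytes\n",
	       got, asked, reserved_bytes);
	return 0;
}

With the patch applied the allocator reserves exactly the requested
size, so a matching free releases the whole range (callers that need an
aligned size must round it up themselves).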
Index: linux-2.6/mm/memblock.c
===================================================================
--- linux-2.6.orig/mm/memblock.c
+++ linux-2.6/mm/memblock.c
@@ -981,9 +981,6 @@ static phys_addr_t __init memblock_alloc
 	if (!align)
 		align = SMP_CACHE_BYTES;
 
-	/* align @size to avoid excessive fragmentation on reserved array */
-	size = round_up(size, align);
-
 	found = memblock_find_in_range_node(size, align, 0, max_addr, nid);
 	if (found && !memblock_reserve(found, size))
 		return found;
@@ -1077,9 +1074,6 @@ static void * __init memblock_virt_alloc
 	if (!align)
 		align = SMP_CACHE_BYTES;
 
-	/* align @size to avoid excessive fragmentation on reserved array */
-	size = round_up(size, align);
-
 again:
 	alloc = memblock_find_in_range_node(size, align, min_addr, max_addr,
 					    nid);
--