Message-Id: <20120813201749.162047394@linuxfoundation.org>
Date: Mon, 13 Aug 2012 13:19:07 -0700
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org, stable@...r.kernel.org
Cc: Greg KH <gregkh@...uxfoundation.org>,
torvalds@...ux-foundation.org, akpm@...ux-foundation.org,
alan@...rguk.ukuu.org.uk, Xishi Qiu <qiuxishi@...wei.com>,
Jiang Liu <liuj97@...il.com>, Tony Luck <tony.luck@...el.com>,
Yinghai Lu <yinghai@...nel.org>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
Benjamin Herrenschmidt <benh@...nel.crashing.org>,
KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
David Rientjes <rientjes@...gle.com>,
Keping Chen <chenkeping@...wei.com>
Subject: [ 31/82] mm: setup pageblock_order before it's used by sparsemem
From: Greg KH <gregkh@...uxfoundation.org>
3.5-stable review patch. If anyone has any objections, please let me know.
------------------
From: Xishi Qiu <qiuxishi@...wei.com>
commit ca57df79d4f64e1a4886606af4289d40636189c5 upstream.
On architectures with CONFIG_HUGETLB_PAGE_SIZE_VARIABLE set, such as
Itanium, pageblock_order is a variable with a default value of 0.  It is
set to the right value by set_pageblock_order() in free_area_init_core().

But pageblock_order may be used by sparse_init() before
free_area_init_core() is called, along this path:
  sparse_init()
    ->sparse_early_usemaps_alloc_node()
      ->usemap_size()
        ->SECTION_BLOCKFLAGS_BITS
          ->((1UL << (PFN_SECTION_SHIFT - pageblock_order)) *
             NR_PAGEBLOCK_BITS)

The uninitialized pageblock_order causes memory to be wasted because
usemap_size() returns a much bigger value than is really needed.

For example, on an Itanium platform:
  sparse_init()                pageblock_order=0   usemap_size=24576
  free_area_init_core() before pageblock_order=0,  usemap_size=24576
  free_area_init_core() after  pageblock_order=12, usemap_size=8

That means 24K of memory has been wasted for each section, so fix it by
calling set_pageblock_order() from sparse_init().
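
To make the arithmetic concrete, here is a minimal stand-alone sketch of
the size calculation. It is not the kernel code itself: PFN_SECTION_SHIFT
= 16 and NR_PAGEBLOCK_BITS = 4 are assumed values for illustration, and
the real constants depend on the kernel configuration.

#include <stdio.h>

/* Assumed values for illustration only; the real constants come from the
 * kernel configuration (include/linux/mmzone.h and
 * include/linux/pageblock-flags.h). */
#define NR_PAGEBLOCK_BITS 4	/* flag bits stored per pageblock */
#define PFN_SECTION_SHIFT 16	/* pages per section = 1 << 16 here */

/* Models SECTION_BLOCKFLAGS_BITS: flag bits needed to cover one section. */
static unsigned long section_blockflags_bits(unsigned int order)
{
	return (1UL << (PFN_SECTION_SHIFT - order)) * NR_PAGEBLOCK_BITS;
}

/* Rough model of usemap_size(): bits rounded up to whole longs, in bytes. */
static unsigned long usemap_size(unsigned int order)
{
	unsigned long bytes = (section_blockflags_bits(order) + 7) / 8;

	return (bytes + sizeof(long) - 1) / sizeof(long) * sizeof(long);
}

int main(void)
{
	printf("pageblock_order=0:  %lu bytes per section\n", usemap_size(0));
	printf("pageblock_order=12: %lu bytes per section\n", usemap_size(12));
	return 0;
}

With these assumed constants the sketch prints 32768 bytes for
pageblock_order=0 and 8 bytes for pageblock_order=12; the exact order-0
figure differs from the 24576 above because it depends on the configured
page and section sizes, but the scale of the waste is the same.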
Signed-off-by: Xishi Qiu <qiuxishi@...wei.com>
Signed-off-by: Jiang Liu <liuj97@...il.com>
Cc: Tony Luck <tony.luck@...el.com>
Cc: Yinghai Lu <yinghai@...nel.org>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
Cc: Benjamin Herrenschmidt <benh@...nel.crashing.org>
Cc: KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>
Cc: David Rientjes <rientjes@...gle.com>
Cc: Keping Chen <chenkeping@...wei.com>
Signed-off-by: Andrew Morton <akpm@...ux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@...ux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
mm/internal.h | 2 ++
mm/page_alloc.c | 4 ++--
mm/sparse.c | 3 +++
3 files changed, 7 insertions(+), 2 deletions(-)
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -347,3 +347,5 @@ extern u32 hwpoison_filter_enable;
 extern unsigned long vm_mmap_pgoff(struct file *, unsigned long,
         unsigned long, unsigned long,
         unsigned long, unsigned long);
+
+extern void set_pageblock_order(void);
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4301,7 +4301,7 @@ static inline void setup_usemap(struct p
 #ifdef CONFIG_HUGETLB_PAGE_SIZE_VARIABLE
 
 /* Initialise the number of pages represented by NR_PAGEBLOCK_BITS */
-static inline void __init set_pageblock_order(void)
+void __init set_pageblock_order(void)
 {
 	unsigned int order;
 
@@ -4329,7 +4329,7 @@ static inline void __init set_pageblock_
  * include/linux/pageblock-flags.h for the values of pageblock_order based on
  * the kernel config
  */
-static inline void set_pageblock_order(void)
+void __init set_pageblock_order(void)
 {
 }
 
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -493,6 +493,9 @@ void __init sparse_init(void)
 	struct page **map_map;
 #endif
 
+	/* Setup pageblock_order for HUGETLB_PAGE_SIZE_VARIABLE */
+	set_pageblock_order();
+
 	/*
 	 * map is using big page (aka 2M in x86 64 bit)
 	 * usemap is less one page (aka 24 bytes)
--