Message-ID: <4A54CE4B.9080403@kernel.org>
Date: Wed, 08 Jul 2009 09:50:19 -0700
From: Yinghai Lu <yinghai@...nel.org>
To: Ingo Molnar <mingo@...e.hu>, Thomas Gleixner <tglx@...utronix.de>,
"H. Peter Anvin" <hpa@...or.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Linus Torvalds <torvalds@...ux-foundation.org>
CC: Christoph Lameter <cl@...ux-foundation.org>, alex.shi@...el.com,
Mel Gorman <mel@....ul.ie>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
yanmin.zhang@...el.com, tim.c.chen@...el.com
Subject: [PATCH] x86: don't clear node_states[N_NORMAL_MEMORY] when NUMA
	is not compiled in -v2
Alex found that on x86_64 machines specjbb2005 still cannot run with
hugepages, and that this only happens when NUMA is not compiled in.

The root cause: paging_init() clears node_states[N_NORMAL_MEMORY] with
nodes_clear(), and when NUMA support is not compiled in, node_set_state()
is an empty stub that will never set the bit back. So don't clear that
state when NUMA is not selected in the config; use node_clear_state(),
which is likewise a no-op in that case.

v2: use node_clear_state() instead.
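For reference, this is roughly what the relevant helpers in
include/linux/nodemask.h look like in the !NUMA case (MAX_NUMNODES == 1)
at the time of this patch; a trimmed sketch, not a verbatim copy:

	/*
	 * nodes_clear() is a plain bitmap operation, available in all
	 * configs, so it really zeroes node_states[N_NORMAL_MEMORY] and
	 * wipes out the static "node 0 has normal memory" default.
	 */
	#define nodes_clear(dst) __nodes_clear(&(dst), MAX_NUMNODES)

	static inline void __nodes_clear(nodemask_t *dstp, int nbits)
	{
		bitmap_zero(dstp->bits, nbits);
	}

	/*
	 * With MAX_NUMNODES == 1 the node state helpers collapse to
	 * no-op stubs: node_set_state() can never set the bit back,
	 * and node_clear_state() harmlessly does nothing, which is
	 * exactly the behavior the fix relies on.
	 */
	static inline void node_set_state(int node, enum node_states state)
	{
	}

	static inline void node_clear_state(int node, enum node_states state)
	{
	}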
Reported-and-Tested-by: Alex Shi <alex.shi@...el.com>
Signed-off-by: Yinghai Lu <yinghai@...nel.org>
---
arch/x86/mm/init_64.c | 11 +++++++++--
1 file changed, 9 insertions(+), 2 deletions(-)
Index: linux-2.6/arch/x86/mm/init_64.c
===================================================================
--- linux-2.6.orig/arch/x86/mm/init_64.c
+++ linux-2.6/arch/x86/mm/init_64.c
@@ -598,8 +598,15 @@ void __init paging_init(void)
 	sparse_memory_present_with_active_regions(MAX_NUMNODES);
 	sparse_init();
 
-	/* clear the default setting with node 0 */
-	nodes_clear(node_states[N_NORMAL_MEMORY]);
+
+	/*
+	 * Clear the default setting with node 0.
+	 * Note: don't use nodes_clear() here; that really clears the
+	 * nodemask even when NUMA support is not compiled in, and
+	 * node_set_state() will not set the bit back later.
+	 */
+	node_clear_state(0, N_NORMAL_MEMORY);
+
 	free_area_init_nodes(max_zone_pfns);
 }