Message-Id: <1243838091-28591-6-git-send-email-tj@kernel.org>
Date: Mon, 1 Jun 2009 15:34:51 +0900
From: Tejun Heo <tj@...nel.org>
To: JBeulich@...ell.com, andi@...stfloor.org, mingo@...e.hu,
hpa@...or.com, tglx@...utronix.de, linux-kernel@...r.kernel.org,
x86@...nel.org
Cc: Tejun Heo <tj@...nel.org>
Subject: [PATCH 5/5] x86: ensure percpu remap doesn't consume too much vmalloc space
On extreme configurations (e.g. a 32-bit, 32-way NUMA machine), the
remap percpu first chunk allocator can consume too much of the
vmalloc space.  Make it fall back to the 4k allocator if the
consumption would go over 20% of the vmalloc area (a worked example
follows the diffstat below).
[ Impact: add sanity check for remap percpu first chunk allocator ]
Signed-off-by: Tejun Heo <tj@...nel.org>
Reported-by: Jan Beulich <JBeulich@...ell.com>
Cc: Andi Kleen <andi@...stfloor.org>
Cc: Ingo Molnar <mingo@...e.hu>
---
arch/x86/kernel/setup_percpu.c | 18 +++++++++++++++---
1 files changed, 15 insertions(+), 3 deletions(-)
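
For illustration, here is the arithmetic behind the 20% cutoff as a
minimal standalone sketch.  It is not kernel code; the 2MB PMD_SIZE,
~128MB vmalloc area and 32-CPU count are assumed values for a 32-bit
PAE configuration, not taken from the patch itself.

	#include <stdio.h>

	int main(void)
	{
		unsigned long pmd_size = 2UL << 20;   /* 2MB large page (PAE) */
		unsigned long vm_size  = 128UL << 20; /* ~128MB vmalloc area  */
		unsigned long cpus     = 32;          /* 32-way NUMA machine  */
		unsigned long tot_size = cpus * pmd_size;  /* 64MB of remap    */

		/* 64MB is well over 128MB / 5 (~25MB), so the check fires
		 * and setup_pcpu_remap() would return -EINVAL */
		if (tot_size > vm_size / 5)
			printf("PERCPU: too large chunk size %luMB for "
			       "large page remap\n", tot_size >> 20);
		return 0;
	}

With these numbers the remap allocator would claim half of the
vmalloc area, so the new check rejects it and the percpu setup falls
back to the 4k first chunk allocator.
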
diff --git a/arch/x86/kernel/setup_percpu.c b/arch/x86/kernel/setup_percpu.c
index 29be178..1f28574 100644
--- a/arch/x86/kernel/setup_percpu.c
+++ b/arch/x86/kernel/setup_percpu.c
@@ -169,9 +169,21 @@ static ssize_t __init setup_pcpu_remap(size_t static_size, bool chosen)
return -EINVAL;
	}

- /* on non-NUMA, embedding is better */
- if (!chosen && !pcpu_need_numa())
- return -EINVAL;
+ if (!chosen) {
+ size_t vm_size = VMALLOC_END - VMALLOC_START;
+ size_t tot_size = num_possible_cpus() * PMD_SIZE;
+
+ /* on non-NUMA, embedding is better */
+ if (!pcpu_need_numa())
+ return -EINVAL;
+
+ /* don't consume more than 20% of vmalloc area */
+ if (tot_size > vm_size / 5) {
+ pr_info("PERCPU: too large chunk size %zuMB for "
+ "large page remap\n", tot_size >> 20);
+ return -EINVAL;
+ }
+	}

/*
* Currently supports only single page. Supporting multiple
--
1.6.0.2