Message-Id: <1245210739-25699-8-git-send-email-tj@kernel.org>
Date: Wed, 17 Jun 2009 12:52:19 +0900
From: Tejun Heo <tj@...nel.org>
To: linux-kernel@...r.kernel.org, x86@...nel.org,
linux-arch@...r.kernel.org, mingo@...e.hu, JBeulich@...ell.com,
andi@...stfloor.org, hpa@...or.com, tglx@...utronix.de
Cc: Tejun Heo <tj@...nel.org>
Subject: [PATCH 7/7] x86: ensure percpu lpage doesn't consume too much vmalloc space
On extreme configurations (e.g. a 32bit 32-way NUMA machine), the lpage
percpu first chunk allocator can consume too much of the vmalloc space.
Make it fall back to the 4k allocator if the consumption goes over 20%.
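(Not part of the patch, just a userspace sketch of the arithmetic behind
the 20% check; the ~120MB vmalloc area, 32 possible CPUs and 2MB PMD unit
size below are assumed example numbers, not values taken from this patch.)

  #include <stdio.h>

  int main(void)
  {
  	/* assumed example values: ~120MB vmalloc area (typical 32bit
  	 * default split), 32 possible CPUs, one 2MB PMD-sized unit
  	 * reserved per CPU by the lpage allocator */
  	unsigned long vm_size  = 120UL << 20;
  	unsigned long pmd_size = 2UL << 20;
  	unsigned long cpus     = 32;
  	unsigned long tot_size = cpus * pmd_size;	/* 64MB */

  	if (tot_size > vm_size / 5)			/* 64MB > 24MB */
  		printf("lpage too large, fall back to 4k allocator\n");
  	else
  		printf("lpage ok: %luMB of %luMB vmalloc\n",
  		       tot_size >> 20, vm_size >> 20);
  	return 0;
  }

With these example numbers the lpage allocator would be rejected, which is
exactly the fallback the hunk below implements.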
[ Impact: add sanity check for lpage percpu first chunk allocator ]
Signed-off-by: Tejun Heo <tj@...nel.org>
Reported-by: Jan Beulich <JBeulich@...ell.com>
Cc: Andi Kleen <andi@...stfloor.org>
Cc: Ingo Molnar <mingo@...e.hu>
---
arch/x86/kernel/setup_percpu.c | 18 +++++++++++++++---
1 files changed, 15 insertions(+), 3 deletions(-)
diff --git a/arch/x86/kernel/setup_percpu.c b/arch/x86/kernel/setup_percpu.c
index 165ebd5..29a3eef 100644
--- a/arch/x86/kernel/setup_percpu.c
+++ b/arch/x86/kernel/setup_percpu.c
@@ -163,9 +163,21 @@ static ssize_t __init setup_pcpu_lpage(size_t static_size, bool chosen)
int i, j;
ssize_t ret;
- /* on non-NUMA, embedding is better */
- if (!chosen && !pcpu_need_numa())
- return -EINVAL;
+ if (!chosen) {
+ size_t vm_size = VMALLOC_END - VMALLOC_START;
+ size_t tot_size = num_possible_cpus() * PMD_SIZE;
+
+ /* on non-NUMA, embedding is better */
+ if (!pcpu_need_numa())
+ return -EINVAL;
+
+ /* don't consume more than 20% of vmalloc area */
+ if (tot_size > vm_size / 5) {
+ pr_info("PERCPU: too large chunk size %zuMB for "
+ "large page remap\n", tot_size >> 20);
+ return -EINVAL;
+ }
+ }
/* need PSE */
if (!cpu_has_pse) {
--
1.6.0.2