Date: Sat, 3 Mar 2012 19:27:27 +0800
From: Alex Shi <alex.shi@...el.com>
To: mingo@...hat.com
Cc: tglx@...utronix.de, hpa@...or.com, linux-kernel@...r.kernel.org,
x86@...nel.org, asit.k.mallick@...el.com
Subject: [PATCH] x86: correct internode cache alignment
Currently, internode cache alignment in the kernel is still 128 bytes
on NUMA machines, a value inherited from old P4 processors. But most
modern CPUs use the same cache line size, 64 bytes, from L1 through the
last level L3. So let's remove the outdated setting and directly use
the L1 cache size for SMP cache line alignment.
This patch saves some memory in kernel data. The System.map is
quite different with and without this change:

	before patch			after patch
...
000000000000b000 d tlb_vector_| 000000000000b000 d tlb_vector
000000000000b080 d cpu_loops_p| 000000000000b040 d cpu_loops_
...
Signed-off-by: Alex Shi <alex.shi@...el.com>
---
arch/x86/Kconfig.cpu | 1 -
1 files changed, 0 insertions(+), 1 deletions(-)
diff --git a/arch/x86/Kconfig.cpu b/arch/x86/Kconfig.cpu
index 3c57033..6443c6f 100644
--- a/arch/x86/Kconfig.cpu
+++ b/arch/x86/Kconfig.cpu
@@ -303,7 +303,6 @@ config X86_GENERIC
config X86_INTERNODE_CACHE_SHIFT
int
default "12" if X86_VSMP
- default "7" if NUMA
default X86_L1_CACHE_SHIFT
config X86_CMPXCHG
--
1.6.3.3
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/