Message-Id: <1425518654-3403-3-git-send-email-mcgrof@do-not-panic.com>
Date: Wed, 4 Mar 2015 17:24:12 -0800
From: "Luis R. Rodriguez" <mcgrof@...not-panic.com>
To: gregkh@...uxfoundation.org, akpm@...ux-foundation.org,
tony@...mide.com, tglx@...utronix.de, mingo@...hat.com,
hpa@...or.com, jgross@...e.com, luto@...capital.net,
toshi.kani@...com, dave.hansen@...ux.intel.com, JBeulich@...e.com,
pavel@....cz, qiuxishi@...wei.com, david.vrabel@...rix.com,
bp@...e.de, vbabka@...e.cz, iamjoonsoo.kim@....com,
decui@...rosoft.com
Cc: linux-kernel@...r.kernel.org, x86@...nel.org, julia.lawall@...6.fr,
"Luis R. Rodriguez" <mcgrof@...e.com>
Subject: [RFC v1 2/4] x86: mm: simplify enabling direct_gbpages
From: "Luis R. Rodriguez" <mcgrof@...e.com>
direct_gbpages can be force enabled as an early parameter yet have
no effect when DEBUG_PAGEALLOC or KMEMCHECK is enabled. Likewise,
direct_gbpages can currently be enabled on any x86_64 build even
when the CPU does not support the feature. In both cases PG_LEVEL_1G
is never actually set, yet direct_gbpages is consulted elsewhere
under the assumption that PG_LEVEL_1G was set. Fix this by
collecting all the requirements that make the feature sensible to
enable into one place, and only flipping on PG_LEVEL_1G and leaving
direct_gbpages set when all of them hold.
The feature is now only possible on sensible builds, as defined by
the new ENABLE_DIRECT_GBPAGES. If the CPU supports it, 1GB pages
can be enabled either through the DIRECT_GBPAGES option or with the
gbpages early kernel parameter. On platforms with support, the
feature can likewise always be force disabled with nogbpages.
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
Cc: Tony Lindgren <tony@...mide.com>
Cc: linux-kernel@...r.kernel.org
Cc: Thomas Gleixner <tglx@...utronix.de>
Cc: Ingo Molnar <mingo@...hat.com>
Cc: "H. Peter Anvin" <hpa@...or.com>
Cc: x86@...nel.org
Cc: Juergen Gross <jgross@...e.com>
Cc: Andy Lutomirski <luto@...capital.net>
Cc: Toshi Kani <toshi.kani@...com>
Cc: Dave Hansen <dave.hansen@...ux.intel.com>
Cc: Jan Beulich <JBeulich@...e.com>
Cc: Pavel Machek <pavel@....cz>
Cc: Xishi Qiu <qiuxishi@...wei.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>
Cc: David Vrabel <david.vrabel@...rix.com>
Cc: Borislav Petkov <bp@...e.de>
Cc: Vlastimil Babka <vbabka@...e.cz>
Cc: Joonsoo Kim <iamjoonsoo.kim@....com>
Cc: Dexuan Cui <decui@...rosoft.com>
Signed-off-by: Luis R. Rodriguez <mcgrof@...e.com>
---
arch/x86/Kconfig | 18 +++++++++++++-----
arch/x86/mm/init.c | 17 +++++++++--------
arch/x86/mm/pageattr.c | 2 --
3 files changed, 22 insertions(+), 15 deletions(-)
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index fb8e8cd..f3fd260 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -1300,14 +1300,22 @@ config ARCH_DMA_ADDR_T_64BIT
def_bool y
depends on X86_64 || HIGHMEM64G
+config ENABLE_DIRECT_GBPAGES
+ def_bool y
+ depends on X86_64 && !DEBUG_PAGEALLOC && !KMEMCHECK
+
config DIRECT_GBPAGES
bool "Enable 1GB pages for kernel pagetables" if EXPERT
default y
- depends on X86_64
- ---help---
- Allow the kernel linear mapping to use 1GB pages on CPUs that
- support it. This can improve the kernel's performance a tiny bit by
- reducing TLB pressure. If in doubt, say "Y".
+ depends on ENABLE_DIRECT_GBPAGES
+ ---help---
+	  Enable the kernel linear mapping to use 1GB pages on CPUs that
+	  support it. This can improve the kernel's performance a tiny bit
+	  by reducing TLB pressure. If in doubt, say "Y". If you disable
+	  this option but your platform is capable of using 1GB pages you
+	  can still enable them at boot with the gbpages kernel parameter.
+	  Likewise, if this option is enabled you can force disable the
+	  feature at boot with the nogbpages kernel parameter.
# Common NUMA Features
config NUMA
diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index b880d06..8d375ba 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -131,16 +131,21 @@ void __init early_alloc_pgt_buf(void)
int after_bootmem;
+static int page_size_mask;
+
int direct_gbpages = IS_ENABLED(CONFIG_DIRECT_GBPAGES);
static void __init init_gbpages(void)
{
-#ifdef CONFIG_X86_64
- if (direct_gbpages && cpu_has_gbpages)
+ if (!IS_ENABLED(CONFIG_ENABLE_DIRECT_GBPAGES)) {
+ direct_gbpages = 0;
+ return;
+ }
+ if (direct_gbpages && cpu_has_gbpages) {
printk(KERN_INFO "Using GB pages for direct mapping\n");
- else
+ page_size_mask |= 1 << PG_LEVEL_1G;
+ } else
direct_gbpages = 0;
-#endif
}
struct map_range {
@@ -149,8 +154,6 @@ struct map_range {
unsigned page_size_mask;
};
-static int page_size_mask;
-
static void __init probe_page_size_mask(void)
{
init_gbpages();
@@ -161,8 +164,6 @@ static void __init probe_page_size_mask(void)
* This will simplify cpa(), which otherwise needs to support splitting
* large pages into small in interrupt context, etc.
*/
- if (direct_gbpages)
- page_size_mask |= 1 << PG_LEVEL_1G;
if (cpu_has_pse)
page_size_mask |= 1 << PG_LEVEL_2M;
#endif
diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
index 536ea2f..070b7c2 100644
--- a/arch/x86/mm/pageattr.c
+++ b/arch/x86/mm/pageattr.c
@@ -81,11 +81,9 @@ void arch_report_meminfo(struct seq_file *m)
seq_printf(m, "DirectMap4M: %8lu kB\n",
direct_pages_count[PG_LEVEL_2M] << 12);
#endif
-#ifdef CONFIG_X86_64
if (direct_gbpages)
seq_printf(m, "DirectMap1G: %8lu kB\n",
direct_pages_count[PG_LEVEL_1G] << 20);
-#endif
}
#else
static inline void split_page_count(int level) { }
--
2.2.2