Message-ID: <55E47B4D.1050103@gmail.com>
Date: Mon, 31 Aug 2015 11:05:33 -0500
From: Stuart Hayes <stuart.w.hayes@...il.com>
To: tglx@...utronix.de, mingo@...hat.com,
"H. Peter Anvin" <hpa@...or.com>
Cc: linux-kernel@...r.kernel.org, x86@...nel.org, prarit@...hat.com
Subject: Fwd: [PATCH] x86: Use larger chunks in mtrr_cleanup
Increase the range of chunk sizes tried in mtrr_cleanup() so that it can
map large memory configurations into the available MTRRs.

Currently, mtrr_cleanup() will fail with large memory configurations,
because it limits chunk_size to 2GB, which means that each MTRR can only
cover 2GB of memory. With a memory size of, say, 256GB, and ten variable
MTRRs (as some recent Intel CPUs have), it is not possible to set up
the MTRRs to cover all of memory.
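For illustration only (not part of the patch), a rough back-of-the-envelope
check of that limit; the 256GB memory size and ten variable MTRRs are just
the example figures above, and this does not model the actual mtrr_cleanup()
layout logic:

	#include <stdio.h>

	int main(void)
	{
		unsigned long long mem   = 256ULL << 30;	/* 256GB, as in the example */
		unsigned long long chunk = 2ULL << 30;		/* current 2GB chunk_size cap */
		int var_mtrrs = 10;				/* variable MTRRs on the CPU */

		/* Worst case, each chunk needs its own variable MTRR. */
		unsigned long long chunks_needed = mem / chunk;	/* 128 */

		printf("chunks needed: %llu, MTRRs available: %d -> %s\n",
		       chunks_needed, var_mtrrs,
		       chunks_needed > var_mtrrs ? "cannot cover" : "ok");
		return 0;
	}

Raising the chunk_size cap to 1<<address_bits lets the cleanup code try much
larger chunks, so a configuration like this can be covered with the MTRRs
available.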
Signed-off-by: Stuart Hayes <stuart.w.hayes@...il.com>
---
--- linux-4.2-rc7/arch/x86/kernel/cpu/mtrr/cleanup.c.orig 2015-08-16 18:34:13.000000000 -0500
+++ linux-4.2-rc7/arch/x86/kernel/cpu/mtrr/cleanup.c 2015-08-27 12:29:51.908579247 -0500
@@ -517,10 +517,11 @@ struct mtrr_cleanup_result {
 
 /*
  * gran_size: 64K, 128K, 256K, 512K, 1M, 2M, ..., 2G
- * chunk size: gran_size, ..., 2G
- * so we need (1+16)*8
+ * chunk size: gran_size, ..., 2G, ..., 1<<address_bits
+ * (for 32 address bits, we need 136)
+ * (for 40 address bits, we need 264)
  */
-#define NUM_RESULT	136
+#define NUM_RESULT	264
 #define PSHIFT		(PAGE_SHIFT - 10)
 
 static struct mtrr_cleanup_result __initdata result[NUM_RESULT];
@@ -751,7 +752,7 @@ int __init mtrr_cleanup(unsigned address
 
 	memset(result, 0, sizeof(result));
 	for (gran_size = (1ULL<<16); gran_size < (1ULL<<32); gran_size <<= 1) {
-		for (chunk_size = gran_size; chunk_size < (1ULL<<32);
+		for (chunk_size = gran_size; chunk_size < (1ULL<<address_bits);
 		     chunk_size <<= 1) {
 
 			if (i >= NUM_RESULT)
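As a sanity check on the new NUM_RESULT value, the number of
(gran_size, chunk_size) combinations the loops above will try can be
counted with a small standalone sketch (not kernel code, it just mirrors
the loop bounds from the patch):

	#include <stdio.h>

	int main(void)
	{
		unsigned address_bits = 40;	/* e.g. a CPU with 40 physical address bits */
		unsigned count = 0;
		unsigned long long gran_size, chunk_size;

		/* Same bounds as the mtrr_cleanup() loops after this patch. */
		for (gran_size = 1ULL << 16; gran_size < (1ULL << 32); gran_size <<= 1)
			for (chunk_size = gran_size; chunk_size < (1ULL << address_bits);
			     chunk_size <<= 1)
				count++;

		printf("%u\n", count);	/* 264 for 40 bits, 136 for 32 bits */
		return 0;
	}

For address_bits = 32 this still gives 136, so the result table only grows
for CPUs with more than 32 physical address bits.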
--