Message-ID: <20180711124008.GF2070@MiWiFi-R3L-srv>
Date: Wed, 11 Jul 2018 20:40:08 +0800
From: Baoquan He <bhe@...hat.com>
To: Chao Fan <fanc.fnst@...fujitsu.com>, akpm@...ux-foundation.org,
linux-mm@...ck.org
Cc: linux-kernel@...r.kernel.org, x86@...nel.org,
yasu.isimatu@...il.com, keescook@...omium.org,
indou.takao@...fujitsu.com, caoj.fnst@...fujitsu.com,
douly.fnst@...fujitsu.com, mhocko@...e.com, vbabka@...e.cz,
mgorman@...hsingularity.net
Subject: Re: Bug report about KASLR and ZONE_MOVABLE
Please try this v3 patch:
From 9850d3de9c02e570dc7572069a9749a8add4c4c7 Mon Sep 17 00:00:00 2001
From: Baoquan He <bhe@...hat.com>
Date: Wed, 11 Jul 2018 20:31:51 +0800
Subject: [PATCH v3] mm, page_alloc: find movable zone after kernel text
In find_zone_movable_pfns_for_nodes(), the position of the kernel text is
not considered when looking for the starting PFN of the movable zone on
each node. KASLR may place the kernel after the PFN where the movable zone
begins, leaving kernel text inside ZONE_MOVABLE. Fix it by starting the
search for the movable zone after the kernel text on that node.
Signed-off-by: Baoquan He <bhe@...hat.com>
---
mm/page_alloc.c | 20 +++++++++++++++-----
1 file changed, 15 insertions(+), 5 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 1521100..390eb35 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -6547,7 +6547,7 @@ static unsigned long __init early_calculate_totalpages(void)
 static void __init find_zone_movable_pfns_for_nodes(void)
 {
 	int i, nid;
-	unsigned long usable_startpfn;
+	unsigned long usable_startpfn, real_startpfn;
 	unsigned long kernelcore_node, kernelcore_remaining;
 	/* save the state before borrow the nodemask */
 	nodemask_t saved_node_state = node_states[N_MEMORY];
@@ -6681,10 +6681,20 @@ static void __init find_zone_movable_pfns_for_nodes(void)
 			if (start_pfn >= end_pfn)
 				continue;
 
+			/*
+			 * KASLR may put the kernel near the tail of node
+			 * memory. Start the search after the kernel on
+			 * that node to find the PFN where the zone begins.
+			 */
+			if (pfn_to_nid(PFN_UP(__pa_symbol(_etext))) == i)
+				real_startpfn = max(usable_startpfn,
+						PFN_UP(__pa_symbol(_etext)));
+			else
+				real_startpfn = usable_startpfn;
 			/* Account for what is only usable for kernelcore */
-			if (start_pfn < usable_startpfn) {
+			if (start_pfn < real_startpfn) {
 				unsigned long kernel_pages;
-				kernel_pages = min(end_pfn, usable_startpfn)
+				kernel_pages = min(end_pfn, real_startpfn)
 							- start_pfn;
 
 				kernelcore_remaining -= min(kernel_pages,
@@ -6693,7 +6703,7 @@ static void __init find_zone_movable_pfns_for_nodes(void)
 							required_kernelcore);
 
 				/* Continue if range is now fully accounted */
-				if (end_pfn <= usable_startpfn) {
+				if (end_pfn <= real_startpfn) {
 					/*
 					 * Push zone_movable_pfn to the end so
@@ -6704,7 +6714,7 @@ static void __init find_zone_movable_pfns_for_nodes(void)
 					 */
 					zone_movable_pfn[nid] = end_pfn;
 					continue;
 				}
-				start_pfn = usable_startpfn;
+				start_pfn = real_startpfn;
 			}
 
 			/*
--
2.1.0