Message-Id: <20190507053826.31622-71-sashal@kernel.org>
Date: Tue, 7 May 2019 01:38:00 -0400
From: Sasha Levin <sashal@...nel.org>
To: linux-kernel@...r.kernel.org, stable@...r.kernel.org
Cc: Michal Hocko <mhocko@...e.com>,
Robert Shteynfeld <robert.shteynfeld@...il.com>,
stable@...nel.org, Linus Torvalds <torvalds@...ux-foundation.org>,
Sasha Levin <alexander.levin@...rosoft.com>, linux-mm@...ck.org
Subject: [PATCH AUTOSEL 4.14 71/95] Revert "mm, memory_hotplug: initialize struct pages for the full memory section"

From: Michal Hocko <mhocko@...e.com>

[ Upstream commit 4aa9fc2a435abe95a1e8d7f8c7b3d6356514b37a ]

This reverts commit 2830bf6f05fb3e05bc4743274b806c821807a684.
The underlying assumption that one sparse section belongs to a single
NUMA node doesn't really hold. Robert Shteynfeld has reported a boot
failure. The boot log was not captured, but his memory layout is as
follows:

Early memory node ranges
node 1: [mem 0x0000000000001000-0x0000000000090fff]
node 1: [mem 0x0000000000100000-0x00000000dbdf8fff]
node 1: [mem 0x0000000100000000-0x0000001423ffffff]
node 0: [mem 0x0000001424000000-0x0000002023ffffff]

This means that node0 starts in the middle of a memory section which is
also in node1. memmap_init_zone tries to initialize the padding of a
section even when it is outside of the given pfn range, because there
are code paths (e.g. memory hotplug) which assume that the full memory
section is always initialized.
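
Not part of the patch, but to make the overlap concrete: a minimal
userspace sketch of the sparse-section arithmetic, assuming the common
x86_64 configuration of 4 KiB pages and 128 MiB sections (so
PAGES_PER_SECTION == 1 << 15). The helper mirrors what the kernel's
pfn_to_section_nr() computes:

    #include <stdio.h>

    #define PAGE_SHIFT        12
    #define SECTION_SIZE_BITS 27
    #define PFN_SECTION_SHIFT (SECTION_SIZE_BITS - PAGE_SHIFT)

    /* same shift the kernel uses to map a pfn to its sparse section */
    static unsigned long pfn_to_section_nr(unsigned long pfn)
    {
            return pfn >> PFN_SECTION_SHIFT;
    }

    int main(void)
    {
            /* last byte of node 1 and first byte of node 0, taken from
             * the layout above and converted to page frame numbers */
            unsigned long node1_last_pfn  = 0x1423ffffffUL >> PAGE_SHIFT;
            unsigned long node0_first_pfn = 0x1424000000UL >> PAGE_SHIFT;

            printf("node1 last pfn  0x%lx -> section %lu\n",
                   node1_last_pfn, pfn_to_section_nr(node1_last_pfn));
            printf("node0 first pfn 0x%lx -> section %lu\n",
                   node0_first_pfn, pfn_to_section_nr(node0_first_pfn));
            return 0;
    }

Both pfns land in section 644 under these assumptions, so padding
either node's zone out to the section boundary walks into struct pages
that belong to the other node.
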
In this particular case, though, such a range is already initialized
and most likely already managed by the page allocator. Scribbling over
those pages corrupts the internal state and likely blows up when any of
those pages gets used.
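
As an illustration only (a toy userspace model with made-up names, not
the real buddy allocator): once a page sits on a free list, re-running
its init wipes the linkage and flags the allocator relies on, which is
the kind of scribbling the revert avoids.

    #include <stdio.h>
    #include <string.h>

    /* toy stand-in for struct page: a flag word plus free-list linkage */
    struct toy_page {
            unsigned long flags;
            struct toy_page *next;
    };

    static struct toy_page pages[4];
    static struct toy_page *free_list;

    /* "fresh" page: zeroed flags, no linkage (rough __init_single_page) */
    static void toy_init_single_page(struct toy_page *p)
    {
            memset(p, 0, sizeof(*p));
    }

    static unsigned long free_count(void)
    {
            unsigned long n = 0;
            struct toy_page *p;

            for (p = free_list; p; p = p->next)
                    n++;
            return n;
    }

    int main(void)
    {
            int i;

            /* early init: pages are initialized and freed to the allocator */
            for (i = 0; i < 4; i++) {
                    toy_init_single_page(&pages[i]);
                    pages[i].next = free_list;
                    free_list = &pages[i];
            }
            printf("free pages before: %lu\n", free_count());    /* 4 */

            /* later, "section padding" re-initializes a page the
             * allocator already owns: its ->next is cleared and the
             * tail of the free list is silently lost */
            toy_init_single_page(&pages[2]);
            printf("free pages after:  %lu\n", free_count());    /* 2 */
            return 0;
    }

In the real kernel the clobbered fields are page flags, node/zone links
and free-list metadata, which is why the corruption tends to show up
only later, when one of those pages gets allocated.
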
Reported-by: Robert Shteynfeld <robert.shteynfeld@...il.com>
Fixes: 2830bf6f05fb ("mm, memory_hotplug: initialize struct pages for the full memory section")
Cc: stable@...nel.org
Signed-off-by: Michal Hocko <mhocko@...e.com>
Signed-off-by: Linus Torvalds <torvalds@...ux-foundation.org>
Signed-off-by: Sasha Levin <alexander.levin@...rosoft.com>
---
mm/page_alloc.c | 12 ------------
1 file changed, 12 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 16c20d9e771f..923deb33bf34 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5348,18 +5348,6 @@ void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
__init_single_pfn(pfn, zone, nid);
}
}
-#ifdef CONFIG_SPARSEMEM
- /*
- * If the zone does not span the rest of the section then
- * we should at least initialize those pages. Otherwise we
- * could blow up on a poisoned page in some paths which depend
- * on full sections being initialized (e.g. memory hotplug).
- */
- while (end_pfn % PAGES_PER_SECTION) {
- __init_single_page(pfn_to_page(end_pfn), end_pfn, zone, nid);
- end_pfn++;
- }
-#endif
}
static void __meminit zone_init_free_lists(struct zone *zone)
--
2.20.1